Next we will need a specialized tokenizer for this model. This one will try to use the [spaCy](https://spacy.io/) and [ftfy](https://pypi.org/project/ftfy/) libraries if they are installed, or else it will fall back to BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most use cases).
from transformers import OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
Apache-2.0
16_nlp_with_rnns_and_attention.ipynb
otamilocintra/ml2gh
Now let's use the tokenizer to tokenize and encode the prompt text:
prompt_text = "This royal throne of kings, this sceptred isle"
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False,
                                  return_tensors="tf")
encoded_prompt
Easy! Next, let's use the model to generate text after the prompt. We will generate 5 different sentences, each starting with the prompt text, followed by 40 additional tokens. For an explanation of what all the hyperparameters do, make sure to check out this great [blog post](https://huggingface.co/blog/how-to-generate) by Patrick von Platen (from Hugging Face). You can play around with the hyperparameters to try to obtain better results.
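Among these hyperparameters, `top_p=0.9` enables nucleus sampling: at each step, sampling is restricted to the smallest set of tokens whose probabilities sum to at least `top_p`. Independent of the `transformers` API, the filtering step can be sketched in plain NumPy (the helper name `top_p_filter` is ours, purely illustrative):

```python
import numpy as np

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; zero out the rest and renormalize."""
    order = np.argsort(probs)[::-1]        # token ids sorted by probability, descending
    cumulative = np.cumsum(probs[order])
    # first position where the cumulative mass reaches top_p
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(top_p_filter(probs, top_p=0.9))  # the 0.05 token is dropped, the rest renormalized
```

With `top_p=0.9` the lowest-probability token is excluded here and the remaining mass (0.95) is rescaled to 1, which is why moderate `top_p` values cut off the unreliable tail without fixing the candidate count the way `top_k` does.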
num_sequences = 5
length = 40

generated_sequences = model.generate(
    input_ids=encoded_prompt,
    do_sample=True,
    max_length=length + len(encoded_prompt[0]),
    temperature=1.0,
    top_k=0,
    top_p=0.9,
    repetition_penalty=1.0,
    num_return_sequences=num_sequences,
)
generated_sequences
Now let's decode the generated sequences and print them:
for sequence in generated_sequences:
    text = tokenizer.decode(sequence, clean_up_tokenization_spaces=True)
    print(text)
    print("-" * 80)
this royal throne of kings, this sceptred isle. even if someone had given them permission, even if it were required, they would never have been allowed to live through the hell they've survived.' 'they couldn't have known that.
--------------------------------------------------------------------------------
this royal throne of kings, this sceptred isle and these people are royalty.' then the mute prince and prince edward broke off and went to their rooms. the talk passed again between the princes and the guards and the princess was of great
--------------------------------------------------------------------------------
this royal throne of kings, this sceptred isle has its own highness, an alatte that waits to save you. in this kingdom your people must emulate the kings of the realm. in this kingdom your kin should be saved from this pit and
--------------------------------------------------------------------------------
this royal throne of kings, this sceptred isle belongs to me. " " the great throne of penvynne? " " indeed, " said the king with a nod of his head. " this world was once composed of a magical
--------------------------------------------------------------------------------
this royal throne of kings, this sceptred isle is empty. this is a modern - day fedaykin court, a place where kings are governed, not emperors and judges. i don't see any sign of life that is not their own
--------------------------------------------------------------------------------
Notebook to verify the calculations of our simulator. First we import the required libraries:
# import standard libraries
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from scipy.signal import freqs, periodogram, cheby1
import numpy as np

# import quantum libraries
import qutip
from itertools import product
from numpy import array, kron
from qmldataset import pauli_operators, create_custom_simulator, run_experiment
2021-09-26 16:34:01.309496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
MIT
simulation/verification.ipynb
rajibchakravorty/QDataSet
Step 1: Create a simulator. We supply the parameters and create a simulator. Here we create a 1-qubit experiment with control on the X-axis and Type 1 noise on the Z-axis.
dimension = 2
evolution_time = 1
num_time_steps = 1024
omega = 12
dynamic_operators = [0.5 * pauli_operators[1]]
static_operators = [0.5 * pauli_operators[3] * omega]
noise_operators = [0.5 * pauli_operators[3]]
measurement_operators = pauli_operators[1:]
initial_states = [
    np.array([[0.5, 0.5], [0.5, 0.5]]),
    np.array([[0.5, -0.5], [-0.5, 0.5]]),
    np.array([[0.5, -0.5j], [0.5j, 0.5]]),
    np.array([[0.5, 0.5j], [-0.5j, 0.5]]),
    np.array([[1, 0], [0, 0]]),
    np.array([[0, 0], [0, 1]])
]
num_realizations = 200
num_pulses = 5
noise_profile = ['Type 1']
distortion = True

simulator_with_distortion = create_custom_simulator(
    evolution_time=evolution_time,
    num_time_steps=num_time_steps,
    dimension=dimension,
    dynamic_operators=dynamic_operators,
    static_operators=static_operators,
    noise_operators=noise_operators,
    measurement_operators=measurement_operators,
    initial_states=initial_states,
    num_realizations=num_realizations,
    num_pulses=num_pulses,
    noise_profile=noise_profile,
    distortion=distortion,
    pulse_shape="Square"
)
2021-09-26 16:34:05-06 [TensorFlow startup logs condensed] CUDA libraries loaded (libcudart, libcublas, libcublasLt, libcufft, libcurand, libcusolver, libcusparse, libcudnn); found device 0: GeForce GTX 1050 Ti, compute capability 6.1, 3.94 GiB memory; created TensorFlow device /job:localhost/replica:0/task:0/device:GPU:0 with 3250 MB memory.
Now we run a single experiment. The experiment produces a result by simulating `num_realizations` noise realizations.
experiment_result = run_experiment(simulator=simulator_with_distortion)
2021-09-26 16:34:09-14 [TensorFlow logs condensed] MLIR optimization passes not enabled; CPU frequency 3094175000 Hz; cuBLAS, cuFFT and cuSOLVER libraries loaded.
Once it has run, let us inspect the experiment outcome.
# plot the pulses
plt.figure()
num_controls = len(experiment_result["sim_parameters"]["dynamic_operators"])
for idx in range(num_controls):
    plt.subplot(num_controls, 1, idx + 1)
    plt.plot(experiment_result["time_range"],
             experiment_result["pulses"][:, 0, idx], label="undistorted")
    plt.plot(experiment_result["time_range"],
             experiment_result["distorted_pulses"][:, 0, idx], label="distorted")
    plt.xlabel('t')
    plt.ylabel('f(t)')
    plt.grid()
    plt.legend()
print(experiment_result["pulse_parameters"])
[[-20.345783    0.12233578   0.1       ]
 [ 58.95591     0.27380085   0.1       ]
 [ 38.14025     0.4457677    0.1       ]
 [ 29.669308    0.61551726   0.1       ]
 [-74.14498     0.7660476    0.1       ]]
Display the distortion filter, if one exists.
if distortion:
    # display the distortion filter
    # (use separate names so the boolean flag `distortion` is not overwritten)
    b_f, a_f = cheby1(4, 0.1, 2 * np.pi * 20, analog=True)
    # evaluate the frequency response of the filter
    w, Hw = freqs(b_f, a_f)
    plt.figure(figsize=[15, 4])
    plt.subplot(1, 2, 1)
    plt.semilogx(w, 20 * np.log10(np.abs(Hw)))  # magnitude in dB (log10, not natural log)
    plt.xlabel(r'$\Omega$')
    plt.ylabel(r'$|H(\Omega)|$')
    plt.grid()
    plt.subplot(1, 2, 2)
    plt.semilogx(w, np.angle(Hw))
    plt.xlabel(r'$\Omega$')
    plt.ylabel(r'arg $H(\Omega)$')
    plt.grid()
Display the noise
# display the noise, if present
p = 0  # flag used by the 'Type 6' profile check (was previously read before assignment)
for idx_profile, profile in enumerate(experiment_result["sim_parameters"]["noise_profile"]):
    if profile in ['Type 2', 'Type 3', 'Type 4'] or (profile == 'Type 6' and p == 0):
        # estimate the correlation matrix of the noise
        correlation = 0
        for k in range(experiment_result["sim_parameters"]["num_realizations"]):
            correlation = correlation + (
                experiment_result["noise"][:, k:k+1, idx_profile]
                @ experiment_result["noise"][:, k:k+1, idx_profile].T)
        correlation = correlation / experiment_result["sim_parameters"]["num_realizations"]
        # plot the correlation matrix
        plt.figure()
        plt.matshow(correlation, 0)
        plt.colorbar()
        p = 0
    elif profile in ['Type 1', 'Type 5']:
        # estimate the PSD of the noise
        psd = 0
        for k in range(experiment_result["sim_parameters"]["num_realizations"]):
            f, Pxx = periodogram(
                experiment_result["noise"][:, k, idx_profile],
                experiment_result["sim_parameters"]["num_time_steps"]
                / experiment_result["sim_parameters"]["evolution_time"])
            psd = psd + Pxx
        psd = psd / experiment_result["sim_parameters"]["num_realizations"]
        plt.figure()
        plt.plot(f[f > 0], psd[1:])
        plt.xlabel('f')
        plt.ylabel('psd')
        plt.grid()
        p = 1
Comparing the output with `qutip`. Hint: they should be the same!
# load the initial states, measurement operators, and control Hamiltonian
initial_states = [qutip.Qobj(state)
                  for state in experiment_result["sim_parameters"]["initial_states"]]
measurements = [qutip.Qobj(op)
                for op in experiment_result["sim_parameters"]["measurement_operators"]]
H0 = [[qutip.Qobj(op), np.ones(len(experiment_result["sim_parameters"]["time_range"]))]
      for op in experiment_result["sim_parameters"]["static_operators"]] + \
     [[qutip.Qobj(op), experiment_result["distorted_pulses"][:, 0, idx]]
      for idx, op in enumerate(experiment_result["sim_parameters"]["dynamic_operators"])]

expectations = np.zeros((1,
                         experiment_result["sim_parameters"]["num_realizations"],
                         len(initial_states) * len(measurements)))
for idx_K in range(experiment_result["sim_parameters"]["num_realizations"]):
    H1 = [[qutip.Qobj(op), experiment_result["noise"][:, idx_K, idx]]
          for idx, op in enumerate(experiment_result["sim_parameters"]["noise_operators"])]
    results = [qutip.mesolve(H0 + H1, rho,
                             np.array(experiment_result["sim_parameters"]["time_range"]),
                             e_ops=measurements).expect
               for rho in initial_states]
    expectations[0, idx_K, :] = np.concatenate(
        [np.array([results[idx_rho][idx_M][-1] for idx_M in range(len(measurements))])
         for idx_rho in range(len(initial_states))])
    print(idx_K + 1, end="\r")

# plot the average expectation over all noise realizations for every observable
plt.figure()
plt.plot(np.average(expectations, 1)[0], label="qutip")
plt.plot(experiment_result["average_expectation"][0], label="tf")
plt.ylabel("Average observable value")
plt.xlabel("Observable index")
plt.gca().xaxis.set_major_locator(ticker.MaxNLocator(integer=True))
plt.legend()
plt.grid()

# plot all observables for a particular noise realization
idx_K = 10
plt.figure()
plt.plot(expectations[0, idx_K, :], label="qutip")
plt.plot(experiment_result["expectations"][idx_K, :], label="tf")
plt.ylabel("Observable value for realization %d" % idx_K)
plt.xlabel("Observable index")
plt.gca().xaxis.set_major_locator(ticker.MaxNLocator(integer=True))
plt.legend()
plt.grid()
Continuation of the class on the inverse transform method
# optimization library
from scipy import optimize
from scipy.stats import beta
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# %matplotlib notebook
%matplotlib inline
MIT
TEMA-2/Clase10_MetodoAceptacionRechazo.ipynb
AndresHdzJmz/SPF-2021-I
Function to build a histogram of discrete distributions
def Gen_distr_discreta(p_acum: 'cumulative probabilities of the distribution to generate',
                       indices: 'actual values to generate at random',
                       N: 'number of random values to generate'):
    U = np.random.rand(N)
    # dictionary mapping random indices to actual values
    rand2reales = {i: idx for i, idx in enumerate(indices)}
    # series of the random values
    y = pd.Series([sum([1 for p in p_acum if p < ui]) for ui in U]).map(rand2reales)
    return y

def plot_histogram_discrete(distribucion: 'signal of random variables from a given DISCRETE distribution',
                            label: 'legend label to show in the plot',
                            densidad: 'return the histogram as a density by default' = True):
    # len(set(distribucion)) counts the number of distinct values in 'distribucion'
    plt.figure(figsize=[10, 4])
    y, x = np.histogram(distribucion, bins=len(set(distribucion)), density=densidad)
    plt.bar(x[1:], y, label=label)
    plt.legend()
    plt.show()
Binomial example: the binomial distribution models the number of successes in n independent trials, each with success probability p. Generate a binomial random variable with parameters $n=10$ and $p=0.7$. Recall that
$$X\sim binomial(n,p) \longrightarrow p_i=P(X=i)=\frac{n!}{i!(n-i)!}p^i(1-p)^{n-i},\quad i=0,1,\cdots,n$$
> Exercise: prove the validity of the following equation
> $$p_{i+1}=\frac{n-i}{i+1}\frac{p}{1-p} p_i \longrightarrow \text{discuss the advantages of it being recursive}$$

**The algorithm we must implement:**
1. Generate $U$.
2. If $U<p_0$, set $X=0$ and stop.
3. If $p_0<U<p_0+p_1$, set $X=1$ and stop. $$\vdots$$
4. If $p_0+\cdots+p_{n-1}<U<p_0+\cdots+p_{n}$, set $X=n$ and stop.
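One way to verify the recursion in the exercise above (a sketch, not part of the original notes) is to take the ratio of consecutive probabilities:

$$\frac{p_{i+1}}{p_i}=\frac{n!\,p^{i+1}(1-p)^{n-i-1}\,/\,\big((i+1)!\,(n-i-1)!\big)}{n!\,p^{i}(1-p)^{n-i}\,/\,\big(i!\,(n-i)!\big)}=\frac{n-i}{i+1}\cdot\frac{p}{1-p}$$

So each $p_{i+1}$ follows from $p_i$ in constant time, with no factorials or large powers to evaluate, which is the advantage the exercise asks about.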
# optimized function that computes the cumulative probabilities
def P_acum_Binomial_o(n, p):
    # Pr holds P(X=0), ..., P(X=n-1); in Gen_distr_discreta, values of U above
    # the last cumulative probability map to the remaining outcome X=n
    Pr = np.zeros(n)
    Pr[0] = (1 - p)**n
    def pr(i):
        nonlocal Pr
        c = p / (1 - p)
        Pr[i + 1] = (c * (n - i) / (i + 1)) * Pr[i]
    # fill the vector Pr using a list comprehension
    [pr(i) for i in range(n - 1)]
    return np.cumsum(Pr)

# def D_binomial_intermedia(n, p, N):
n = 10; p = 0.7; N = 10**5
p_acum = P_acum_Binomial_o(n, p)

# using the inverse transform method
d_binomial = Gen_distr_discreta(p_acum, np.arange(0, n + 1), N)
plot_histogram_discrete(d_binomial, 'generated with the inverse transform')

# using numpy
d_bino_numpy = np.random.binomial(n, p, N)
plot_histogram_discrete(d_bino_numpy, 'generated with numpy')
Explore how the following command works
list(set(d_binomial))
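As a hint at what this expression does (our illustration, not from the notebook): `set` keeps each distinct value exactly once and `list` turns the result back into a sequence, so the command lists the distinct values that were actually generated.

```python
d = [3, 1, 3, 2, 1]        # stand-in for d_binomial
distinct = list(set(d))    # duplicates removed; order is not guaranteed
print(sorted(distinct))    # → [1, 2, 3]
```

This is why `len(set(distribucion))` was a convenient bin count in `plot_histogram_discrete`: it equals the number of distinct outcomes observed.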
> Exercise: Follow a procedure similar to the one shown for the binomial distribution, but in this case write code that generates Poisson random variables, whose probability function is given by:
> $$P(k,\lambda)=\frac{e^{-\lambda}\lambda^k}{k!}$$
> Prove mathematically that
> $$P(k+1)=\frac{\lambda}{k+1}P(k)$$
> and from this relation generate Poisson-distributed random variables using the inverse transform method. Link: https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Poisson

$\begin{aligned}\frac{p_{k+1}}{p_k}& = \frac{e^{-\lambda}\lambda^{k+1}/(k+1)!}{e^{-\lambda}\lambda^{k}/k!} \\& = \frac{\lambda}{k+1}\end{aligned}$

Acceptance-rejection method

This method arose because for many continuous distributions the inverse transform method is not feasible, since $x= F^{-1}(U)$ cannot be computed (or at least not computed efficiently). These methods are often considerably faster than the inverse transform method. We now illustrate the **acceptance-rejection method** with a simple example. Suppose we have the probability density function (PDF) of a beta distribution, given by:
$$f(x)=\frac{x^{\alpha_1-1}(1-x)^{\alpha_2-1}}{B(\alpha_1,\alpha_2)} \quad x\in[0,1] \longrightarrow B(\alpha_1,\alpha_2)\equiv \int_{0}^{1}x^{\alpha_1-1}(1-x)^{\alpha_2-1}dx, \ \alpha_1,\alpha_2>1$$
**Discuss the disadvantages.** We now define the method formally. Note that $f(x)$ must be a **bounded function with finite support** $a\leq x \leq b$, as shown below:
![imagen.png](attachment:imagen.png)
Given such an $f(x)$, the method proposes the following steps.

Assume we can find a function $t(x)$ such that
$$t(x)\geq f(x), \quad \forall x$$
Note that $t(x)\geq 0$ is not a PDF, because
$$\int_{-\infty}^{\infty}t(x)dx\geq \int_{-\infty}^{\infty}f(x)dx =1$$
Let
$$c=\int_{-\infty}^{\infty}t(x)dx\geq 1$$
and define $g(x)=t(x)/c \rightarrow g(x)$ **is a density**. It follows that
$$\frac{f(x)}{g(x)}\leq c,\quad \forall x$$
The following algorithm generates a random variable $X$ distributed according to the density $f(x)$:
1. Generate $R_1$ with density $g(x)$.
2. Generate $R_2 \rightarrow U \sim U(0,1)$, independent of the $R_1$ from step 1.
3. Evaluate the density at $R_1$.
4. Check whether the following inequality holds: $$R_2\leq \frac{f(R_1)}{t(R_1)}$$ If it does, take $X=R_1$; otherwise go back to step 1, as many times as necessary.
> It can be shown that $P(\text{accept})=1/c$.

Example 1: beta density
$$f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1}$$
a). Particular case: $\alpha=\beta=3$. With these values the PDF is
$$f(x)=30(x^2-2x^3+x^4)$$
# acceptance-rejection function using a for loop
def Acep_rechazo2(R2: 'variables distributed U~U(0,1)',
                  R1: 'variables distributed as g(x)',
                  f: 'target density to generate',
                  t: 'function that dominates f'):
    f_x = f(R1)
    t_x = t(R1)
    condition = R2 * t_x <= f_x
    for i in range(len(R1)):
        if condition[i]:
            plt.plot(R1[i], R2[i] * t_x[i], 'ob')
        else:
            plt.plot(R1[i], R2[i] * t_x[i], 'o')
    plt.show()

# acceptance-rejection function using a list comprehension
def Acep_rechazo(R2: 'variables distributed U~U(0,1)',
                 R1: 'variables distributed as g(x)',
                 f: 'target density to generate',
                 t: 'function that dominates f'):
    f_x = f(R1)
    t_x = t(R1)
    condition = R2 * t_x <= f_x
    # keep only the accepted points
    x = [R1[i] for i in range(len(R1)) if condition[i]]
    return x

# illustration of the acceptance-rejection method with a constant t(x)
N = 100
# target density
f = lambda x: 30 * (x**2 - 2 * x**3 + x**4)
# maximum of f
max_f = f(optimize.fmin(lambda x: -f(x), 0, disp=False))
# t -> constant function
t = lambda x: max_f * np.ones([len(x)])
# range over which the functions are plotted
x = np.arange(0, 1, 0.01)
print('The maximum of f is:', max_f)
# plot the functions
plt.plot(x, f(x), label='f(x)')
plt.plot(x, t(x), label='t(x)')
plt.legend()

# validation of the method
N = 20000  # number of points to simulate
# since t(x) is constant, we only need uniform U~(0,1) samples
R2 = np.random.rand(N)
R1 = np.random.uniform(0, 1, size=N)
x_r = Acep_rechazo(R2, R1, f, t)
y, x_n, _ = plt.hist(x_r, bins=50, density=True)
np.cumsum(y)[-1]
b). General case: $\alpha,\beta>0$
# parameters of the beta density
a = 10; b = 3
N = 500  # number of points
# target density
f = lambda x: beta.pdf(x, a, b)
x = np.arange(0, 1, 0.01)
plt.plot(x, f(x), 'k')
# find the maximum of f
c = float(f(optimize.fmin(lambda x: -f(x), 0, disp=False)))
print('The maximum of the function is:', c)
t = lambda x: c * np.ones(len(x))
plt.plot(x, f(x), 'k')
plt.plot(x, t(x), 'b')
R2 = np.random.rand(N)
R1 = np.random.rand(N)
Acep_rechazo(R2, R1, f, t)
plt.show()
The maximum of the function is: 3.5848168690361635
Homework 6. Suppose we want to generate random variables for the following density function
$$f(x)=30(x^2-2x^3+x^4)$$
Answer the following items:
1. Use $t(x)=a \sin(\pi x)$ as the function that dominates $f(x)$, where a is the maximum of $f(x)$, and plot both on the same figure to check that the condition $t(x)\geq f(x)$ actually holds.
2. Find the density function $g(x)$ as seen in class. Report all the calculations used to find it, in Markdown (LaTeX).
3. Use the function found in item 2 and the inverse transform method from class 9 to generate random variables that follow the distribution $g(x)$. **Note:** remember that the inverse transform method works with the cumulative distribution, not with its density. Again, report all calculations in Markdown (LaTeX).
4. Following item 3, generate 10000 random points distributed as $g(x)$ and compare with their histogram to validate that the generated points follow the desired distribution. The result should look like this:
![imagen.png](attachment:imagen.png)
5. Generate 500 random points using the acceptance-rejection method and the functions $f(x)$ and $t(x)$ to validate that all the previous calculations are correct. The result should look like this:
![imagen.png](attachment:imagen.png)
6. Compare the percentage of accepted points when $t(x)$ is constant and when $t(x)$ is a sinusoidal pulse. Draw conclusions.
7. Generate a random variable $X$ from the following PDF, using the acceptance-rejection method:
$$f(x)=20x(1-x)^3$$
8. Follow a procedure similar to the one shown for the binomial distribution, but in this case write code that generates Poisson random variables whose probability function is given by
> $$P(k,\lambda)=\frac{e^{-\lambda}\lambda^k}{k!}$$
> Prove mathematically that
> $$P(k+1)=\frac{\lambda}{k+1}P(k)$$
> and from this relation generate Poisson-distributed random variables using the inverse transform method. Link: https://es.wikipedia.org/wiki/Distribuci%C3%B3n_de_Poisson

Submission details: I will open a link on Canvas where you must upload your Python notebook with the solution to the problems, in pairs, no later than Tuesday, October 6, at 6 pm. Since it is in pairs, you must create a joint GitHub project and solve the exercises together, much as you did in homework 1. **You must include in the submission the GitHub link of the repository administrator**, which I will use for grading.

Homework solution
import numpy as np
import matplotlib.pyplot as plt
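The solution cell above only loads libraries. As a starting point for item 8, here is a minimal sketch (ours, not the course solution; the function name `gen_poisson_inversa` is illustrative) of the Poisson inverse transform using the recursion $P(k+1)=\frac{\lambda}{k+1}P(k)$, building the CDF on the fly instead of tabulating factorials:

```python
import numpy as np

def gen_poisson_inversa(lam, N):
    """Generate N Poisson(lam) variates via the inverse transform,
    walking up the CDF with the recursion P(k+1) = lam/(k+1) * P(k)."""
    U = np.random.rand(N)
    muestras = np.empty(N, dtype=int)
    for j, u in enumerate(U):
        k = 0
        p = np.exp(-lam)   # P(X = 0)
        F = p              # cumulative probability P(X <= k)
        while u > F:       # advance until the CDF exceeds u
            p *= lam / (k + 1)
            F += p
            k += 1
        muestras[j] = k
    return muestras
```

With a large sample the empirical mean and variance should both approach $\lambda$, which is one quick sanity check for the homework.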
Ridge with the highly correlated columns as xdata: build a model using the highly correlated features, as was done with elastic net
# load required packages
from sklearn.metrics import mean_squared_error
# from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

# drop the 2020 rows, where the targets q1-q5 are missing
a = df[0:-82]
# drop the police-station and year columns
a.drop(columns=['jur_stn', 'year'], inplace=True)
a_ = a.drop(columns=['q1', 'q2', 'q3', 'q4', 'q5'])

a_1 = a_
a_2 = a_
a_3 = a_
a_4 = a_
a_5 = a_

# standard scaling (fit one scaler per target's x data)
scaler1 = StandardScaler()
scaler1.fit(a_1)
a_s1 = scaler1.transform(a_1)
scaler2 = StandardScaler()
scaler2.fit(a_2)
a_s2 = scaler2.transform(a_2)
scaler3 = StandardScaler()
scaler3.fit(a_3)
a_s3 = scaler3.transform(a_3)
scaler4 = StandardScaler()
scaler4.fit(a_4)
a_s4 = scaler4.transform(a_4)
scaler5 = StandardScaler()
scaler5.fit(a_5)
a_s5 = scaler5.transform(a_5)

# x data: 2017-2018 for training, 2019 for validation
xtrain1 = a_s1[:-82]; xtest1 = a_s1[-82:]
xtrain2 = a_s2[:-82]; xtest2 = a_s2[-82:]
xtrain3 = a_s3[:-82]; xtest3 = a_s3[-82:]
xtrain4 = a_s4[:-82]; xtest4 = a_s4[-82:]
xtrain5 = a_s5[:-82]; xtest5 = a_s5[-82:]

# y data: 2017-2018 for training, 2019 for validation
train = a[:-82]
test = a[-82:]
ytrain1 = train['q1']; ytest1 = test['q1']
ytrain2 = train['q2']; ytest2 = test['q2']
ytrain3 = train['q3']; ytest3 = test['q3']
ytrain4 = train['q4']; ytest4 = test['q4']
ytrain5 = train['q5']; ytest5 = test['q5']

# grid-search package
from sklearn.model_selection import GridSearchCV
# param_grid for finding the alpha that gives the best Ridge performance
param_grid = {'alpha': np.linspace(0.001, 10.0, 10000)}
# configure the grid search
grid_search = GridSearchCV(Ridge(), param_grid=param_grid, cv=10, n_jobs=-1, scoring='r2')
# grid_search = GridSearchCV(Ridge(), param_grid=param_grid, cv=10, n_jobs=-1,
#                            scoring='neg_mean_absolute_error')
MIT
2.Model_code/Linear/ridge_grid_search.ipynb
PpangPpang93/Main_project_police
q1: theft and assault
# after the grid search, store the best-performing model in ridge1
grid_search.fit(xtrain1, ytrain1)
ridge1 = grid_search.best_estimator_

# report the MAE
y_pred1 = ridge1.predict(xtest1)
mean_absolute_error(ytest1, y_pred1)

# results
print('alpha =', ridge1.alpha)
print(ridge1.coef_)  # weights from the Ridge regression
print('strongest positive relation: ', a_1.columns[ridge1.coef_.argmax()],
      '\nstrongest negative relation: ', a_1.columns[ridge1.coef_.argmin()])
alpha = 10.0
[ 0.1858718 0.35944705 -0.18654878 -0.88752003 0.12550527 -0.00879849 0.23483285 -0.25640454 0.46469126 0.50380436 0.85031836 -0.29439753 -0.17669475 0.36080588 0.12274043 -0.78509441 -0.35994638 0.30965558 0.10691769 0.51217057 -0.14159903 0.05857899 0.07225416 -0.33213259 -0.30612817 -0.28521751 -0.02244433 0.04295547 0.23212191 -0.43095751 0.99679932 0.67407008 0.04719795 0.02266245 -0.26957157 0.1635311 0.27777131 0.18326995 -0.26235237 -0.253049 0.18470932 -0.32227054 -0.11586935 0.20246951 0.00619761 -0.08666845 -0.39903773 0.31214314 0.26027898 -0.34774913 -0.043771 0.18120353 0.03499807 -0.3080387 -0.20039405 0.26735254 -0.12913279 0.02064122 -0.08359852 0.24854472]
strongest positive relation: 외국인인구수대비검거수
strongest negative relation: vio_cnt
MIT
2.Model_code/Linear/ridge_grid_search.ipynb
PpangPpang93/Main_project_police
q2 Robbery and murder
# after the grid search, store the best-performing model in ridge2 grid_search.fit(xtrain2, ytrain2) ridge2 = grid_search.best_estimator_ # report the MAE y_pred2 = ridge2.predict(xtest2) mean_absolute_error(ytest2, y_pred2) # results print('alpha =', ridge2.alpha) print(ridge2.coef_) # weight values from the Ridge regression print('가장 강한 양의 상관관계: ',a_2.columns[ridge2.coef_.argmax()], '\n가장 강한 음의 상관관계: ', a_2.columns[ridge2.coef_.argmin()])
alpha = 4.566000000000001 [ 0.0860792 0.4554509 -0.42882456 -1.39474717 -0.04082773 0.67245513 0.44065367 -0.35757788 0.36512942 1.13101206 1.47174792 -0.50713043 -0.21119051 0.64726755 0.26600496 -1.10801628 -0.61302412 0.37013122 -0.29569089 0.61392221 -0.10098044 0.05371732 0.21285102 -0.44627423 -0.21074038 -0.11088239 -0.02215864 0.05477171 0.20860851 -0.79054475 0.7222386 1.1018597 -0.29250057 0.16025184 -0.52723943 0.31317932 0.42472581 0.10930088 -0.19559189 -0.25294844 0.02556162 -0.55658869 -0.10951789 0.24595558 0.02282998 -0.02243589 -0.49783467 0.394488 0.30491444 -0.03552448 -0.15389 0.13197577 0.11347984 -0.57586571 0.02578398 0.08091628 -0.29398064 0.16571328 -0.32548941 0.66159982] 가장 강한 양의 상관관계: for_u20 가장 강한 음의 상관관계: vio_cnt
MIT
2.Model_code/Linear/ridge_grid_search.ipynb
PpangPpang93/Main_project_police
q3 Traffic safety
# run the grid search and store the best model in ridge3 grid_search.fit(xtrain3, ytrain3) ridge3 = grid_search.best_estimator_ # then override it with a manually chosen alpha ridge3 = Ridge(alpha = 23) ridge3.fit(xtrain3, ytrain3) # report the MAE y_pred3 = ridge3.predict(xtest3) mean_absolute_error(ytest3, y_pred3) # results print('alpha =', ridge3.alpha) print(ridge3.coef_) # weight values from the Ridge regression print('가장 강한 양의 상관관계: ',a_3.columns[ridge3.coef_.argmax()], '\n가장 강한 음의 상관관계: ', a_3.columns[ridge3.coef_.argmin()])
alpha = 23 [ 0.60462781 -0.01836216 0.33480332 0.07597766 0.05094331 -0.01801049 0.12955529 -0.03098228 -0.02389251 0.32231312 0.36155864 -0.06357061 -0.17338605 0.10758736 -0.13071408 -0.23385295 -0.16651811 0.02234594 0.22738254 0.10056028 -0.08911714 -0.08948507 0.10180638 -0.05404081 0.32308556 -0.05319068 -0.00210497 0.03723783 0.19267206 0.11423388 0.02766457 0.52296002 -0.01345592 0.01568679 0.13478935 -0.21232877 -0.05575848 0.2075885 -0.04617672 -0.14361172 0.12973644 -0.55315098 -0.43058753 0.13637724 0.26248718 0.09187102 -0.28458975 0.21837938 0.20221084 0.26698087 -0.06346565 -0.14705863 0.1048014 -0.05481377 -0.00895966 0.02854603 -0.05046488 -0.08276508 0.03386703 0.16616376] 가장 강한 양의 상관관계: child 가장 강한 음의 상관관계: ofn_10
MIT
2.Model_code/Linear/ridge_grid_search.ipynb
PpangPpang93/Main_project_police
q4 Law-and-order compliance
# after the grid search, store the best-performing model in ridge4 grid_search.fit(xtrain4, ytrain4) ridge4 = grid_search.best_estimator_ # report the MAE y_pred4 = ridge4.predict(xtest4) mean_absolute_error(ytest4, y_pred4) # results print('alpha =', ridge4.alpha) print(ridge4.coef_) # weight values from the Ridge regression print('가장 강한 양의 상관관계: ',a_4.columns[ridge4.coef_.argmax()], '\n가장 강한 음의 상관관계: ', a_4.columns[ridge4.coef_.argmin()])
alpha = 10.0 [ 0.08422014 0.47496439 0.14834264 -0.31512474 0.90924745 0.72152184 -0.16558919 -0.19975831 -0.11015924 0.08660646 0.65342416 -0.33030785 -0.23131614 0.15442808 -0.02441021 -0.79282277 -0.16042672 0.17084599 0.28093741 0.34898235 -0.48400998 0.13322148 -0.25607675 -0.2081495 0.36480096 -0.54529326 -0.28543694 0.0490413 0.16311427 -0.4195897 0.58055968 -0.99569428 -0.04666948 0.45819622 -0.02352386 0.81593753 0.04091217 0.1235867 0.20385689 -0.3854837 0.20093322 -0.4193185 -0.48252598 0.45797532 0.03029416 0.08614095 -0.38301169 0.23971484 0.48417109 0.48881207 -0.05618829 -0.22196673 0.16491527 -0.3411742 0.12608131 -0.01440109 -0.33302747 -0.25247363 0.13361622 0.38994271] 가장 강한 양의 상관관계: mur_rob_cnt 가장 강한 음의 상관관계: 인구수대비경찰수
MIT
2.Model_code/Linear/ridge_grid_search.ipynb
PpangPpang93/Main_project_police
q5 Overall safety
# run the grid search and store the best model in ridge5 grid_search.fit(xtrain5, ytrain5) ridge5 = grid_search.best_estimator_ # then override it with a manually chosen alpha ridge5 = Ridge(alpha = 6.25) ridge5.fit(xtrain5, ytrain5) # report the MAE y_pred5 = ridge5.predict(xtest5) mean_absolute_error(ytest5, y_pred5) # results print('alpha =', ridge5.alpha) print(ridge5.coef_) # weight values from the Ridge regression print('가장 강한 양의 상관관계: ',a_5.columns[ridge5.coef_.argmax()], '\n가장 강한 음의 상관관계: ', a_5.columns[ridge5.coef_.argmin()])
alpha = 6.25 [ 0.06421938 0.4187727 0.01456397 -0.98143189 0.27599427 0.34433021 0.25962214 -0.1884639 0.36799516 0.6268941 0.78630458 -0.33307267 -0.33674456 0.32468536 -0.07203347 -0.87313847 -0.42267696 0.30238462 0.04100536 0.52735428 -0.07934469 -0.02891441 0.096653 -0.20399747 0.23988935 -0.51472116 -0.10890164 -0.01168908 0.09871749 -0.28613236 0.67575661 0.2864437 -0.0625666 0.13997026 -0.18723721 0.13303888 0.25390993 0.1604691 -0.01088797 -0.19593194 0.06767935 -0.38309175 -0.24771206 0.3046604 0.00549987 0.06127319 -0.37182862 0.27782696 0.29352346 -0.01387247 -0.08819971 -0.00617579 0.13260857 -0.30891844 0.00401649 0.11547854 -0.32287934 -0.00472153 -0.06880729 0.2824365 ] 가장 강한 양의 상관관계: for_u20 가장 강한 음의 상관관계: vio_cnt
MIT
2.Model_code/Linear/ridge_grid_search.ipynb
PpangPpang93/Main_project_police
Optional: Dropout

**Note**: This exercise is optional and using dropout is not required to pass beyond the linear regime of the scoring function for your fully connected network.

Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.

[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
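Before diving into the implementation, the core idea can be sketched in a few lines of NumPy. This is a hedged illustration, not the exercise solution, and it assumes `p` is the probability of keeping a unit (inverted dropout):

```python
import numpy as np

def dropout_train(x, p, rng):
    # Inverted dropout: keep each unit with probability p and rescale
    # by 1/p so the expected activation matches test time.
    mask = (rng.random(x.shape) < p) / p
    return x * mask, mask

def dropout_test(x):
    # At test time, inverted dropout is the identity; no rescaling needed.
    return x

rng = np.random.default_rng(0)
x = np.ones((1000, 100))
out, mask = dropout_train(x, p=0.5, rng=rng)
print(out.mean())         # close to x.mean() == 1.0
print((out == 0).mean())  # close to 1 - p == 0.5
```

A handy sanity check: at `p = 1.0` the train-mode layer reduces to the identity.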
# As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from exercise_code.classifiers.fc_net import * from exercise_code.data_utils import get_CIFAR10_data from exercise_code.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from exercise_code.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # suppress cluttering warnings in solutions import warnings warnings.filterwarnings('ignore') def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape)
_____no_output_____
RSA-MD
exercise_2/3_Dropout-optional.ipynb
nazmicancalik/i2dl
Dropout forward pass

In the file `exercise_code/layers.py`, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.

Once you have done so, run the cell below to test your implementation.
x = np.random.randn(500, 500) + 10 for p in [0.3, 0.6, 0.75]: out, _ = dropout_forward(x, {'mode': 'train', 'p': p}) out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p}) print('Running tests with p = ', p) print('Mean of input: ', x.mean()) print('Mean of train-time output: ', out.mean()) print('Mean of test-time output: ', out_test.mean()) print('Fraction of train-time output set to zero: ', (out == 0).mean()) print('Fraction of test-time output set to zero: ', (out_test == 0).mean()) print()
_____no_output_____
RSA-MD
exercise_2/3_Dropout-optional.ipynb
nazmicancalik/i2dl
Dropout backward pass

In the file `exercise_code/layers.py`, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
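Since train-mode dropout is an elementwise multiply by a cached mask, its gradient is the same multiply applied to the upstream gradient. A minimal sketch (variable names are illustrative, not the exercise API; `p` is again taken as the keep probability):

```python
import numpy as np

rng = np.random.default_rng(123)
x = rng.standard_normal((10, 10)) + 10
dout = rng.standard_normal(x.shape)
p = 0.8

mask = (rng.random(x.shape) < p) / p  # cached by the forward pass
out = x * mask                        # train-mode forward
dx = dout * mask                      # backward: same mask, applied to dout

# Dropped units receive exactly zero gradient; kept units are scaled by 1/p.
print(np.allclose(dx[mask == 0], 0))
```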
x = np.random.randn(10, 10) + 10 dout = np.random.randn(*x.shape) dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123} out, cache = dropout_forward(x, dropout_param) dx = dropout_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout) print('dx relative error: ', rel_error(dx, dx_num))
_____no_output_____
RSA-MD
exercise_2/3_Dropout-optional.ipynb
nazmicancalik/i2dl
Fully-connected nets with Dropout

In the file `exercise_code/classifiers/fc_net.py`, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the `dropout` parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
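The requested layer ordering — dropout immediately after every ReLU — looks like this for a single hidden layer (an illustrative sketch with made-up shapes, not the `FullyConnectedNet` internals):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H = 2, 15, 20
X = rng.standard_normal((N, D))
W, b = rng.standard_normal((D, H)) * 5e-2, np.zeros(H)
p = 0.5  # assumed keep probability

h = X @ W + b                         # affine
h = np.maximum(h, 0)                  # ReLU
mask = (rng.random(h.shape) < p) / p
h = h * mask                          # dropout directly after the nonlinearity
print(h.shape)  # (2, 20)
```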
N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for dropout in [0, 0.25, 0.5]: print('Running check with dropout = ', dropout) model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, weight_scale=5e-2, dtype=np.float64, dropout=dropout, seed=123) loss, grads = model.loss(X, y) print('Initial loss: ', loss) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) print()
_____no_output_____
RSA-MD
exercise_2/3_Dropout-optional.ipynb
nazmicancalik/i2dl
Regularization experiment

As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
# Train two identical nets, one with dropout and one without num_train = 500 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} dropout_choices = [0, 0.75] for dropout in dropout_choices: model = FullyConnectedNet([500], dropout=dropout) print("dropout = ", dropout) solver = Solver(model, small_data, num_epochs=25, batch_size=100, update_rule='adam', optim_config={ 'learning_rate': 5e-4, }, verbose=True, print_every=100) solver.train() solvers[dropout] = solver # Plot train and validation accuracies of the two models train_accs = [] val_accs = [] for dropout in dropout_choices: solver = solvers[dropout] train_accs.append(solver.train_acc_history[-1]) val_accs.append(solver.val_acc_history[-1]) plt.subplot(3, 1, 1) for dropout in dropout_choices: plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout) plt.title('Train accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) for dropout in dropout_choices: plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout) plt.title('Val accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend(ncol=2, loc='lower right') plt.gcf().set_size_inches(15, 15) plt.show()
_____no_output_____
RSA-MD
exercise_2/3_Dropout-optional.ipynb
nazmicancalik/i2dl
QDA
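The cell below fits QDA in R with caret, using leave-one-out cross-validation. For readers working in Python, the same idea can be sketched with scikit-learn on synthetic data (illustrative only — the notebook's actual features come from the PCA objects it loads):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Two synthetic Gaussian classes, 30 samples each, 3 features.
X = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(2, 1, (30, 3))])
y = np.array([0] * 30 + [1] * 30)

# LOOCV accuracy, mirroring caret's trControl = trainControl(method = "LOOCV").
loocv_acc = cross_val_score(QuadraticDiscriminantAnalysis(), X, y,
                            cv=LeaveOneOut()).mean()
print(loocv_acc)
```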
load("PCA.rda") load("DP.rda") suppressMessages(library(caret)) set.seed(201703) options(warn=-1) # QDA pca_qda_s = train(response~., data = pca_train, method = "qda", trControl = trainControl(method = "LOOCV")) pca_qda_te = predict(pca_qda_s, data.frame(pca_test_s)) pca_qda_ac = mean(pca_qda_te == golub_test_r) pca_qda_re = c(LOOCV = pca_qda_s$results$Accuracy, Test = pca_qda_ac) pca_qda_re
_____no_output_____
MIT
ReproducingMLpipelines/Paper6/ModelQDAPCA.ipynb
CompareML/AIM-Manuscript
!pwd
/content
MIT
Udacity Course.ipynb
jtkrohm/jt
print("JT")
JT
MIT
Udacity Course.ipynb
jtkrohm/jt
Dependencies
from openvaccine_scripts import * import warnings, json from sklearn.model_selection import KFold, StratifiedKFold import tensorflow.keras.layers as L import tensorflow.keras.backend as K from tensorflow.keras import optimizers, losses, Model from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau SEED = 0 seed_everything(SEED) warnings.filterwarnings('ignore')
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Model parameters
config = { "BATCH_SIZE": 64, "EPOCHS": 120, "LEARNING_RATE": 1e-3, "ES_PATIENCE": 10, "N_FOLDS": 5, "N_USED_FOLDS": 5, "PB_SEQ_LEN": 107, "PV_SEQ_LEN": 130, } with open('config.json', 'w') as json_file: json.dump(json.loads(json.dumps(config)), json_file) config
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Load data
database_base_path = '/kaggle/input/stanford-covid-vaccine/' train = pd.read_json(database_base_path + 'train.json', lines=True) test = pd.read_json(database_base_path + 'test.json', lines=True) print('Train samples: %d' % len(train)) display(train.head()) print(f'Test samples: {len(test)}') display(test.head())
Train samples: 2400
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Auxiliary functions
def get_dataset(x, y=None, sample_weights=None, labeled=True, shuffled=True, batch_size=32, buffer_size=-1, seed=0): input_map = {'inputs_seq': x['sequence'], 'inputs_struct': x['structure'], 'inputs_loop': x['predicted_loop_type'], 'inputs_bpps_max': x['bpps_max'], 'inputs_bpps_sum': x['bpps_sum'], 'inputs_bpps_mean': x['bpps_mean'], 'inputs_bpps_scaled': x['bpps_scaled']} if labeled: output_map = {'output_react': y['reactivity'], 'output_bg_ph': y['deg_Mg_pH10'], 'output_ph': y['deg_pH10'], 'output_mg_c': y['deg_Mg_50C'], 'output_c': y['deg_50C']} if sample_weights is not None: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map)) else: dataset = tf.data.Dataset.from_tensor_slices((input_map)) if shuffled: dataset = dataset.shuffle(2048, seed=seed) dataset = dataset.batch(batch_size) dataset = dataset.prefetch(buffer_size) return dataset
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Model
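The model below is compiled with an `MCRMSE` loss imported from `openvaccine_scripts`. For reference, mean columnwise RMSE — the metric that loss follows — can be sketched in NumPy (a sketch of the metric's definition, not the repository's Keras implementation):

```python
import numpy as np

def mcrmse(y_true, y_pred):
    # RMSE per target column, averaged across columns.
    rmse_per_col = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    return rmse_per_col.mean()

y_true = np.array([[0.0, 1.0], [2.0, 3.0]])
y_pred = np.array([[0.0, 1.0], [2.0, 1.0]])
print(mcrmse(y_true, y_pred))  # (0 + sqrt(2)) / 2 ≈ 0.7071
```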
def model_fn(hidden_dim=384, dropout=.5, pred_len=68, n_outputs=5): inputs_seq = L.Input(shape=(None, 1), name='inputs_seq') inputs_struct = L.Input(shape=(None, 1), name='inputs_struct') inputs_loop = L.Input(shape=(None, 1), name='inputs_loop') inputs_bpps_max = L.Input(shape=(None, 1), name='inputs_bpps_max') inputs_bpps_sum = L.Input(shape=(None, 1), name='inputs_bpps_sum') inputs_bpps_mean = L.Input(shape=(None, 1), name='inputs_bpps_mean') inputs_bpps_scaled = L.Input(shape=(None, 1), name='inputs_bpps_scaled') def _one_hot(x, num_classes): return K.squeeze(K.one_hot(K.cast(x, 'uint8'), num_classes=num_classes), axis=2) ohe_seq = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_seq)}, input_shape=(None, 1))(inputs_seq) ohe_struct = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_struct)}, input_shape=(None, 1))(inputs_struct) ohe_loop = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_loop)}, input_shape=(None, 1))(inputs_loop) # Conv block conv_seq = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(ohe_seq) conv_struct = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(ohe_struct) conv_loop = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(ohe_loop) conv_bpps_max = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_max) conv_bpps_sum = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_sum) conv_bpps_mean = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_mean) conv_bpps_scaled = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_scaled) x_concat = L.concatenate([conv_seq, conv_struct, conv_loop, conv_bpps_max, conv_bpps_sum, conv_bpps_mean, conv_bpps_scaled], axis=-1, name='conv_concatenate') # Recurrent block x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'))(x_concat) x_rec = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, 
return_sequences=True, kernel_initializer='orthogonal'))(x) x = L.Add()([x_rec, x]) x_rec = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'))(x) x = L.Add()([x_rec, x]) # Since we are only making predictions on the first part of each sequence, we have to truncate it x_truncated = x[:, :pred_len] output_react = L.Dense(1, activation='linear', name='output_react')(x_truncated) output_bg_ph = L.Dense(1, activation='linear', name='output_bg_ph')(x_truncated) output_ph = L.Dense(1, activation='linear', name='output_ph')(x_truncated) output_mg_c = L.Dense(1, activation='linear', name='output_mg_c')(x_truncated) output_c = L.Dense(1, activation='linear', name='output_c')(x_truncated) model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop, inputs_bpps_max, inputs_bpps_sum, inputs_bpps_mean, inputs_bpps_scaled], outputs=[output_react, output_bg_ph, output_ph, output_mg_c, output_c]) opt = optimizers.Adam(learning_rate=config['LEARNING_RATE']) model.compile(optimizer=opt, loss={'output_react': MCRMSE, 'output_bg_ph': MCRMSE, 'output_ph': MCRMSE, 'output_mg_c': MCRMSE, 'output_c': MCRMSE}, loss_weights={'output_react': 2., 'output_bg_ph': 2., 'output_ph': 1., 'output_mg_c': 2., 'output_c': 1.}) return model model = model_fn() model.summary()
Model: "functional_1" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== inputs_seq (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_struct (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_loop (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ lambda (Lambda) (None, None, 4) 0 inputs_seq[0][0] __________________________________________________________________________________________________ lambda_1 (Lambda) (None, None, 3) 0 inputs_struct[0][0] __________________________________________________________________________________________________ lambda_2 (Lambda) (None, None, 7) 0 inputs_loop[0][0] __________________________________________________________________________________________________ inputs_bpps_max (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_bpps_sum (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_bpps_mean (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ inputs_bpps_scaled (InputLayer) [(None, None, 1)] 0 __________________________________________________________________________________________________ conv1d (Conv1D) (None, None, 64) 1344 lambda[0][0] __________________________________________________________________________________________________ conv1d_1 (Conv1D) (None, None, 64) 1024 lambda_1[0][0] 
__________________________________________________________________________________________________ conv1d_2 (Conv1D) (None, None, 64) 2304 lambda_2[0][0] __________________________________________________________________________________________________ conv1d_3 (Conv1D) (None, None, 64) 384 inputs_bpps_max[0][0] __________________________________________________________________________________________________ conv1d_4 (Conv1D) (None, None, 64) 384 inputs_bpps_sum[0][0] __________________________________________________________________________________________________ conv1d_5 (Conv1D) (None, None, 64) 384 inputs_bpps_mean[0][0] __________________________________________________________________________________________________ conv1d_6 (Conv1D) (None, None, 64) 384 inputs_bpps_scaled[0][0] __________________________________________________________________________________________________ conv_concatenate (Concatenate) (None, None, 448) 0 conv1d[0][0] conv1d_1[0][0] conv1d_2[0][0] conv1d_3[0][0] conv1d_4[0][0] conv1d_5[0][0] conv1d_6[0][0] __________________________________________________________________________________________________ bidirectional (Bidirectional) (None, None, 768) 1921536 conv_concatenate[0][0] __________________________________________________________________________________________________ bidirectional_1 (Bidirectional) (None, None, 768) 2658816 bidirectional[0][0] __________________________________________________________________________________________________ add (Add) (None, None, 768) 0 bidirectional_1[0][0] bidirectional[0][0] __________________________________________________________________________________________________ bidirectional_2 (Bidirectional) (None, None, 768) 2658816 add[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, None, 768) 0 bidirectional_2[0][0] add[0][0] 
__________________________________________________________________________________________________ tf_op_layer_strided_slice (Tens [(None, None, 768)] 0 add_1[0][0] __________________________________________________________________________________________________ output_react (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_bg_ph (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_ph (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_mg_c (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] __________________________________________________________________________________________________ output_c (Dense) (None, None, 1) 769 tf_op_layer_strided_slice[0][0] ================================================================================================== Total params: 7,249,221 Trainable params: 7,249,221 Non-trainable params: 0 __________________________________________________________________________________________________
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Pre-process
# Add bpps as features bpps_max = [] bpps_sum = [] bpps_mean = [] bpps_scaled = [] bpps_nb_mean = 0.077522 # mean of bpps_nb across all training data bpps_nb_std = 0.08914 # std of bpps_nb across all training data for row in train.itertuples(): probability = np.load(f'{database_base_path}/bpps/{row.id}.npy') bpps_max.append(probability.max(-1).tolist()) bpps_sum.append((1-probability.sum(-1)).tolist()) bpps_mean.append((1-probability.mean(-1)).tolist()) # bpps nb bpps_nb = (probability > 0).sum(axis=0) / probability.shape[0] bpps_nb = (bpps_nb - bpps_nb_mean) / bpps_nb_std bpps_scaled.append(bpps_nb) train = train.assign(bpps_max=bpps_max, bpps_sum=bpps_sum, bpps_mean=bpps_mean, bpps_scaled=bpps_scaled) bpps_max = [] bpps_sum = [] bpps_mean = [] bpps_scaled = [] for row in test.itertuples(): probability = np.load(f'{database_base_path}/bpps/{row.id}.npy') bpps_max.append(probability.max(-1).tolist()) bpps_sum.append((1-probability.sum(-1)).tolist()) bpps_mean.append((1-probability.mean(-1)).tolist()) # bpps nb bpps_nb = (probability > 0).sum(axis=0) / probability.shape[0] bpps_nb = (bpps_nb - bpps_nb_mean) / bpps_nb_std bpps_scaled.append(bpps_nb) test = test.assign(bpps_max=bpps_max, bpps_sum=bpps_sum, bpps_mean=bpps_mean, bpps_scaled=bpps_scaled) feature_cols = ['sequence', 'structure', 'predicted_loop_type', 'bpps_max', 'bpps_sum', 'bpps_mean', 'bpps_scaled'] pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C'] encoder_list = [token2int_seq, token2int_struct, token2int_loop, None, None, None, None] public_test = test.query("seq_length == 107").copy() private_test = test.query("seq_length == 130").copy() x_test_public = get_features_dict(public_test, feature_cols, encoder_list, public_test.index) x_test_private = get_features_dict(private_test, feature_cols, encoder_list, private_test.index) # To use as stratified col train['signal_to_noise_int'] = train['signal_to_noise'].astype(int)
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Training
AUTO = tf.data.experimental.AUTOTUNE skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED) history_list = [] oof = train[['id', 'SN_filter', 'signal_to_noise'] + pred_cols].copy() oof_preds = np.zeros((len(train), 68, len(pred_cols))) test_public_preds = np.zeros((len(public_test), config['PB_SEQ_LEN'], len(pred_cols))) test_private_preds = np.zeros((len(private_test), config['PV_SEQ_LEN'], len(pred_cols))) for fold,(train_idx, valid_idx) in enumerate(skf.split(train['signal_to_noise_int'])): if fold >= config['N_USED_FOLDS']: break print(f'\nFOLD: {fold+1}') ### Create datasets x_train = get_features_dict(train, feature_cols, encoder_list, train_idx) x_valid = get_features_dict(train, feature_cols, encoder_list, valid_idx) y_train = get_targets_dict(train, pred_cols, train_idx) y_valid = get_targets_dict(train, pred_cols, valid_idx) w_train = np.log(train.iloc[train_idx]['signal_to_noise'].values+1.1)/2 w_valid = np.log(train.iloc[valid_idx]['signal_to_noise'].values+1.1)/2 train_ds = get_dataset(x_train, y_train, w_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) valid_ds = get_dataset(x_valid, y_valid, w_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) oof_ds = get_dataset(x_valid, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED) ### Model K.clear_session() model = model_fn() model_path = f'model_{fold}.h5' es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1) rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1) ### Train 
history = model.fit(train_ds, validation_data=valid_ds, callbacks=[es, rlrp], epochs=config['EPOCHS'], batch_size=config['BATCH_SIZE'], verbose=2).history history_list.append(history) # Save last model weights model.save_weights(model_path) ### Inference oof_ds_preds = np.array(model.predict(oof_ds)).reshape((len(pred_cols), len(valid_idx), 68)).transpose((1, 2, 0)) oof_preds[valid_idx] = oof_ds_preds # Short sequence (public test) model = model_fn(pred_len=config['PB_SEQ_LEN']) model.load_weights(model_path) test_public_ds_preds = np.array(model.predict(test_public_ds)).reshape((len(pred_cols), len(public_test), config['PB_SEQ_LEN'])).transpose((1, 2, 0)) test_public_preds += test_public_ds_preds * (1 / config['N_USED_FOLDS']) # Long sequence (private test) model = model_fn(pred_len=config['PV_SEQ_LEN']) model.load_weights(model_path) test_private_ds_preds = np.array(model.predict(test_private_ds)).reshape((len(pred_cols), len(private_test), config['PV_SEQ_LEN'])).transpose((1, 2, 0)) test_private_preds += test_private_ds_preds * (1 / config['N_USED_FOLDS'])
FOLD: 1 Epoch 1/120 30/30 - 7s - loss: 3.5876 - output_react_loss: 0.4014 - output_bg_ph_loss: 0.5248 - output_ph_loss: 0.5226 - output_mg_c_loss: 0.4188 - output_c_loss: 0.3750 - val_loss: 2.3896 - val_output_react_loss: 0.2423 - val_output_bg_ph_loss: 0.3301 - val_output_ph_loss: 0.3582 - val_output_mg_c_loss: 0.3026 - val_output_c_loss: 0.2814 Epoch 2/120 30/30 - 4s - loss: 2.3368 - output_react_loss: 0.2458 - output_bg_ph_loss: 0.3183 - output_ph_loss: 0.3391 - output_mg_c_loss: 0.2968 - output_c_loss: 0.2759 - val_loss: 2.1912 - val_output_react_loss: 0.2287 - val_output_bg_ph_loss: 0.2984 - val_output_ph_loss: 0.3103 - val_output_mg_c_loss: 0.2815 - val_output_c_loss: 0.2637 Epoch 3/120 30/30 - 4s - loss: 2.1930 - output_react_loss: 0.2331 - output_bg_ph_loss: 0.2995 - output_ph_loss: 0.3077 - output_mg_c_loss: 0.2797 - output_c_loss: 0.2607 - val_loss: 2.1003 - val_output_react_loss: 0.2207 - val_output_bg_ph_loss: 0.2852 - val_output_ph_loss: 0.2954 - val_output_mg_c_loss: 0.2694 - val_output_c_loss: 0.2543 Epoch 4/120 30/30 - 4s - loss: 2.1113 - output_react_loss: 0.2256 - output_bg_ph_loss: 0.2875 - output_ph_loss: 0.2958 - output_mg_c_loss: 0.2688 - output_c_loss: 0.2517 - val_loss: 2.0628 - val_output_react_loss: 0.2158 - val_output_bg_ph_loss: 0.2790 - val_output_ph_loss: 0.2873 - val_output_mg_c_loss: 0.2689 - val_output_c_loss: 0.2481 Epoch 5/120 30/30 - 4s - loss: 2.0656 - output_react_loss: 0.2222 - output_bg_ph_loss: 0.2797 - output_ph_loss: 0.2893 - output_mg_c_loss: 0.2634 - output_c_loss: 0.2458 - val_loss: 2.0033 - val_output_react_loss: 0.2101 - val_output_bg_ph_loss: 0.2707 - val_output_ph_loss: 0.2806 - val_output_mg_c_loss: 0.2582 - val_output_c_loss: 0.2447 Epoch 6/120 30/30 - 4s - loss: 2.0200 - output_react_loss: 0.2169 - output_bg_ph_loss: 0.2731 - output_ph_loss: 0.2834 - output_mg_c_loss: 0.2571 - output_c_loss: 0.2423 - val_loss: 1.9670 - val_output_react_loss: 0.2091 - val_output_bg_ph_loss: 0.2643 - val_output_ph_loss: 0.2761 - 
val_output_mg_c_loss: 0.2527 - val_output_c_loss: 0.2388 Epoch 7/120 30/30 - 4s - loss: 1.9784 - output_react_loss: 0.2111 - output_bg_ph_loss: 0.2672 - output_ph_loss: 0.2794 - output_mg_c_loss: 0.2514 - output_c_loss: 0.2395 - val_loss: 1.9392 - val_output_react_loss: 0.2027 - val_output_bg_ph_loss: 0.2598 - val_output_ph_loss: 0.2728 - val_output_mg_c_loss: 0.2518 - val_output_c_loss: 0.2378 Epoch 8/120 30/30 - 4s - loss: 1.9386 - output_react_loss: 0.2071 - output_bg_ph_loss: 0.2625 - output_ph_loss: 0.2737 - output_mg_c_loss: 0.2455 - output_c_loss: 0.2344 - val_loss: 1.8829 - val_output_react_loss: 0.1985 - val_output_bg_ph_loss: 0.2537 - val_output_ph_loss: 0.2657 - val_output_mg_c_loss: 0.2406 - val_output_c_loss: 0.2314 Epoch 9/120 30/30 - 4s - loss: 1.8977 - output_react_loss: 0.2042 - output_bg_ph_loss: 0.2566 - output_ph_loss: 0.2687 - output_mg_c_loss: 0.2393 - output_c_loss: 0.2288 - val_loss: 1.8674 - val_output_react_loss: 0.2007 - val_output_bg_ph_loss: 0.2505 - val_output_ph_loss: 0.2620 - val_output_mg_c_loss: 0.2378 - val_output_c_loss: 0.2274 Epoch 10/120 30/30 - 4s - loss: 1.8620 - output_react_loss: 0.2009 - output_bg_ph_loss: 0.2526 - output_ph_loss: 0.2634 - output_mg_c_loss: 0.2335 - output_c_loss: 0.2246 - val_loss: 1.7981 - val_output_react_loss: 0.1917 - val_output_bg_ph_loss: 0.2430 - val_output_ph_loss: 0.2556 - val_output_mg_c_loss: 0.2259 - val_output_c_loss: 0.2214 Epoch 11/120 30/30 - 4s - loss: 1.8257 - output_react_loss: 0.1971 - output_bg_ph_loss: 0.2474 - output_ph_loss: 0.2592 - output_mg_c_loss: 0.2282 - output_c_loss: 0.2212 - val_loss: 1.7811 - val_output_react_loss: 0.1907 - val_output_bg_ph_loss: 0.2395 - val_output_ph_loss: 0.2524 - val_output_mg_c_loss: 0.2241 - val_output_c_loss: 0.2202 Epoch 12/120 30/30 - 4s - loss: 1.7962 - output_react_loss: 0.1954 - output_bg_ph_loss: 0.2432 - output_ph_loss: 0.2553 - output_mg_c_loss: 0.2234 - output_c_loss: 0.2169 - val_loss: 1.7353 - val_output_react_loss: 0.1875 - 
[… verbose per-epoch training log condensed …]
Fold 1: training loss fell steadily from 1.7619 (epoch 13) to 1.0535 (epoch 76). ReduceLROnPlateau cut the learning rate to 1e-4 at epoch 59, 1e-5 at epoch 71, and 1e-6 at epoch 76; early stopping then triggered at epoch 76, restoring the best weights (val_loss 1.3724, epoch 66).
Fold 2: training restarted and val_loss fell from 2.3399 (epoch 1) to 1.4221 (epoch 45); this excerpt is truncated mid-epoch 46.
0.1467 - output_c_loss: 0.1619 - val_loss: 1.4303 - val_output_react_loss: 0.1656 - val_output_bg_ph_loss: 0.1914 - val_output_ph_loss: 0.2012 - val_output_mg_c_loss: 0.1709 - val_output_c_loss: 0.1733 Epoch 47/120 30/30 - 4s - loss: 1.2501 - output_react_loss: 0.1438 - output_bg_ph_loss: 0.1645 - output_ph_loss: 0.1816 - output_mg_c_loss: 0.1453 - output_c_loss: 0.1614 - val_loss: 1.4307 - val_output_react_loss: 0.1647 - val_output_bg_ph_loss: 0.1925 - val_output_ph_loss: 0.2022 - val_output_mg_c_loss: 0.1704 - val_output_c_loss: 0.1734 Epoch 48/120 30/30 - 4s - loss: 1.2440 - output_react_loss: 0.1427 - output_bg_ph_loss: 0.1635 - output_ph_loss: 0.1804 - output_mg_c_loss: 0.1451 - output_c_loss: 0.1610 - val_loss: 1.4270 - val_output_react_loss: 0.1643 - val_output_bg_ph_loss: 0.1927 - val_output_ph_loss: 0.2016 - val_output_mg_c_loss: 0.1691 - val_output_c_loss: 0.1731 Epoch 49/120 30/30 - 4s - loss: 1.2308 - output_react_loss: 0.1412 - output_bg_ph_loss: 0.1621 - output_ph_loss: 0.1796 - output_mg_c_loss: 0.1426 - output_c_loss: 0.1595 - val_loss: 1.4181 - val_output_react_loss: 0.1639 - val_output_bg_ph_loss: 0.1909 - val_output_ph_loss: 0.2019 - val_output_mg_c_loss: 0.1673 - val_output_c_loss: 0.1721 Epoch 50/120 30/30 - 4s - loss: 1.2201 - output_react_loss: 0.1394 - output_bg_ph_loss: 0.1601 - output_ph_loss: 0.1780 - output_mg_c_loss: 0.1418 - output_c_loss: 0.1596 - val_loss: 1.4202 - val_output_react_loss: 0.1648 - val_output_bg_ph_loss: 0.1908 - val_output_ph_loss: 0.1992 - val_output_mg_c_loss: 0.1688 - val_output_c_loss: 0.1722 Epoch 51/120 30/30 - 4s - loss: 1.2128 - output_react_loss: 0.1396 - output_bg_ph_loss: 0.1582 - output_ph_loss: 0.1767 - output_mg_c_loss: 0.1410 - output_c_loss: 0.1586 - val_loss: 1.4129 - val_output_react_loss: 0.1630 - val_output_bg_ph_loss: 0.1901 - val_output_ph_loss: 0.1986 - val_output_mg_c_loss: 0.1676 - val_output_c_loss: 0.1728 Epoch 52/120 30/30 - 4s - loss: 1.1992 - output_react_loss: 0.1369 - output_bg_ph_loss: 
0.1572 - output_ph_loss: 0.1754 - output_mg_c_loss: 0.1391 - output_c_loss: 0.1575 - val_loss: 1.4104 - val_output_react_loss: 0.1624 - val_output_bg_ph_loss: 0.1908 - val_output_ph_loss: 0.1987 - val_output_mg_c_loss: 0.1671 - val_output_c_loss: 0.1710 Epoch 53/120 30/30 - 4s - loss: 1.1897 - output_react_loss: 0.1354 - output_bg_ph_loss: 0.1559 - output_ph_loss: 0.1739 - output_mg_c_loss: 0.1382 - output_c_loss: 0.1568 - val_loss: 1.4089 - val_output_react_loss: 0.1623 - val_output_bg_ph_loss: 0.1895 - val_output_ph_loss: 0.1988 - val_output_mg_c_loss: 0.1672 - val_output_c_loss: 0.1720 Epoch 54/120 30/30 - 4s - loss: 1.1787 - output_react_loss: 0.1347 - output_bg_ph_loss: 0.1537 - output_ph_loss: 0.1731 - output_mg_c_loss: 0.1364 - output_c_loss: 0.1561 - val_loss: 1.4114 - val_output_react_loss: 0.1631 - val_output_bg_ph_loss: 0.1891 - val_output_ph_loss: 0.1992 - val_output_mg_c_loss: 0.1684 - val_output_c_loss: 0.1711 Epoch 55/120 30/30 - 4s - loss: 1.1707 - output_react_loss: 0.1338 - output_bg_ph_loss: 0.1526 - output_ph_loss: 0.1727 - output_mg_c_loss: 0.1350 - output_c_loss: 0.1553 - val_loss: 1.4131 - val_output_react_loss: 0.1642 - val_output_bg_ph_loss: 0.1899 - val_output_ph_loss: 0.1999 - val_output_mg_c_loss: 0.1669 - val_output_c_loss: 0.1713 Epoch 56/120 30/30 - 4s - loss: 1.1640 - output_react_loss: 0.1334 - output_bg_ph_loss: 0.1515 - output_ph_loss: 0.1714 - output_mg_c_loss: 0.1341 - output_c_loss: 0.1547 - val_loss: 1.4110 - val_output_react_loss: 0.1626 - val_output_bg_ph_loss: 0.1892 - val_output_ph_loss: 0.2002 - val_output_mg_c_loss: 0.1675 - val_output_c_loss: 0.1722 Epoch 57/120 30/30 - 4s - loss: 1.1600 - output_react_loss: 0.1331 - output_bg_ph_loss: 0.1498 - output_ph_loss: 0.1712 - output_mg_c_loss: 0.1343 - output_c_loss: 0.1544 - val_loss: 1.4093 - val_output_react_loss: 0.1622 - val_output_bg_ph_loss: 0.1895 - val_output_ph_loss: 0.2006 - val_output_mg_c_loss: 0.1668 - val_output_c_loss: 0.1716 Epoch 58/120 Epoch 00058: 
ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 30/30 - 4s - loss: 1.1474 - output_react_loss: 0.1314 - output_bg_ph_loss: 0.1481 - output_ph_loss: 0.1701 - output_mg_c_loss: 0.1322 - output_c_loss: 0.1538 - val_loss: 1.4093 - val_output_react_loss: 0.1614 - val_output_bg_ph_loss: 0.1886 - val_output_ph_loss: 0.1999 - val_output_mg_c_loss: 0.1685 - val_output_c_loss: 0.1724 Epoch 59/120 30/30 - 4s - loss: 1.1176 - output_react_loss: 0.1275 - output_bg_ph_loss: 0.1443 - output_ph_loss: 0.1662 - output_mg_c_loss: 0.1284 - output_c_loss: 0.1510 - val_loss: 1.3902 - val_output_react_loss: 0.1602 - val_output_bg_ph_loss: 0.1875 - val_output_ph_loss: 0.1962 - val_output_mg_c_loss: 0.1648 - val_output_c_loss: 0.1690 Epoch 60/120 30/30 - 4s - loss: 1.1001 - output_react_loss: 0.1252 - output_bg_ph_loss: 0.1424 - output_ph_loss: 0.1636 - output_mg_c_loss: 0.1260 - output_c_loss: 0.1493 - val_loss: 1.3827 - val_output_react_loss: 0.1596 - val_output_bg_ph_loss: 0.1860 - val_output_ph_loss: 0.1955 - val_output_mg_c_loss: 0.1637 - val_output_c_loss: 0.1687 Epoch 61/120 30/30 - 4s - loss: 1.0945 - output_react_loss: 0.1250 - output_bg_ph_loss: 0.1411 - output_ph_loss: 0.1633 - output_mg_c_loss: 0.1252 - output_c_loss: 0.1487 - val_loss: 1.3811 - val_output_react_loss: 0.1592 - val_output_bg_ph_loss: 0.1861 - val_output_ph_loss: 0.1953 - val_output_mg_c_loss: 0.1634 - val_output_c_loss: 0.1685 Epoch 62/120 30/30 - 4s - loss: 1.0906 - output_react_loss: 0.1246 - output_bg_ph_loss: 0.1407 - output_ph_loss: 0.1625 - output_mg_c_loss: 0.1246 - output_c_loss: 0.1482 - val_loss: 1.3849 - val_output_react_loss: 0.1597 - val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1955 - val_output_mg_c_loss: 0.1638 - val_output_c_loss: 0.1688 Epoch 63/120 30/30 - 4s - loss: 1.0896 - output_react_loss: 0.1246 - output_bg_ph_loss: 0.1402 - output_ph_loss: 0.1625 - output_mg_c_loss: 0.1246 - output_c_loss: 0.1482 - val_loss: 1.3850 - val_output_react_loss: 0.1597 - 
val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1955 - val_output_mg_c_loss: 0.1639 - val_output_c_loss: 0.1689 Epoch 64/120 30/30 - 4s - loss: 1.0856 - output_react_loss: 0.1240 - output_bg_ph_loss: 0.1397 - output_ph_loss: 0.1618 - output_mg_c_loss: 0.1241 - output_c_loss: 0.1481 - val_loss: 1.3833 - val_output_react_loss: 0.1593 - val_output_bg_ph_loss: 0.1866 - val_output_ph_loss: 0.1956 - val_output_mg_c_loss: 0.1636 - val_output_c_loss: 0.1688 Epoch 65/120 30/30 - 4s - loss: 1.0841 - output_react_loss: 0.1239 - output_bg_ph_loss: 0.1396 - output_ph_loss: 0.1618 - output_mg_c_loss: 0.1237 - output_c_loss: 0.1477 - val_loss: 1.3844 - val_output_react_loss: 0.1594 - val_output_bg_ph_loss: 0.1868 - val_output_ph_loss: 0.1957 - val_output_mg_c_loss: 0.1638 - val_output_c_loss: 0.1688 Epoch 66/120 Epoch 00066: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 30/30 - 4s - loss: 1.0825 - output_react_loss: 0.1236 - output_bg_ph_loss: 0.1392 - output_ph_loss: 0.1619 - output_mg_c_loss: 0.1238 - output_c_loss: 0.1474 - val_loss: 1.3842 - val_output_react_loss: 0.1594 - val_output_bg_ph_loss: 0.1869 - val_output_ph_loss: 0.1953 - val_output_mg_c_loss: 0.1638 - val_output_c_loss: 0.1688 Epoch 67/120 30/30 - 4s - loss: 1.0786 - output_react_loss: 0.1230 - output_bg_ph_loss: 0.1389 - output_ph_loss: 0.1613 - output_mg_c_loss: 0.1230 - output_c_loss: 0.1475 - val_loss: 1.3829 - val_output_react_loss: 0.1594 - val_output_bg_ph_loss: 0.1866 - val_output_ph_loss: 0.1953 - val_output_mg_c_loss: 0.1635 - val_output_c_loss: 0.1686 Epoch 68/120 30/30 - 4s - loss: 1.0785 - output_react_loss: 0.1231 - output_bg_ph_loss: 0.1389 - output_ph_loss: 0.1613 - output_mg_c_loss: 0.1231 - output_c_loss: 0.1472 - val_loss: 1.3836 - val_output_react_loss: 0.1594 - val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1954 - val_output_mg_c_loss: 0.1637 - val_output_c_loss: 0.1686 Epoch 69/120 30/30 - 4s - loss: 1.0787 - output_react_loss: 0.1231 - output_bg_ph_loss: 
0.1390 - output_ph_loss: 0.1613 - output_mg_c_loss: 0.1230 - output_c_loss: 0.1471 - val_loss: 1.3834 - val_output_react_loss: 0.1594 - val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1954 - val_output_mg_c_loss: 0.1636 - val_output_c_loss: 0.1686 Epoch 70/120 30/30 - 4s - loss: 1.0777 - output_react_loss: 0.1229 - output_bg_ph_loss: 0.1388 - output_ph_loss: 0.1613 - output_mg_c_loss: 0.1229 - output_c_loss: 0.1471 - val_loss: 1.3826 - val_output_react_loss: 0.1593 - val_output_bg_ph_loss: 0.1865 - val_output_ph_loss: 0.1952 - val_output_mg_c_loss: 0.1635 - val_output_c_loss: 0.1686 Epoch 71/120 Restoring model weights from the end of the best epoch. Epoch 00071: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 30/30 - 4s - loss: 1.0761 - output_react_loss: 0.1230 - output_bg_ph_loss: 0.1383 - output_ph_loss: 0.1610 - output_mg_c_loss: 0.1226 - output_c_loss: 0.1472 - val_loss: 1.3837 - val_output_react_loss: 0.1595 - val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1953 - val_output_mg_c_loss: 0.1636 - val_output_c_loss: 0.1686 Epoch 00071: early stopping FOLD: 3 Epoch 1/120 30/30 - 6s - loss: 3.6279 - output_react_loss: 0.4708 - output_bg_ph_loss: 0.4401 - output_ph_loss: 0.5027 - output_mg_c_loss: 0.4080 - output_c_loss: 0.4872 - val_loss: 2.3493 - val_output_react_loss: 0.2399 - val_output_bg_ph_loss: 0.3151 - val_output_ph_loss: 0.3559 - val_output_mg_c_loss: 0.2959 - val_output_c_loss: 0.2916 Epoch 2/120 30/30 - 4s - loss: 2.3306 - output_react_loss: 0.2492 - output_bg_ph_loss: 0.3149 - output_ph_loss: 0.3309 - output_mg_c_loss: 0.2956 - output_c_loss: 0.2803 - val_loss: 2.1497 - val_output_react_loss: 0.2243 - val_output_bg_ph_loss: 0.2951 - val_output_ph_loss: 0.3007 - val_output_mg_c_loss: 0.2761 - val_output_c_loss: 0.2581 Epoch 3/120 30/30 - 4s - loss: 2.1989 - output_react_loss: 0.2361 - output_bg_ph_loss: 0.2977 - output_ph_loss: 0.3080 - output_mg_c_loss: 0.2802 - output_c_loss: 0.2626 - val_loss: 2.0834 - 
val_output_react_loss: 0.2165 - val_output_bg_ph_loss: 0.2847 - val_output_ph_loss: 0.2912 - val_output_mg_c_loss: 0.2693 - val_output_c_loss: 0.2512 Epoch 4/120 30/30 - 4s - loss: 2.1209 - output_react_loss: 0.2280 - output_bg_ph_loss: 0.2863 - output_ph_loss: 0.2969 - output_mg_c_loss: 0.2705 - output_c_loss: 0.2542 - val_loss: 2.0157 - val_output_react_loss: 0.2112 - val_output_bg_ph_loss: 0.2745 - val_output_ph_loss: 0.2839 - val_output_mg_c_loss: 0.2586 - val_output_c_loss: 0.2433 Epoch 5/120 30/30 - 4s - loss: 2.0648 - output_react_loss: 0.2220 - output_bg_ph_loss: 0.2786 - output_ph_loss: 0.2894 - output_mg_c_loss: 0.2631 - output_c_loss: 0.2480 - val_loss: 1.9764 - val_output_react_loss: 0.2073 - val_output_bg_ph_loss: 0.2704 - val_output_ph_loss: 0.2772 - val_output_mg_c_loss: 0.2526 - val_output_c_loss: 0.2388 Epoch 6/120 30/30 - 4s - loss: 2.0171 - output_react_loss: 0.2174 - output_bg_ph_loss: 0.2718 - output_ph_loss: 0.2831 - output_mg_c_loss: 0.2561 - output_c_loss: 0.2433 - val_loss: 1.9254 - val_output_react_loss: 0.2017 - val_output_bg_ph_loss: 0.2617 - val_output_ph_loss: 0.2729 - val_output_mg_c_loss: 0.2456 - val_output_c_loss: 0.2346 Epoch 7/120 30/30 - 4s - loss: 1.9804 - output_react_loss: 0.2134 - output_bg_ph_loss: 0.2666 - output_ph_loss: 0.2790 - output_mg_c_loss: 0.2510 - output_c_loss: 0.2393 - val_loss: 1.8904 - val_output_react_loss: 0.1986 - val_output_bg_ph_loss: 0.2558 - val_output_ph_loss: 0.2685 - val_output_mg_c_loss: 0.2407 - val_output_c_loss: 0.2317 Epoch 8/120 30/30 - 4s - loss: 1.9470 - output_react_loss: 0.2095 - output_bg_ph_loss: 0.2620 - output_ph_loss: 0.2744 - output_mg_c_loss: 0.2469 - output_c_loss: 0.2359 - val_loss: 1.8707 - val_output_react_loss: 0.1958 - val_output_bg_ph_loss: 0.2540 - val_output_ph_loss: 0.2636 - val_output_mg_c_loss: 0.2396 - val_output_c_loss: 0.2282 Epoch 9/120 30/30 - 4s - loss: 1.9133 - output_react_loss: 0.2058 - output_bg_ph_loss: 0.2575 - output_ph_loss: 0.2693 - output_mg_c_loss: 
0.2427 - output_c_loss: 0.2320 - val_loss: 1.8330 - val_output_react_loss: 0.1929 - val_output_bg_ph_loss: 0.2480 - val_output_ph_loss: 0.2612 - val_output_mg_c_loss: 0.2325 - val_output_c_loss: 0.2251 Epoch 10/120 30/30 - 4s - loss: 1.8769 - output_react_loss: 0.2033 - output_bg_ph_loss: 0.2529 - output_ph_loss: 0.2644 - output_mg_c_loss: 0.2360 - output_c_loss: 0.2282 - val_loss: 1.7937 - val_output_react_loss: 0.1910 - val_output_bg_ph_loss: 0.2442 - val_output_ph_loss: 0.2535 - val_output_mg_c_loss: 0.2252 - val_output_c_loss: 0.2194 Epoch 11/120 30/30 - 4s - loss: 1.8367 - output_react_loss: 0.1997 - output_bg_ph_loss: 0.2481 - output_ph_loss: 0.2597 - output_mg_c_loss: 0.2289 - output_c_loss: 0.2235 - val_loss: 1.7648 - val_output_react_loss: 0.1890 - val_output_bg_ph_loss: 0.2392 - val_output_ph_loss: 0.2512 - val_output_mg_c_loss: 0.2202 - val_output_c_loss: 0.2167 Epoch 12/120 30/30 - 4s - loss: 1.8048 - output_react_loss: 0.1977 - output_bg_ph_loss: 0.2433 - output_ph_loss: 0.2558 - output_mg_c_loss: 0.2239 - output_c_loss: 0.2192 - val_loss: 1.7376 - val_output_react_loss: 0.1860 - val_output_bg_ph_loss: 0.2374 - val_output_ph_loss: 0.2471 - val_output_mg_c_loss: 0.2156 - val_output_c_loss: 0.2126 Epoch 13/120 30/30 - 4s - loss: 1.7767 - output_react_loss: 0.1948 - output_bg_ph_loss: 0.2394 - output_ph_loss: 0.2518 - output_mg_c_loss: 0.2202 - output_c_loss: 0.2162 - val_loss: 1.7016 - val_output_react_loss: 0.1839 - val_output_bg_ph_loss: 0.2292 - val_output_ph_loss: 0.2427 - val_output_mg_c_loss: 0.2118 - val_output_c_loss: 0.2093 Epoch 14/120 30/30 - 4s - loss: 1.7474 - output_react_loss: 0.1928 - output_bg_ph_loss: 0.2342 - output_ph_loss: 0.2482 - output_mg_c_loss: 0.2160 - output_c_loss: 0.2130 - val_loss: 1.6829 - val_output_react_loss: 0.1834 - val_output_bg_ph_loss: 0.2253 - val_output_ph_loss: 0.2390 - val_output_mg_c_loss: 0.2099 - val_output_c_loss: 0.2067 Epoch 15/120 30/30 - 4s - loss: 1.7325 - output_react_loss: 0.1918 - output_bg_ph_loss: 
0.2336 - output_ph_loss: 0.2449 - output_mg_c_loss: 0.2133 - output_c_loss: 0.2102 - val_loss: 1.6513 - val_output_react_loss: 0.1831 - val_output_bg_ph_loss: 0.2217 - val_output_ph_loss: 0.2346 - val_output_mg_c_loss: 0.2026 - val_output_c_loss: 0.2020 Epoch 16/120 30/30 - 4s - loss: 1.6941 - output_react_loss: 0.1881 - output_bg_ph_loss: 0.2277 - output_ph_loss: 0.2409 - output_mg_c_loss: 0.2077 - output_c_loss: 0.2060 - val_loss: 1.6195 - val_output_react_loss: 0.1785 - val_output_bg_ph_loss: 0.2184 - val_output_ph_loss: 0.2301 - val_output_mg_c_loss: 0.1983 - val_output_c_loss: 0.1989 Epoch 17/120 30/30 - 4s - loss: 1.6725 - output_react_loss: 0.1860 - output_bg_ph_loss: 0.2254 - output_ph_loss: 0.2373 - output_mg_c_loss: 0.2044 - output_c_loss: 0.2035 - val_loss: 1.6132 - val_output_react_loss: 0.1794 - val_output_bg_ph_loss: 0.2200 - val_output_ph_loss: 0.2279 - val_output_mg_c_loss: 0.1947 - val_output_c_loss: 0.1969 Epoch 18/120 30/30 - 4s - loss: 1.6494 - output_react_loss: 0.1849 - output_bg_ph_loss: 0.2210 - output_ph_loss: 0.2333 - output_mg_c_loss: 0.2015 - output_c_loss: 0.2011 - val_loss: 1.5786 - val_output_react_loss: 0.1762 - val_output_bg_ph_loss: 0.2117 - val_output_ph_loss: 0.2239 - val_output_mg_c_loss: 0.1929 - val_output_c_loss: 0.1933 Epoch 19/120 30/30 - 4s - loss: 1.6229 - output_react_loss: 0.1826 - output_bg_ph_loss: 0.2178 - output_ph_loss: 0.2306 - output_mg_c_loss: 0.1967 - output_c_loss: 0.1980 - val_loss: 1.5810 - val_output_react_loss: 0.1745 - val_output_bg_ph_loss: 0.2117 - val_output_ph_loss: 0.2241 - val_output_mg_c_loss: 0.1959 - val_output_c_loss: 0.1928 Epoch 20/120 30/30 - 4s - loss: 1.6070 - output_react_loss: 0.1805 - output_bg_ph_loss: 0.2163 - output_ph_loss: 0.2276 - output_mg_c_loss: 0.1949 - output_c_loss: 0.1959 - val_loss: 1.5546 - val_output_react_loss: 0.1742 - val_output_bg_ph_loss: 0.2108 - val_output_ph_loss: 0.2193 - val_output_mg_c_loss: 0.1880 - val_output_c_loss: 0.1892 Epoch 21/120 30/30 - 4s - loss: 
1.5869 - output_react_loss: 0.1796 - output_bg_ph_loss: 0.2125 - output_ph_loss: 0.2249 - output_mg_c_loss: 0.1919 - output_c_loss: 0.1940 - val_loss: 1.5329 - val_output_react_loss: 0.1726 - val_output_bg_ph_loss: 0.2073 - val_output_ph_loss: 0.2173 - val_output_mg_c_loss: 0.1846 - val_output_c_loss: 0.1866 Epoch 22/120 30/30 - 4s - loss: 1.5733 - output_react_loss: 0.1779 - output_bg_ph_loss: 0.2106 - output_ph_loss: 0.2236 - output_mg_c_loss: 0.1904 - output_c_loss: 0.1919 - val_loss: 1.5222 - val_output_react_loss: 0.1710 - val_output_bg_ph_loss: 0.2052 - val_output_ph_loss: 0.2158 - val_output_mg_c_loss: 0.1834 - val_output_c_loss: 0.1872 Epoch 23/120 30/30 - 4s - loss: 1.5579 - output_react_loss: 0.1772 - output_bg_ph_loss: 0.2084 - output_ph_loss: 0.2210 - output_mg_c_loss: 0.1876 - output_c_loss: 0.1904 - val_loss: 1.5246 - val_output_react_loss: 0.1722 - val_output_bg_ph_loss: 0.2057 - val_output_ph_loss: 0.2136 - val_output_mg_c_loss: 0.1846 - val_output_c_loss: 0.1860 Epoch 24/120 30/30 - 4s - loss: 1.5364 - output_react_loss: 0.1741 - output_bg_ph_loss: 0.2062 - output_ph_loss: 0.2176 - output_mg_c_loss: 0.1850 - output_c_loss: 0.1881 - val_loss: 1.5003 - val_output_react_loss: 0.1682 - val_output_bg_ph_loss: 0.2027 - val_output_ph_loss: 0.2137 - val_output_mg_c_loss: 0.1802 - val_output_c_loss: 0.1843 Epoch 25/120 30/30 - 4s - loss: 1.5273 - output_react_loss: 0.1735 - output_bg_ph_loss: 0.2043 - output_ph_loss: 0.2179 - output_mg_c_loss: 0.1834 - output_c_loss: 0.1871 - val_loss: 1.4845 - val_output_react_loss: 0.1662 - val_output_bg_ph_loss: 0.1994 - val_output_ph_loss: 0.2108 - val_output_mg_c_loss: 0.1792 - val_output_c_loss: 0.1840 Epoch 26/120 30/30 - 4s - loss: 1.5129 - output_react_loss: 0.1714 - output_bg_ph_loss: 0.2027 - output_ph_loss: 0.2159 - output_mg_c_loss: 0.1817 - output_c_loss: 0.1855 - val_loss: 1.4768 - val_output_react_loss: 0.1652 - val_output_bg_ph_loss: 0.1994 - val_output_ph_loss: 0.2103 - val_output_mg_c_loss: 0.1780 - 
val_output_c_loss: 0.1812 Epoch 27/120 30/30 - 4s - loss: 1.4976 - output_react_loss: 0.1711 - output_bg_ph_loss: 0.2004 - output_ph_loss: 0.2128 - output_mg_c_loss: 0.1791 - output_c_loss: 0.1837 - val_loss: 1.4699 - val_output_react_loss: 0.1654 - val_output_bg_ph_loss: 0.1987 - val_output_ph_loss: 0.2090 - val_output_mg_c_loss: 0.1759 - val_output_c_loss: 0.1810 Epoch 28/120 30/30 - 4s - loss: 1.4888 - output_react_loss: 0.1696 - output_bg_ph_loss: 0.1992 - output_ph_loss: 0.2127 - output_mg_c_loss: 0.1777 - output_c_loss: 0.1830 - val_loss: 1.4813 - val_output_react_loss: 0.1656 - val_output_bg_ph_loss: 0.2007 - val_output_ph_loss: 0.2127 - val_output_mg_c_loss: 0.1770 - val_output_c_loss: 0.1819 Epoch 29/120 30/30 - 4s - loss: 1.4731 - output_react_loss: 0.1675 - output_bg_ph_loss: 0.1969 - output_ph_loss: 0.2094 - output_mg_c_loss: 0.1768 - output_c_loss: 0.1813 - val_loss: 1.4557 - val_output_react_loss: 0.1649 - val_output_bg_ph_loss: 0.1959 - val_output_ph_loss: 0.2065 - val_output_mg_c_loss: 0.1743 - val_output_c_loss: 0.1790 Epoch 30/120 30/30 - 4s - loss: 1.4539 - output_react_loss: 0.1659 - output_bg_ph_loss: 0.1936 - output_ph_loss: 0.2075 - output_mg_c_loss: 0.1740 - output_c_loss: 0.1794 - val_loss: 1.4551 - val_output_react_loss: 0.1638 - val_output_bg_ph_loss: 0.1959 - val_output_ph_loss: 0.2085 - val_output_mg_c_loss: 0.1741 - val_output_c_loss: 0.1788 Epoch 31/120 30/30 - 4s - loss: 1.4469 - output_react_loss: 0.1637 - output_bg_ph_loss: 0.1931 - output_ph_loss: 0.2079 - output_mg_c_loss: 0.1732 - output_c_loss: 0.1790 - val_loss: 1.4536 - val_output_react_loss: 0.1646 - val_output_bg_ph_loss: 0.1944 - val_output_ph_loss: 0.2072 - val_output_mg_c_loss: 0.1746 - val_output_c_loss: 0.1792 Epoch 32/120 30/30 - 4s - loss: 1.4325 - output_react_loss: 0.1638 - output_bg_ph_loss: 0.1906 - output_ph_loss: 0.2051 - output_mg_c_loss: 0.1706 - output_c_loss: 0.1775 - val_loss: 1.4452 - val_output_react_loss: 0.1623 - val_output_bg_ph_loss: 0.1946 - 
val_output_ph_loss: 0.2037 - val_output_mg_c_loss: 0.1747 - val_output_c_loss: 0.1784 Epoch 33/120 30/30 - 4s - loss: 1.4220 - output_react_loss: 0.1622 - output_bg_ph_loss: 0.1893 - output_ph_loss: 0.2040 - output_mg_c_loss: 0.1692 - output_c_loss: 0.1767 - val_loss: 1.4594 - val_output_react_loss: 0.1629 - val_output_bg_ph_loss: 0.1978 - val_output_ph_loss: 0.2033 - val_output_mg_c_loss: 0.1769 - val_output_c_loss: 0.1812 Epoch 34/120 30/30 - 4s - loss: 1.4184 - output_react_loss: 0.1625 - output_bg_ph_loss: 0.1887 - output_ph_loss: 0.2031 - output_mg_c_loss: 0.1680 - output_c_loss: 0.1769 - val_loss: 1.4377 - val_output_react_loss: 0.1619 - val_output_bg_ph_loss: 0.1934 - val_output_ph_loss: 0.2034 - val_output_mg_c_loss: 0.1733 - val_output_c_loss: 0.1773 Epoch 35/120 30/30 - 4s - loss: 1.4003 - output_react_loss: 0.1603 - output_bg_ph_loss: 0.1857 - output_ph_loss: 0.2006 - output_mg_c_loss: 0.1664 - output_c_loss: 0.1748 - val_loss: 1.4235 - val_output_react_loss: 0.1610 - val_output_bg_ph_loss: 0.1923 - val_output_ph_loss: 0.2016 - val_output_mg_c_loss: 0.1700 - val_output_c_loss: 0.1751 Epoch 36/120 30/30 - 4s - loss: 1.3808 - output_react_loss: 0.1586 - output_bg_ph_loss: 0.1832 - output_ph_loss: 0.1980 - output_mg_c_loss: 0.1633 - output_c_loss: 0.1724 - val_loss: 1.4229 - val_output_react_loss: 0.1599 - val_output_bg_ph_loss: 0.1926 - val_output_ph_loss: 0.2031 - val_output_mg_c_loss: 0.1697 - val_output_c_loss: 0.1755 Epoch 37/120 30/30 - 4s - loss: 1.3685 - output_react_loss: 0.1564 - output_bg_ph_loss: 0.1817 - output_ph_loss: 0.1970 - output_mg_c_loss: 0.1615 - output_c_loss: 0.1722 - val_loss: 1.4198 - val_output_react_loss: 0.1599 - val_output_bg_ph_loss: 0.1915 - val_output_ph_loss: 0.2013 - val_output_mg_c_loss: 0.1707 - val_output_c_loss: 0.1743 Epoch 38/120 30/30 - 4s - loss: 1.3600 - output_react_loss: 0.1562 - output_bg_ph_loss: 0.1804 - output_ph_loss: 0.1961 - output_mg_c_loss: 0.1599 - output_c_loss: 0.1710 - val_loss: 1.4149 - 
val_output_react_loss: 0.1596 - val_output_bg_ph_loss: 0.1908 - val_output_ph_loss: 0.1992 - val_output_mg_c_loss: 0.1703 - val_output_c_loss: 0.1743 Epoch 39/120 30/30 - 4s - loss: 1.3497 - output_react_loss: 0.1552 - output_bg_ph_loss: 0.1786 - output_ph_loss: 0.1941 - output_mg_c_loss: 0.1589 - output_c_loss: 0.1702 - val_loss: 1.4254 - val_output_react_loss: 0.1590 - val_output_bg_ph_loss: 0.1934 - val_output_ph_loss: 0.2004 - val_output_mg_c_loss: 0.1725 - val_output_c_loss: 0.1750 Epoch 40/120 30/30 - 4s - loss: 1.3433 - output_react_loss: 0.1548 - output_bg_ph_loss: 0.1776 - output_ph_loss: 0.1930 - output_mg_c_loss: 0.1579 - output_c_loss: 0.1695 - val_loss: 1.4085 - val_output_react_loss: 0.1595 - val_output_bg_ph_loss: 0.1896 - val_output_ph_loss: 0.1986 - val_output_mg_c_loss: 0.1694 - val_output_c_loss: 0.1730 Epoch 41/120 30/30 - 4s - loss: 1.3233 - output_react_loss: 0.1523 - output_bg_ph_loss: 0.1749 - output_ph_loss: 0.1909 - output_mg_c_loss: 0.1550 - output_c_loss: 0.1679 - val_loss: 1.4082 - val_output_react_loss: 0.1584 - val_output_bg_ph_loss: 0.1920 - val_output_ph_loss: 0.1984 - val_output_mg_c_loss: 0.1675 - val_output_c_loss: 0.1739 Epoch 42/120 30/30 - 4s - loss: 1.3142 - output_react_loss: 0.1512 - output_bg_ph_loss: 0.1739 - output_ph_loss: 0.1907 - output_mg_c_loss: 0.1533 - output_c_loss: 0.1668 - val_loss: 1.4005 - val_output_react_loss: 0.1576 - val_output_bg_ph_loss: 0.1896 - val_output_ph_loss: 0.1986 - val_output_mg_c_loss: 0.1675 - val_output_c_loss: 0.1725 Epoch 43/120 30/30 - 4s - loss: 1.3001 - output_react_loss: 0.1491 - output_bg_ph_loss: 0.1718 - output_ph_loss: 0.1889 - output_mg_c_loss: 0.1516 - output_c_loss: 0.1661 - val_loss: 1.3872 - val_output_react_loss: 0.1562 - val_output_bg_ph_loss: 0.1884 - val_output_ph_loss: 0.1952 - val_output_mg_c_loss: 0.1655 - val_output_c_loss: 0.1716 Epoch 44/120 30/30 - 4s - loss: 1.2919 - output_react_loss: 0.1484 - output_bg_ph_loss: 0.1699 - output_ph_loss: 0.1882 - output_mg_c_loss: 
0.1510 - output_c_loss: 0.1651 - val_loss: 1.3897 - val_output_react_loss: 0.1564 - val_output_bg_ph_loss: 0.1880 - val_output_ph_loss: 0.1962 - val_output_mg_c_loss: 0.1666 - val_output_c_loss: 0.1716 Epoch 45/120 30/30 - 4s - loss: 1.2778 - output_react_loss: 0.1472 - output_bg_ph_loss: 0.1683 - output_ph_loss: 0.1851 - output_mg_c_loss: 0.1490 - output_c_loss: 0.1635 - val_loss: 1.3925 - val_output_react_loss: 0.1570 - val_output_bg_ph_loss: 0.1884 - val_output_ph_loss: 0.1960 - val_output_mg_c_loss: 0.1669 - val_output_c_loss: 0.1719 Epoch 46/120 30/30 - 4s - loss: 1.2677 - output_react_loss: 0.1463 - output_bg_ph_loss: 0.1662 - output_ph_loss: 0.1843 - output_mg_c_loss: 0.1479 - output_c_loss: 0.1627 - val_loss: 1.3833 - val_output_react_loss: 0.1568 - val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1960 - val_output_mg_c_loss: 0.1646 - val_output_c_loss: 0.1712 Epoch 47/120 30/30 - 4s - loss: 1.2556 - output_react_loss: 0.1445 - output_bg_ph_loss: 0.1647 - output_ph_loss: 0.1827 - output_mg_c_loss: 0.1461 - output_c_loss: 0.1622 - val_loss: 1.4004 - val_output_react_loss: 0.1552 - val_output_bg_ph_loss: 0.1895 - val_output_ph_loss: 0.1976 - val_output_mg_c_loss: 0.1695 - val_output_c_loss: 0.1743 Epoch 48/120 30/30 - 4s - loss: 1.2482 - output_react_loss: 0.1434 - output_bg_ph_loss: 0.1634 - output_ph_loss: 0.1821 - output_mg_c_loss: 0.1456 - output_c_loss: 0.1614 - val_loss: 1.3820 - val_output_react_loss: 0.1557 - val_output_bg_ph_loss: 0.1876 - val_output_ph_loss: 0.1940 - val_output_mg_c_loss: 0.1656 - val_output_c_loss: 0.1703 Epoch 49/120 30/30 - 4s - loss: 1.2309 - output_react_loss: 0.1416 - output_bg_ph_loss: 0.1612 - output_ph_loss: 0.1799 - output_mg_c_loss: 0.1428 - output_c_loss: 0.1598 - val_loss: 1.3827 - val_output_react_loss: 0.1567 - val_output_bg_ph_loss: 0.1871 - val_output_ph_loss: 0.1941 - val_output_mg_c_loss: 0.1650 - val_output_c_loss: 0.1711 Epoch 50/120 30/30 - 4s - loss: 1.2231 - output_react_loss: 0.1412 - output_bg_ph_loss: 
0.1596 - output_ph_loss: 0.1790 - output_mg_c_loss: 0.1418 - output_c_loss: 0.1589 - val_loss: 1.3795 - val_output_react_loss: 0.1544 - val_output_bg_ph_loss: 0.1885 - val_output_ph_loss: 0.1933 - val_output_mg_c_loss: 0.1649 - val_output_c_loss: 0.1706 Epoch 51/120 30/30 - 4s - loss: 1.2174 - output_react_loss: 0.1405 - output_bg_ph_loss: 0.1583 - output_ph_loss: 0.1784 - output_mg_c_loss: 0.1411 - output_c_loss: 0.1593 - val_loss: 1.3675 - val_output_react_loss: 0.1533 - val_output_bg_ph_loss: 0.1867 - val_output_ph_loss: 0.1930 - val_output_mg_c_loss: 0.1627 - val_output_c_loss: 0.1690 Epoch 52/120 30/30 - 4s - loss: 1.2036 - output_react_loss: 0.1388 - output_bg_ph_loss: 0.1565 - output_ph_loss: 0.1764 - output_mg_c_loss: 0.1395 - output_c_loss: 0.1576 - val_loss: 1.3853 - val_output_react_loss: 0.1563 - val_output_bg_ph_loss: 0.1880 - val_output_ph_loss: 0.1951 - val_output_mg_c_loss: 0.1656 - val_output_c_loss: 0.1704 Epoch 53/120 30/30 - 4s - loss: 1.1960 - output_react_loss: 0.1374 - output_bg_ph_loss: 0.1554 - output_ph_loss: 0.1759 - output_mg_c_loss: 0.1384 - output_c_loss: 0.1575 - val_loss: 1.3918 - val_output_react_loss: 0.1545 - val_output_bg_ph_loss: 0.1888 - val_output_ph_loss: 0.1959 - val_output_mg_c_loss: 0.1683 - val_output_c_loss: 0.1728 Epoch 54/120 30/30 - 4s - loss: 1.1874 - output_react_loss: 0.1364 - output_bg_ph_loss: 0.1540 - output_ph_loss: 0.1747 - output_mg_c_loss: 0.1374 - output_c_loss: 0.1571 - val_loss: 1.3927 - val_output_react_loss: 0.1559 - val_output_bg_ph_loss: 0.1898 - val_output_ph_loss: 0.1962 - val_output_mg_c_loss: 0.1667 - val_output_c_loss: 0.1717 Epoch 55/120 30/30 - 4s - loss: 1.1849 - output_react_loss: 0.1362 - output_bg_ph_loss: 0.1539 - output_ph_loss: 0.1743 - output_mg_c_loss: 0.1369 - output_c_loss: 0.1565 - val_loss: 1.3862 - val_output_react_loss: 0.1550 - val_output_bg_ph_loss: 0.1889 - val_output_ph_loss: 0.1968 - val_output_mg_c_loss: 0.1655 - val_output_c_loss: 0.1707 Epoch 56/120 30/30 - 4s - loss: 
[Verbose per-epoch training log condensed. Continuation of the previous fold: training ran to epoch 74 of 120, with ReduceLROnPlateau cutting the learning rate to 1e-4 (epoch 61), 1e-5 (epoch 69), and 1e-6 (epoch 74); early stopping at epoch 74 restored the best weights (best val_loss ≈ 1.3388).
FOLD 4: trained for 81 of 120 epochs, val_loss falling from 2.3387 (epoch 1) to a best of ≈ 1.3843 (epoch 71); learning rate reduced to 1e-4 (epoch 63), 1e-5 (epoch 76), and 1e-6 (epoch 81); early stopping at epoch 81 restored the best weights.
FOLD 5: val_loss fell from 2.3242 (epoch 1) to ≈ 1.5293 by epoch 20; the log is truncated here.]
val_output_c_loss: 0.1908 Epoch 21/120 30/30 - 4s - loss: 1.5736 - output_react_loss: 0.1787 - output_bg_ph_loss: 0.2109 - output_ph_loss: 0.2210 - output_mg_c_loss: 0.1913 - output_c_loss: 0.1907 - val_loss: 1.5306 - val_output_react_loss: 0.1701 - val_output_bg_ph_loss: 0.2029 - val_output_ph_loss: 0.2201 - val_output_mg_c_loss: 0.1869 - val_output_c_loss: 0.1908 Epoch 22/120 30/30 - 4s - loss: 1.5604 - output_react_loss: 0.1762 - output_bg_ph_loss: 0.2097 - output_ph_loss: 0.2204 - output_mg_c_loss: 0.1891 - output_c_loss: 0.1900 - val_loss: 1.5215 - val_output_react_loss: 0.1683 - val_output_bg_ph_loss: 0.2035 - val_output_ph_loss: 0.2204 - val_output_mg_c_loss: 0.1841 - val_output_c_loss: 0.1895 Epoch 23/120 30/30 - 4s - loss: 1.5393 - output_react_loss: 0.1746 - output_bg_ph_loss: 0.2069 - output_ph_loss: 0.2178 - output_mg_c_loss: 0.1855 - output_c_loss: 0.1875 - val_loss: 1.5298 - val_output_react_loss: 0.1689 - val_output_bg_ph_loss: 0.2016 - val_output_ph_loss: 0.2216 - val_output_mg_c_loss: 0.1889 - val_output_c_loss: 0.1893 Epoch 24/120 30/30 - 4s - loss: 1.5338 - output_react_loss: 0.1746 - output_bg_ph_loss: 0.2053 - output_ph_loss: 0.2184 - output_mg_c_loss: 0.1847 - output_c_loss: 0.1863 - val_loss: 1.4955 - val_output_react_loss: 0.1678 - val_output_bg_ph_loss: 0.1998 - val_output_ph_loss: 0.2164 - val_output_mg_c_loss: 0.1794 - val_output_c_loss: 0.1852 Epoch 25/120 30/30 - 4s - loss: 1.5171 - output_react_loss: 0.1725 - output_bg_ph_loss: 0.2039 - output_ph_loss: 0.2148 - output_mg_c_loss: 0.1823 - output_c_loss: 0.1851 - val_loss: 1.4934 - val_output_react_loss: 0.1669 - val_output_bg_ph_loss: 0.2001 - val_output_ph_loss: 0.2154 - val_output_mg_c_loss: 0.1795 - val_output_c_loss: 0.1852 Epoch 26/120 30/30 - 4s - loss: 1.5046 - output_react_loss: 0.1709 - output_bg_ph_loss: 0.2015 - output_ph_loss: 0.2142 - output_mg_c_loss: 0.1805 - output_c_loss: 0.1846 - val_loss: 1.5036 - val_output_react_loss: 0.1680 - val_output_bg_ph_loss: 0.1997 - 
val_output_ph_loss: 0.2164 - val_output_mg_c_loss: 0.1818 - val_output_c_loss: 0.1882 Epoch 27/120 30/30 - 4s - loss: 1.4908 - output_react_loss: 0.1699 - output_bg_ph_loss: 0.1989 - output_ph_loss: 0.2112 - output_mg_c_loss: 0.1796 - output_c_loss: 0.1827 - val_loss: 1.4716 - val_output_react_loss: 0.1648 - val_output_bg_ph_loss: 0.1954 - val_output_ph_loss: 0.2138 - val_output_mg_c_loss: 0.1767 - val_output_c_loss: 0.1838 Epoch 28/120 30/30 - 4s - loss: 1.4667 - output_react_loss: 0.1673 - output_bg_ph_loss: 0.1962 - output_ph_loss: 0.2087 - output_mg_c_loss: 0.1754 - output_c_loss: 0.1803 - val_loss: 1.4610 - val_output_react_loss: 0.1630 - val_output_bg_ph_loss: 0.1939 - val_output_ph_loss: 0.2120 - val_output_mg_c_loss: 0.1761 - val_output_c_loss: 0.1832 Epoch 29/120 30/30 - 4s - loss: 1.4574 - output_react_loss: 0.1658 - output_bg_ph_loss: 0.1956 - output_ph_loss: 0.2075 - output_mg_c_loss: 0.1738 - output_c_loss: 0.1796 - val_loss: 1.4602 - val_output_react_loss: 0.1624 - val_output_bg_ph_loss: 0.1954 - val_output_ph_loss: 0.2102 - val_output_mg_c_loss: 0.1756 - val_output_c_loss: 0.1832 Epoch 30/120 30/30 - 4s - loss: 1.4417 - output_react_loss: 0.1647 - output_bg_ph_loss: 0.1932 - output_ph_loss: 0.2046 - output_mg_c_loss: 0.1720 - output_c_loss: 0.1773 - val_loss: 1.4464 - val_output_react_loss: 0.1611 - val_output_bg_ph_loss: 0.1924 - val_output_ph_loss: 0.2095 - val_output_mg_c_loss: 0.1743 - val_output_c_loss: 0.1814 Epoch 31/120 30/30 - 4s - loss: 1.4285 - output_react_loss: 0.1642 - output_bg_ph_loss: 0.1907 - output_ph_loss: 0.2035 - output_mg_c_loss: 0.1696 - output_c_loss: 0.1760 - val_loss: 1.4446 - val_output_react_loss: 0.1618 - val_output_bg_ph_loss: 0.1919 - val_output_ph_loss: 0.2100 - val_output_mg_c_loss: 0.1734 - val_output_c_loss: 0.1803 Epoch 32/120 30/30 - 4s - loss: 1.4184 - output_react_loss: 0.1622 - output_bg_ph_loss: 0.1895 - output_ph_loss: 0.2019 - output_mg_c_loss: 0.1689 - output_c_loss: 0.1754 - val_loss: 1.4534 - 
val_output_react_loss: 0.1609 - val_output_bg_ph_loss: 0.1953 - val_output_ph_loss: 0.2112 - val_output_mg_c_loss: 0.1741 - val_output_c_loss: 0.1815 Epoch 33/120 30/30 - 4s - loss: 1.4091 - output_react_loss: 0.1610 - output_bg_ph_loss: 0.1895 - output_ph_loss: 0.2008 - output_mg_c_loss: 0.1665 - output_c_loss: 0.1745 - val_loss: 1.4502 - val_output_react_loss: 0.1623 - val_output_bg_ph_loss: 0.1925 - val_output_ph_loss: 0.2104 - val_output_mg_c_loss: 0.1750 - val_output_c_loss: 0.1802 Epoch 34/120 30/30 - 4s - loss: 1.3896 - output_react_loss: 0.1586 - output_bg_ph_loss: 0.1855 - output_ph_loss: 0.1991 - output_mg_c_loss: 0.1648 - output_c_loss: 0.1725 - val_loss: 1.4296 - val_output_react_loss: 0.1595 - val_output_bg_ph_loss: 0.1910 - val_output_ph_loss: 0.2064 - val_output_mg_c_loss: 0.1716 - val_output_c_loss: 0.1789 Epoch 35/120 30/30 - 4s - loss: 1.3768 - output_react_loss: 0.1578 - output_bg_ph_loss: 0.1833 - output_ph_loss: 0.1967 - output_mg_c_loss: 0.1630 - output_c_loss: 0.1719 - val_loss: 1.4474 - val_output_react_loss: 0.1614 - val_output_bg_ph_loss: 0.1937 - val_output_ph_loss: 0.2085 - val_output_mg_c_loss: 0.1749 - val_output_c_loss: 0.1790 Epoch 36/120 30/30 - 4s - loss: 1.3706 - output_react_loss: 0.1572 - output_bg_ph_loss: 0.1834 - output_ph_loss: 0.1951 - output_mg_c_loss: 0.1617 - output_c_loss: 0.1709 - val_loss: 1.4299 - val_output_react_loss: 0.1599 - val_output_bg_ph_loss: 0.1896 - val_output_ph_loss: 0.2079 - val_output_mg_c_loss: 0.1724 - val_output_c_loss: 0.1781 Epoch 37/120 30/30 - 4s - loss: 1.3548 - output_react_loss: 0.1558 - output_bg_ph_loss: 0.1795 - output_ph_loss: 0.1946 - output_mg_c_loss: 0.1597 - output_c_loss: 0.1702 - val_loss: 1.4276 - val_output_react_loss: 0.1587 - val_output_bg_ph_loss: 0.1909 - val_output_ph_loss: 0.2072 - val_output_mg_c_loss: 0.1711 - val_output_c_loss: 0.1792 Epoch 38/120 30/30 - 4s - loss: 1.3451 - output_react_loss: 0.1547 - output_bg_ph_loss: 0.1786 - output_ph_loss: 0.1928 - output_mg_c_loss: 
0.1586 - output_c_loss: 0.1685 - val_loss: 1.4262 - val_output_react_loss: 0.1580 - val_output_bg_ph_loss: 0.1890 - val_output_ph_loss: 0.2086 - val_output_mg_c_loss: 0.1729 - val_output_c_loss: 0.1779 Epoch 39/120 30/30 - 4s - loss: 1.3280 - output_react_loss: 0.1532 - output_bg_ph_loss: 0.1765 - output_ph_loss: 0.1897 - output_mg_c_loss: 0.1559 - output_c_loss: 0.1670 - val_loss: 1.4240 - val_output_react_loss: 0.1572 - val_output_bg_ph_loss: 0.1906 - val_output_ph_loss: 0.2071 - val_output_mg_c_loss: 0.1718 - val_output_c_loss: 0.1777 Epoch 40/120 30/30 - 4s - loss: 1.3176 - output_react_loss: 0.1509 - output_bg_ph_loss: 0.1751 - output_ph_loss: 0.1895 - output_mg_c_loss: 0.1548 - output_c_loss: 0.1665 - val_loss: 1.4150 - val_output_react_loss: 0.1572 - val_output_bg_ph_loss: 0.1895 - val_output_ph_loss: 0.2054 - val_output_mg_c_loss: 0.1696 - val_output_c_loss: 0.1771 Epoch 41/120 30/30 - 4s - loss: 1.2996 - output_react_loss: 0.1494 - output_bg_ph_loss: 0.1720 - output_ph_loss: 0.1880 - output_mg_c_loss: 0.1517 - output_c_loss: 0.1655 - val_loss: 1.4102 - val_output_react_loss: 0.1566 - val_output_bg_ph_loss: 0.1882 - val_output_ph_loss: 0.2034 - val_output_mg_c_loss: 0.1698 - val_output_c_loss: 0.1775 Epoch 42/120 30/30 - 4s - loss: 1.2954 - output_react_loss: 0.1483 - output_bg_ph_loss: 0.1717 - output_ph_loss: 0.1868 - output_mg_c_loss: 0.1519 - output_c_loss: 0.1649 - val_loss: 1.4034 - val_output_react_loss: 0.1561 - val_output_bg_ph_loss: 0.1871 - val_output_ph_loss: 0.2030 - val_output_mg_c_loss: 0.1694 - val_output_c_loss: 0.1751 Epoch 43/120 30/30 - 4s - loss: 1.2840 - output_react_loss: 0.1478 - output_bg_ph_loss: 0.1697 - output_ph_loss: 0.1851 - output_mg_c_loss: 0.1500 - output_c_loss: 0.1639 - val_loss: 1.4042 - val_output_react_loss: 0.1552 - val_output_bg_ph_loss: 0.1871 - val_output_ph_loss: 0.2044 - val_output_mg_c_loss: 0.1698 - val_output_c_loss: 0.1757 Epoch 44/120 30/30 - 4s - loss: 1.2650 - output_react_loss: 0.1461 - output_bg_ph_loss: 
0.1671 - output_ph_loss: 0.1829 - output_mg_c_loss: 0.1472 - output_c_loss: 0.1614 - val_loss: 1.4036 - val_output_react_loss: 0.1555 - val_output_bg_ph_loss: 0.1871 - val_output_ph_loss: 0.2035 - val_output_mg_c_loss: 0.1695 - val_output_c_loss: 0.1760 Epoch 45/120 30/30 - 4s - loss: 1.2563 - output_react_loss: 0.1449 - output_bg_ph_loss: 0.1655 - output_ph_loss: 0.1821 - output_mg_c_loss: 0.1463 - output_c_loss: 0.1609 - val_loss: 1.4015 - val_output_react_loss: 0.1549 - val_output_bg_ph_loss: 0.1872 - val_output_ph_loss: 0.2050 - val_output_mg_c_loss: 0.1680 - val_output_c_loss: 0.1765 Epoch 46/120 30/30 - 4s - loss: 1.2468 - output_react_loss: 0.1436 - output_bg_ph_loss: 0.1643 - output_ph_loss: 0.1808 - output_mg_c_loss: 0.1448 - output_c_loss: 0.1605 - val_loss: 1.4169 - val_output_react_loss: 0.1565 - val_output_bg_ph_loss: 0.1893 - val_output_ph_loss: 0.2080 - val_output_mg_c_loss: 0.1704 - val_output_c_loss: 0.1765 Epoch 47/120 30/30 - 4s - loss: 1.2330 - output_react_loss: 0.1423 - output_bg_ph_loss: 0.1624 - output_ph_loss: 0.1792 - output_mg_c_loss: 0.1425 - output_c_loss: 0.1592 - val_loss: 1.4140 - val_output_react_loss: 0.1580 - val_output_bg_ph_loss: 0.1892 - val_output_ph_loss: 0.2049 - val_output_mg_c_loss: 0.1693 - val_output_c_loss: 0.1761 Epoch 48/120 30/30 - 4s - loss: 1.2217 - output_react_loss: 0.1412 - output_bg_ph_loss: 0.1608 - output_ph_loss: 0.1776 - output_mg_c_loss: 0.1410 - output_c_loss: 0.1582 - val_loss: 1.4001 - val_output_react_loss: 0.1567 - val_output_bg_ph_loss: 0.1860 - val_output_ph_loss: 0.2021 - val_output_mg_c_loss: 0.1687 - val_output_c_loss: 0.1752 Epoch 49/120 30/30 - 4s - loss: 1.2181 - output_react_loss: 0.1397 - output_bg_ph_loss: 0.1599 - output_ph_loss: 0.1769 - output_mg_c_loss: 0.1415 - output_c_loss: 0.1589 - val_loss: 1.4031 - val_output_react_loss: 0.1582 - val_output_bg_ph_loss: 0.1863 - val_output_ph_loss: 0.2025 - val_output_mg_c_loss: 0.1677 - val_output_c_loss: 0.1762 Epoch 50/120 30/30 - 4s - loss: 
1.2070 - output_react_loss: 0.1389 - output_bg_ph_loss: 0.1581 - output_ph_loss: 0.1765 - output_mg_c_loss: 0.1397 - output_c_loss: 0.1572 - val_loss: 1.4027 - val_output_react_loss: 0.1563 - val_output_bg_ph_loss: 0.1855 - val_output_ph_loss: 0.2055 - val_output_mg_c_loss: 0.1693 - val_output_c_loss: 0.1751 Epoch 51/120 30/30 - 4s - loss: 1.1981 - output_react_loss: 0.1384 - output_bg_ph_loss: 0.1564 - output_ph_loss: 0.1750 - output_mg_c_loss: 0.1385 - output_c_loss: 0.1565 - val_loss: 1.3961 - val_output_react_loss: 0.1568 - val_output_bg_ph_loss: 0.1855 - val_output_ph_loss: 0.2011 - val_output_mg_c_loss: 0.1673 - val_output_c_loss: 0.1756 Epoch 52/120 30/30 - 4s - loss: 1.1854 - output_react_loss: 0.1367 - output_bg_ph_loss: 0.1551 - output_ph_loss: 0.1732 - output_mg_c_loss: 0.1366 - output_c_loss: 0.1556 - val_loss: 1.3965 - val_output_react_loss: 0.1543 - val_output_bg_ph_loss: 0.1869 - val_output_ph_loss: 0.2033 - val_output_mg_c_loss: 0.1681 - val_output_c_loss: 0.1746 Epoch 53/120 30/30 - 4s - loss: 1.1750 - output_react_loss: 0.1346 - output_bg_ph_loss: 0.1535 - output_ph_loss: 0.1729 - output_mg_c_loss: 0.1355 - output_c_loss: 0.1548 - val_loss: 1.3956 - val_output_react_loss: 0.1550 - val_output_bg_ph_loss: 0.1876 - val_output_ph_loss: 0.2023 - val_output_mg_c_loss: 0.1670 - val_output_c_loss: 0.1740 Epoch 54/120 30/30 - 4s - loss: 1.1608 - output_react_loss: 0.1338 - output_bg_ph_loss: 0.1511 - output_ph_loss: 0.1706 - output_mg_c_loss: 0.1334 - output_c_loss: 0.1535 - val_loss: 1.3888 - val_output_react_loss: 0.1533 - val_output_bg_ph_loss: 0.1854 - val_output_ph_loss: 0.2016 - val_output_mg_c_loss: 0.1672 - val_output_c_loss: 0.1752 Epoch 55/120 30/30 - 4s - loss: 1.1588 - output_react_loss: 0.1331 - output_bg_ph_loss: 0.1512 - output_ph_loss: 0.1709 - output_mg_c_loss: 0.1330 - output_c_loss: 0.1533 - val_loss: 1.3910 - val_output_react_loss: 0.1560 - val_output_bg_ph_loss: 0.1858 - val_output_ph_loss: 0.2017 - val_output_mg_c_loss: 0.1661 - 
val_output_c_loss: 0.1734 Epoch 56/120 30/30 - 4s - loss: 1.1483 - output_react_loss: 0.1316 - output_bg_ph_loss: 0.1495 - output_ph_loss: 0.1692 - output_mg_c_loss: 0.1323 - output_c_loss: 0.1525 - val_loss: 1.3862 - val_output_react_loss: 0.1553 - val_output_bg_ph_loss: 0.1846 - val_output_ph_loss: 0.2018 - val_output_mg_c_loss: 0.1654 - val_output_c_loss: 0.1739 Epoch 57/120 30/30 - 4s - loss: 1.1401 - output_react_loss: 0.1306 - output_bg_ph_loss: 0.1481 - output_ph_loss: 0.1686 - output_mg_c_loss: 0.1311 - output_c_loss: 0.1519 - val_loss: 1.3812 - val_output_react_loss: 0.1548 - val_output_bg_ph_loss: 0.1853 - val_output_ph_loss: 0.1993 - val_output_mg_c_loss: 0.1645 - val_output_c_loss: 0.1728 Epoch 58/120 30/30 - 4s - loss: 1.1335 - output_react_loss: 0.1299 - output_bg_ph_loss: 0.1469 - output_ph_loss: 0.1677 - output_mg_c_loss: 0.1300 - output_c_loss: 0.1522 - val_loss: 1.3878 - val_output_react_loss: 0.1552 - val_output_bg_ph_loss: 0.1857 - val_output_ph_loss: 0.2006 - val_output_mg_c_loss: 0.1659 - val_output_c_loss: 0.1736 Epoch 59/120 30/30 - 4s - loss: 1.1235 - output_react_loss: 0.1286 - output_bg_ph_loss: 0.1459 - output_ph_loss: 0.1659 - output_mg_c_loss: 0.1292 - output_c_loss: 0.1503 - val_loss: 1.3851 - val_output_react_loss: 0.1539 - val_output_bg_ph_loss: 0.1864 - val_output_ph_loss: 0.2004 - val_output_mg_c_loss: 0.1656 - val_output_c_loss: 0.1731 Epoch 60/120 30/30 - 4s - loss: 1.1176 - output_react_loss: 0.1273 - output_bg_ph_loss: 0.1444 - output_ph_loss: 0.1662 - output_mg_c_loss: 0.1286 - output_c_loss: 0.1509 - val_loss: 1.3796 - val_output_react_loss: 0.1534 - val_output_bg_ph_loss: 0.1850 - val_output_ph_loss: 0.1988 - val_output_mg_c_loss: 0.1654 - val_output_c_loss: 0.1733 Epoch 61/120 30/30 - 4s - loss: 1.1088 - output_react_loss: 0.1266 - output_bg_ph_loss: 0.1433 - output_ph_loss: 0.1646 - output_mg_c_loss: 0.1273 - output_c_loss: 0.1497 - val_loss: 1.3802 - val_output_react_loss: 0.1539 - val_output_bg_ph_loss: 0.1847 - 
val_output_ph_loss: 0.1986 - val_output_mg_c_loss: 0.1658 - val_output_c_loss: 0.1728 Epoch 62/120 30/30 - 4s - loss: 1.0994 - output_react_loss: 0.1260 - output_bg_ph_loss: 0.1420 - output_ph_loss: 0.1630 - output_mg_c_loss: 0.1258 - output_c_loss: 0.1487 - val_loss: 1.3924 - val_output_react_loss: 0.1541 - val_output_bg_ph_loss: 0.1875 - val_output_ph_loss: 0.1997 - val_output_mg_c_loss: 0.1679 - val_output_c_loss: 0.1738 Epoch 63/120 30/30 - 4s - loss: 1.0950 - output_react_loss: 0.1249 - output_bg_ph_loss: 0.1416 - output_ph_loss: 0.1630 - output_mg_c_loss: 0.1253 - output_c_loss: 0.1485 - val_loss: 1.3826 - val_output_react_loss: 0.1545 - val_output_bg_ph_loss: 0.1859 - val_output_ph_loss: 0.1999 - val_output_mg_c_loss: 0.1645 - val_output_c_loss: 0.1728 Epoch 64/120 30/30 - 4s - loss: 1.0861 - output_react_loss: 0.1235 - output_bg_ph_loss: 0.1400 - output_ph_loss: 0.1621 - output_mg_c_loss: 0.1244 - output_c_loss: 0.1480 - val_loss: 1.3814 - val_output_react_loss: 0.1538 - val_output_bg_ph_loss: 0.1852 - val_output_ph_loss: 0.1987 - val_output_mg_c_loss: 0.1660 - val_output_c_loss: 0.1728 Epoch 65/120 30/30 - 4s - loss: 1.0824 - output_react_loss: 0.1231 - output_bg_ph_loss: 0.1395 - output_ph_loss: 0.1610 - output_mg_c_loss: 0.1243 - output_c_loss: 0.1475 - val_loss: 1.3740 - val_output_react_loss: 0.1532 - val_output_bg_ph_loss: 0.1842 - val_output_ph_loss: 0.1975 - val_output_mg_c_loss: 0.1643 - val_output_c_loss: 0.1731 Epoch 66/120 30/30 - 4s - loss: 1.0727 - output_react_loss: 0.1223 - output_bg_ph_loss: 0.1380 - output_ph_loss: 0.1603 - output_mg_c_loss: 0.1224 - output_c_loss: 0.1472 - val_loss: 1.3818 - val_output_react_loss: 0.1545 - val_output_bg_ph_loss: 0.1845 - val_output_ph_loss: 0.1992 - val_output_mg_c_loss: 0.1656 - val_output_c_loss: 0.1732 Epoch 67/120 30/30 - 4s - loss: 1.0669 - output_react_loss: 0.1214 - output_bg_ph_loss: 0.1374 - output_ph_loss: 0.1591 - output_mg_c_loss: 0.1220 - output_c_loss: 0.1462 - val_loss: 1.3793 - 
val_output_react_loss: 0.1535 - val_output_bg_ph_loss: 0.1842 - val_output_ph_loss: 0.1990 - val_output_mg_c_loss: 0.1658 - val_output_c_loss: 0.1735 Epoch 68/120 30/30 - 4s - loss: 1.0648 - output_react_loss: 0.1207 - output_bg_ph_loss: 0.1369 - output_ph_loss: 0.1597 - output_mg_c_loss: 0.1217 - output_c_loss: 0.1466 - val_loss: 1.3731 - val_output_react_loss: 0.1524 - val_output_bg_ph_loss: 0.1842 - val_output_ph_loss: 0.1980 - val_output_mg_c_loss: 0.1649 - val_output_c_loss: 0.1720 Epoch 69/120 30/30 - 4s - loss: 1.0554 - output_react_loss: 0.1203 - output_bg_ph_loss: 0.1356 - output_ph_loss: 0.1577 - output_mg_c_loss: 0.1203 - output_c_loss: 0.1453 - val_loss: 1.3766 - val_output_react_loss: 0.1543 - val_output_bg_ph_loss: 0.1839 - val_output_ph_loss: 0.1981 - val_output_mg_c_loss: 0.1648 - val_output_c_loss: 0.1726 Epoch 70/120 30/30 - 4s - loss: 1.0462 - output_react_loss: 0.1189 - output_bg_ph_loss: 0.1337 - output_ph_loss: 0.1576 - output_mg_c_loss: 0.1193 - output_c_loss: 0.1447 - val_loss: 1.3739 - val_output_react_loss: 0.1541 - val_output_bg_ph_loss: 0.1839 - val_output_ph_loss: 0.1983 - val_output_mg_c_loss: 0.1638 - val_output_c_loss: 0.1721 Epoch 71/120 30/30 - 4s - loss: 1.0415 - output_react_loss: 0.1181 - output_bg_ph_loss: 0.1331 - output_ph_loss: 0.1571 - output_mg_c_loss: 0.1190 - output_c_loss: 0.1442 - val_loss: 1.3765 - val_output_react_loss: 0.1533 - val_output_bg_ph_loss: 0.1847 - val_output_ph_loss: 0.1975 - val_output_mg_c_loss: 0.1651 - val_output_c_loss: 0.1727 Epoch 72/120 30/30 - 4s - loss: 1.0342 - output_react_loss: 0.1173 - output_bg_ph_loss: 0.1322 - output_ph_loss: 0.1552 - output_mg_c_loss: 0.1180 - output_c_loss: 0.1442 - val_loss: 1.3744 - val_output_react_loss: 0.1531 - val_output_bg_ph_loss: 0.1840 - val_output_ph_loss: 0.1988 - val_output_mg_c_loss: 0.1643 - val_output_c_loss: 0.1729 Epoch 73/120 30/30 - 4s - loss: 1.0332 - output_react_loss: 0.1169 - output_bg_ph_loss: 0.1318 - output_ph_loss: 0.1559 - output_mg_c_loss: 
0.1183 - output_c_loss: 0.1433 - val_loss: 1.3703 - val_output_react_loss: 0.1526 - val_output_bg_ph_loss: 0.1837 - val_output_ph_loss: 0.1982 - val_output_mg_c_loss: 0.1635 - val_output_c_loss: 0.1725 Epoch 74/120 30/30 - 4s - loss: 1.0244 - output_react_loss: 0.1163 - output_bg_ph_loss: 0.1304 - output_ph_loss: 0.1541 - output_mg_c_loss: 0.1171 - output_c_loss: 0.1429 - val_loss: 1.3742 - val_output_react_loss: 0.1532 - val_output_bg_ph_loss: 0.1841 - val_output_ph_loss: 0.1984 - val_output_mg_c_loss: 0.1645 - val_output_c_loss: 0.1723 Epoch 75/120 30/30 - 4s - loss: 1.0191 - output_react_loss: 0.1154 - output_bg_ph_loss: 0.1299 - output_ph_loss: 0.1534 - output_mg_c_loss: 0.1162 - output_c_loss: 0.1426 - val_loss: 1.3764 - val_output_react_loss: 0.1536 - val_output_bg_ph_loss: 0.1844 - val_output_ph_loss: 0.1981 - val_output_mg_c_loss: 0.1650 - val_output_c_loss: 0.1723 Epoch 76/120 30/30 - 4s - loss: 1.0117 - output_react_loss: 0.1140 - output_bg_ph_loss: 0.1289 - output_ph_loss: 0.1533 - output_mg_c_loss: 0.1153 - output_c_loss: 0.1419 - val_loss: 1.3760 - val_output_react_loss: 0.1528 - val_output_bg_ph_loss: 0.1840 - val_output_ph_loss: 0.1975 - val_output_mg_c_loss: 0.1663 - val_output_c_loss: 0.1724 Epoch 77/120 30/30 - 4s - loss: 1.0066 - output_react_loss: 0.1138 - output_bg_ph_loss: 0.1276 - output_ph_loss: 0.1524 - output_mg_c_loss: 0.1151 - output_c_loss: 0.1414 - val_loss: 1.3641 - val_output_react_loss: 0.1525 - val_output_bg_ph_loss: 0.1827 - val_output_ph_loss: 0.1961 - val_output_mg_c_loss: 0.1630 - val_output_c_loss: 0.1715 Epoch 78/120 30/30 - 4s - loss: 1.0015 - output_react_loss: 0.1136 - output_bg_ph_loss: 0.1269 - output_ph_loss: 0.1516 - output_mg_c_loss: 0.1137 - output_c_loss: 0.1415 - val_loss: 1.3709 - val_output_react_loss: 0.1526 - val_output_bg_ph_loss: 0.1836 - val_output_ph_loss: 0.1965 - val_output_mg_c_loss: 0.1649 - val_output_c_loss: 0.1722 Epoch 79/120 30/30 - 4s - loss: 1.0020 - output_react_loss: 0.1129 - output_bg_ph_loss: 
0.1273 - output_ph_loss: 0.1513 - output_mg_c_loss: 0.1145 - output_c_loss: 0.1413 - val_loss: 1.3770 - val_output_react_loss: 0.1553 - val_output_bg_ph_loss: 0.1840 - val_output_ph_loss: 0.1985 - val_output_mg_c_loss: 0.1636 - val_output_c_loss: 0.1727 Epoch 80/120 30/30 - 4s - loss: 0.9949 - output_react_loss: 0.1124 - output_bg_ph_loss: 0.1257 - output_ph_loss: 0.1514 - output_mg_c_loss: 0.1134 - output_c_loss: 0.1405 - val_loss: 1.3778 - val_output_react_loss: 0.1536 - val_output_bg_ph_loss: 0.1844 - val_output_ph_loss: 0.1979 - val_output_mg_c_loss: 0.1655 - val_output_c_loss: 0.1729 Epoch 81/120 30/30 - 4s - loss: 0.9892 - output_react_loss: 0.1110 - output_bg_ph_loss: 0.1255 - output_ph_loss: 0.1500 - output_mg_c_loss: 0.1128 - output_c_loss: 0.1405 - val_loss: 1.3727 - val_output_react_loss: 0.1537 - val_output_bg_ph_loss: 0.1836 - val_output_ph_loss: 0.1973 - val_output_mg_c_loss: 0.1645 - val_output_c_loss: 0.1719 Epoch 82/120 Epoch 00082: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
30/30 - 4s - loss: 0.9845 - output_react_loss: 0.1109 - output_bg_ph_loss: 0.1248 - output_ph_loss: 0.1497 - output_mg_c_loss: 0.1119 - output_c_loss: 0.1398 - val_loss: 1.3714 - val_output_react_loss: 0.1526 - val_output_bg_ph_loss: 0.1843 - val_output_ph_loss: 0.1971 - val_output_mg_c_loss: 0.1645 - val_output_c_loss: 0.1716 Epoch 83/120 30/30 - 4s - loss: 0.9578 - output_react_loss: 0.1073 - output_bg_ph_loss: 0.1210 - output_ph_loss: 0.1464 - output_mg_c_loss: 0.1087 - output_c_loss: 0.1373 - val_loss: 1.3552 - val_output_react_loss: 0.1510 - val_output_bg_ph_loss: 0.1816 - val_output_ph_loss: 0.1953 - val_output_mg_c_loss: 0.1623 - val_output_c_loss: 0.1701 Epoch 84/120 30/30 - 4s - loss: 0.9430 - output_react_loss: 0.1058 - output_bg_ph_loss: 0.1188 - output_ph_loss: 0.1444 - output_mg_c_loss: 0.1066 - output_c_loss: 0.1360 - val_loss: 1.3524 - val_output_react_loss: 0.1510 - val_output_bg_ph_loss: 0.1808 - val_output_ph_loss: 0.1954 - val_output_mg_c_loss: 0.1617 - val_output_c_loss: 0.1699 Epoch 85/120 30/30 - 4s - loss: 0.9379 - output_react_loss: 0.1051 - output_bg_ph_loss: 0.1180 - output_ph_loss: 0.1442 - output_mg_c_loss: 0.1060 - output_c_loss: 0.1355 - val_loss: 1.3515 - val_output_react_loss: 0.1510 - val_output_bg_ph_loss: 0.1808 - val_output_ph_loss: 0.1951 - val_output_mg_c_loss: 0.1615 - val_output_c_loss: 0.1697 Epoch 86/120 30/30 - 4s - loss: 0.9328 - output_react_loss: 0.1049 - output_bg_ph_loss: 0.1170 - output_ph_loss: 0.1433 - output_mg_c_loss: 0.1053 - output_c_loss: 0.1352 - val_loss: 1.3519 - val_output_react_loss: 0.1511 - val_output_bg_ph_loss: 0.1806 - val_output_ph_loss: 0.1954 - val_output_mg_c_loss: 0.1617 - val_output_c_loss: 0.1698 Epoch 87/120 30/30 - 4s - loss: 0.9294 - output_react_loss: 0.1041 - output_bg_ph_loss: 0.1172 - output_ph_loss: 0.1429 - output_mg_c_loss: 0.1045 - output_c_loss: 0.1348 - val_loss: 1.3500 - val_output_react_loss: 0.1509 - val_output_bg_ph_loss: 0.1803 - val_output_ph_loss: 0.1952 - 
val_output_mg_c_loss: 0.1614 - val_output_c_loss: 0.1695 Epoch 88/120 30/30 - 4s - loss: 0.9272 - output_react_loss: 0.1036 - output_bg_ph_loss: 0.1168 - output_ph_loss: 0.1426 - output_mg_c_loss: 0.1046 - output_c_loss: 0.1347 - val_loss: 1.3502 - val_output_react_loss: 0.1509 - val_output_bg_ph_loss: 0.1807 - val_output_ph_loss: 0.1948 - val_output_mg_c_loss: 0.1614 - val_output_c_loss: 0.1696 Epoch 89/120 30/30 - 4s - loss: 0.9260 - output_react_loss: 0.1035 - output_bg_ph_loss: 0.1166 - output_ph_loss: 0.1427 - output_mg_c_loss: 0.1044 - output_c_loss: 0.1343 - val_loss: 1.3490 - val_output_react_loss: 0.1507 - val_output_bg_ph_loss: 0.1802 - val_output_ph_loss: 0.1949 - val_output_mg_c_loss: 0.1613 - val_output_c_loss: 0.1697 Epoch 90/120 30/30 - 4s - loss: 0.9232 - output_react_loss: 0.1035 - output_bg_ph_loss: 0.1159 - output_ph_loss: 0.1420 - output_mg_c_loss: 0.1041 - output_c_loss: 0.1342 - val_loss: 1.3500 - val_output_react_loss: 0.1507 - val_output_bg_ph_loss: 0.1807 - val_output_ph_loss: 0.1948 - val_output_mg_c_loss: 0.1614 - val_output_c_loss: 0.1696 Epoch 91/120 30/30 - 4s - loss: 0.9227 - output_react_loss: 0.1034 - output_bg_ph_loss: 0.1157 - output_ph_loss: 0.1421 - output_mg_c_loss: 0.1040 - output_c_loss: 0.1344 - val_loss: 1.3476 - val_output_react_loss: 0.1507 - val_output_bg_ph_loss: 0.1802 - val_output_ph_loss: 0.1944 - val_output_mg_c_loss: 0.1611 - val_output_c_loss: 0.1694 Epoch 92/120 30/30 - 4s - loss: 0.9214 - output_react_loss: 0.1029 - output_bg_ph_loss: 0.1158 - output_ph_loss: 0.1416 - output_mg_c_loss: 0.1042 - output_c_loss: 0.1341 - val_loss: 1.3497 - val_output_react_loss: 0.1510 - val_output_bg_ph_loss: 0.1804 - val_output_ph_loss: 0.1947 - val_output_mg_c_loss: 0.1613 - val_output_c_loss: 0.1696 Epoch 93/120 30/30 - 4s - loss: 0.9200 - output_react_loss: 0.1031 - output_bg_ph_loss: 0.1155 - output_ph_loss: 0.1419 - output_mg_c_loss: 0.1036 - output_c_loss: 0.1338 - val_loss: 1.3500 - val_output_react_loss: 0.1506 - 
val_output_bg_ph_loss: 0.1807 - val_output_ph_loss: 0.1948 - val_output_mg_c_loss: 0.1615 - val_output_c_loss: 0.1696 Epoch 94/120 30/30 - 4s - loss: 0.9215 - output_react_loss: 0.1032 - output_bg_ph_loss: 0.1155 - output_ph_loss: 0.1419 - output_mg_c_loss: 0.1039 - output_c_loss: 0.1342 - val_loss: 1.3506 - val_output_react_loss: 0.1507 - val_output_bg_ph_loss: 0.1808 - val_output_ph_loss: 0.1950 - val_output_mg_c_loss: 0.1615 - val_output_c_loss: 0.1697 Epoch 95/120 30/30 - 4s - loss: 0.9169 - output_react_loss: 0.1029 - output_bg_ph_loss: 0.1148 - output_ph_loss: 0.1411 - output_mg_c_loss: 0.1032 - output_c_loss: 0.1339 - val_loss: 1.3491 - val_output_react_loss: 0.1506 - val_output_bg_ph_loss: 0.1804 - val_output_ph_loss: 0.1946 - val_output_mg_c_loss: 0.1614 - val_output_c_loss: 0.1697 Epoch 96/120 Epoch 00096: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 30/30 - 4s - loss: 0.9155 - output_react_loss: 0.1027 - output_bg_ph_loss: 0.1143 - output_ph_loss: 0.1413 - output_mg_c_loss: 0.1032 - output_c_loss: 0.1337 - val_loss: 1.3490 - val_output_react_loss: 0.1506 - val_output_bg_ph_loss: 0.1805 - val_output_ph_loss: 0.1945 - val_output_mg_c_loss: 0.1613 - val_output_c_loss: 0.1697 Epoch 97/120 30/30 - 4s - loss: 0.9136 - output_react_loss: 0.1022 - output_bg_ph_loss: 0.1145 - output_ph_loss: 0.1411 - output_mg_c_loss: 0.1028 - output_c_loss: 0.1336 - val_loss: 1.3496 - val_output_react_loss: 0.1507 - val_output_bg_ph_loss: 0.1806 - val_output_ph_loss: 0.1946 - val_output_mg_c_loss: 0.1614 - val_output_c_loss: 0.1697 Epoch 98/120 30/30 - 4s - loss: 0.9124 - output_react_loss: 0.1021 - output_bg_ph_loss: 0.1139 - output_ph_loss: 0.1412 - output_mg_c_loss: 0.1027 - output_c_loss: 0.1338 - val_loss: 1.3495 - val_output_react_loss: 0.1507 - val_output_bg_ph_loss: 0.1805 - val_output_ph_loss: 0.1946 - val_output_mg_c_loss: 0.1614 - val_output_c_loss: 0.1697 Epoch 99/120 30/30 - 4s - loss: 0.9112 - output_react_loss: 0.1018 - output_bg_ph_loss: 
0.1140 - output_ph_loss: 0.1411 - output_mg_c_loss: 0.1025 - output_c_loss: 0.1334 - val_loss: 1.3486 - val_output_react_loss: 0.1506 - val_output_bg_ph_loss: 0.1804 - val_output_ph_loss: 0.1945 - val_output_mg_c_loss: 0.1613 - val_output_c_loss: 0.1696 Epoch 100/120 30/30 - 4s - loss: 0.9126 - output_react_loss: 0.1022 - output_bg_ph_loss: 0.1142 - output_ph_loss: 0.1408 - output_mg_c_loss: 0.1027 - output_c_loss: 0.1336 - val_loss: 1.3490 - val_output_react_loss: 0.1506 - val_output_bg_ph_loss: 0.1805 - val_output_ph_loss: 0.1945 - val_output_mg_c_loss: 0.1613 - val_output_c_loss: 0.1696 Epoch 101/120 Restoring model weights from the end of the best epoch. Epoch 00101: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06. 30/30 - 4s - loss: 0.9104 - output_react_loss: 0.1018 - output_bg_ph_loss: 0.1138 - output_ph_loss: 0.1407 - output_mg_c_loss: 0.1026 - output_c_loss: 0.1335 - val_loss: 1.3486 - val_output_react_loss: 0.1506 - val_output_bg_ph_loss: 0.1804 - val_output_ph_loss: 0.1945 - val_output_mg_c_loss: 0.1613 - val_output_c_loss: 0.1696 Epoch 00101: early stopping
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
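The training log above shows the learning rate dropping to 1e-05 and then 1e-06 before early stopping restores the best weights — the usual `ReduceLROnPlateau`/`EarlyStopping` pairing. A minimal stdlib sketch of the plateau rule; the `patience` and `factor` values here are assumptions, since the notebook's actual callback settings are not shown:

```python
def plateau_schedule(val_losses, lr=1e-4, patience=3, factor=0.1, min_lr=1e-7):
    """Return the learning-rate trajectory implied by a ReduceLROnPlateau-style rule.

    patience/factor are illustrative values, not the notebook's real settings.
    """
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        if loss < best:
            best = loss          # improvement: reset the patience counter
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # plateau long enough: scale the LR down
                lr = max(lr * factor, min_lr)
                wait = 0
        lrs.append(lr)
    return lrs
```

For example, `plateau_schedule([1.40, 1.36, 1.35, 1.35, 1.35, 1.35])` keeps the initial rate while the loss improves and applies one reduction after the three-epoch stall.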
Model loss graph
for fold, history in enumerate(history_list): print(f'\nFOLD: {fold+1}') print(f"Train {np.array(history['loss']).min():.5f} Validation {np.array(history['val_loss']).min():.5f}") plot_metrics_agg(history_list)
FOLD: 1 Train 1.05189 Validation 1.37240 FOLD: 2 Train 1.07609 Validation 1.38107 FOLD: 3 Train 1.05777 Validation 1.33757 FOLD: 4 Train 1.02478 Validation 1.38429 FOLD: 5 Train 0.91044 Validation 1.34764
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
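The per-fold minima printed above are usually condensed into a single mean ± standard deviation CV score; using the validation numbers copied by hand from the output:

```python
from statistics import mean, stdev

# per-fold best validation losses, taken from the printed output above
val_losses = [1.37240, 1.38107, 1.33757, 1.38429, 1.34764]

cv_mean = mean(val_losses)
cv_std = stdev(val_losses)  # sample standard deviation across folds
print(f"CV val_loss: {cv_mean:.5f} +/- {cv_std:.5f}")
```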
Post-processing
# Assign preds to OOF set for idx, col in enumerate(pred_cols): val = oof_preds[:, :, idx] oof = oof.assign(**{f'{col}_pred': list(val)}) oof.to_csv('oof.csv', index=False) oof_preds_dict = {} for idx, col in enumerate(pred_cols): oof_preds_dict[col] = oof_preds[:, :, idx] # Assign values to test set preds_ls = [] for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]: for i, uid in enumerate(df.id): single_pred = preds[i] single_df = pd.DataFrame(single_pred, columns=pred_cols) single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])] preds_ls.append(single_df) preds_df = pd.concat(preds_ls)
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Model evaluation
y_true_dict = get_targets_dict(train, pred_cols, train.index) y_true = np.array([y_true_dict[col] for col in pred_cols]).transpose((1, 2, 0, 3)).reshape(oof_preds.shape) display(evaluate_model(train, y_true, oof_preds, pred_cols))
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Visualize test predictions
submission = pd.read_csv(database_base_path + 'sample_submission.csv') submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Test set predictions
display(submission.head(10)) display(submission.describe()) submission.to_csv('submission.csv', index=False)
_____no_output_____
MIT
Model backlog/Models/41-openvaccine-weighted-samples.ipynb
dimitreOliveira/COVID-19-Vaccine-Degradation-Prediction
Inspecting trained model
seed = 600 system = 'chaotic-rnn' if os.path.exists('./synth_data/%s_%s'%(system, seed)): data_dict = read_data('./synth_data/%s_%s'%(system, seed)) else: from synthetic_data import generate_chaotic_rnn_data param_dict = yaml.load(open('./synth_data/%s_params.yaml'%system, 'r'), Loader=yaml.FullLoader) data_dict = generate_chaotic_rnn_data(Ncells=param_dict['cells'], Ninits=param_dict['inits'], Ntrial=param_dict['trials'], Nsteps=param_dict['steps'], # Nstepsinbin=param_dict['steps_in_bin'], dt_rnn=param_dict['dt_sys'], dt_spike = param_dict['dt_spike'], maxRate= param_dict['rate_scale'], save=False, seed=seed) # For spike data train_data = torch.Tensor(data_dict['train_spikes']).to(device) valid_data = torch.Tensor(data_dict['valid_spikes']).to(device) train_truth = {'rates' : data_dict['train_rates']} valid_truth = {'rates' : data_dict['valid_rates']} train_ds = torch.utils.data.TensorDataset(train_data) valid_ds = torch.utils.data.TensorDataset(valid_data) num_trials, num_steps, num_cells = train_data.shape; print(train_data.shape); print('Number of datapoints = %s'%train_data.numel()) hyperparams = load_parameters('parameters/parameters_%s_spikes.yaml'%system) hyperparams['run_name'] = 'poisson_%s%i_f20_g1200_eg1128_u1_c1128_ec1128_191125_localtest'%(system, seed) model = LFADS(inputs_dim = num_cells, T = num_steps, dt = float(data_dict['dt']), device=device, model_hyperparams=hyperparams).to(device) # model.load_checkpoint('best') # model.epochs model.gru_generator.fc_h_ru.weight.std() 1/(np.sqrt(400)) total_params = 0 for ix, (name, param) in enumerate(model.named_parameters()): print(ix, name, list(param.shape), param.numel(), param.requires_grad) total_params += param.numel() print('Total parameters: %i'%total_params) model.fit(train_ds, valid_ds, train_truth=train_truth, valid_truth=valid_truth, max_epochs=2000, batch_size=128, use_tensorboard=True, health_check=True) model.load_checkpoint('best') model.plot_summary(valid_data, valid_truth) results_dict = 
model.plot_recon_rsquared(valid_data, valid_truth, train_data, train_truth)
_____no_output_____
MIT
deprecated/.ipynb_checkpoints/lfads_demo-checkpoint.ipynb
lyprince/hierarchical_lfads
Analyze a large dataset with Google BigQuery**Learning Objectives**1. Access an ecommerce dataset1. Look at the dataset metadata1. Remove duplicate entries1. Write and execute queries Introduction BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.We have a publicly available ecommerce dataset that has millions of Google Analytics records for the Google Merchandise Store loaded into a table in BigQuery. In this lab, you use a copy of that dataset. Sample scenarios are provided, in which you look at the data and at ways to remove duplicate information. The lab then steps you through further analysis of the data.BigQuery can be accessed by its own browser-based interface, Google Data Studio, and many third party tools. In this lab you will use BigQuery directly in notebook cells using the IPython magic command `%%bigquery`.The steps you will follow in the lab are analogous to what you would do to prepare data for use in advanced ML operations. You will follow the notebook to experiment with the BigQuery queries provided to analyze the data. Set up the notebook environment__VERY IMPORTANT__: In the cell below you must replace the text `<YOUR PROJECT>` with your GCP project id.
import os import pandas as pd PROJECT = "<YOUR PROJECT>" #TODO Replace with your project id os.environ["PROJECT"] = PROJECT pd.options.display.max_columns = 50
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
Explore eCommerce data and identify duplicate recordsScenario: You were provided with Google Analytics logs for an eCommerce website in a BigQuery dataset. The data analyst team created a new BigQuery table of all the raw eCommerce visitor session data. This data tracks user interactions, location, device types, time on page, and details of any transaction. Your ultimate plan is to use this data in an ML capacity to create a model that delivers highly accurate predictions of user behavior to support tailored marketing campaigns.First, a few notes on BigQuery within a python notebook context. Any cell that starts with `%%bigquery` (the BigQuery Magic) will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook.BigQuery supports [two flavors](https://cloud.google.com/bigquery/docs/reference/standard-sql/migrating-from-legacy-sqlcomparison_of_legacy_and_standard_sql) of SQL syntax: legacy SQL and standard SQL. Standard SQL is preferred because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such, we start the query with `#standardSQL`.Our first query accesses the BigQuery INFORMATION_SCHEMA, which stores all object-related metadata. In this case we want to see metadata details for the "all_sessions_raw" table. Tip: To run the current cell you can click the cell and hit **shift enter** TODO 2
%%bigquery --project $PROJECT #standardsql SELECT * EXCEPT (table_catalog, table_schema, is_generated, generation_expression, is_stored, is_updatable, is_hidden, is_system_defined, is_partitioning_column, clustering_ordinal_position) FROM `data-to-insights.ecommerce.INFORMATION_SCHEMA.COLUMNS` WHERE table_name="all_sessions_raw"
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
Next examine how many rows are in the table. TODO 1
%%bigquery --project $PROJECT #standardSQL SELECT count(*) FROM `data-to-insights.ecommerce.all_sessions_raw`
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
Now take a quick look at a few rows of data in the table.
%%bigquery --project $PROJECT #standardSQL SELECT * FROM `data-to-insights.ecommerce.all_sessions_raw` LIMIT 7
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
Identify duplicate rowsSeeing a sample amount of data may give you greater intuition for what is included in the dataset. But since the table is quite large, a preview is not likely to render meaningful results. As you scan and scroll through the sample rows you see there is no singular field that uniquely identifies a row, so you need advanced logic to identify duplicate rows.The query below uses the SQL GROUP BY function on every field and counts (COUNT) where there are rows that have the same values across every field.If every field is unique, the COUNT will return 1 as there are no other groupings of rows with the exact same value for all fields.If there is a row with the same values for all fields, they will be grouped together and the COUNT will be greater than 1. The last part of the query is an aggregation filter using HAVING to only show the results that have a COUNT of duplicates greater than 1.Run the following query to find duplicate records across all columns. TODO 3
%%bigquery --project $PROJECT #standardSQL SELECT count(*) AS num_duplicate_rows, * FROM `data-to-insights.ecommerce.all_sessions_raw` GROUP BY fullvisitorid, channelgrouping, time, country, city, totaltransactionrevenue, transactions, timeonsite, pageviews, sessionqualitydim, date, visitid, type, productrefundamount, productquantity, productprice, productrevenue, productsku, v2productname, v2productcategory, productvariant, currencycode, itemquantity, itemrevenue, transactionrevenue, transactionid, pagetitle, searchkeyword, pagepathlevel1, ecommerceaction_type, ecommerceaction_step, ecommerceaction_option HAVING num_duplicate_rows > 1;
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
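The GROUP BY-every-column / HAVING count > 1 idiom used above is simply row-level duplicate counting. Stripped of SQL, the same check on a list of records looks like this (toy rows, not the actual dataset):

```python
from collections import Counter

# each tuple stands in for one row of the table (made-up visitor data)
rows = [
    ("visitor1", "Organic Search", "US"),
    ("visitor2", "Direct", "DE"),
    ("visitor1", "Organic Search", "US"),  # exact duplicate of the first row
]

# grouping by every field: identical tuples collapse into one Counter key
counts = Counter(rows)
duplicates = {row: n for row, n in counts.items() if n > 1}  # the HAVING n > 1 step
print(duplicates)
```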
As you can see there are quite a few "duplicate" records (615) when analyzed with these parameters.In your own datasets, even if you have a unique key, it is still beneficial to confirm the uniqueness of the rows with COUNT, GROUP BY, and HAVING before you begin your analysis. Analyze the new all_sessions tableIn this section you use a deduplicated table called all_sessions.Scenario: Your data analyst team has provided you with a relevant query, and your schema experts have identified the key fields that must be unique for each record per your schema.Run the query to confirm that no duplicates exist, this time against the "all_sessions" table:
%%bigquery --project $PROJECT #standardSQL SELECT fullvisitorid, # the unique visitor ID visitid, # a visitor can have multiple visits date, # session date stored as string YYYYMMDD time, # time of the individual site hit (can be 0 or more) v2productname, # not unique since a product can have variants like Color productsku, # unique for each product type, # visit and/or event trigger ecommerceaction_type, # maps to ‘add to cart', ‘completed checkout' ecommerceaction_step, ecommerceaction_option, transactionrevenue, # revenue of the order transactionid, # unique identifier for revenue bearing transaction count(*) AS row_count FROM `data-to-insights.ecommerce.all_sessions` GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 HAVING row_count > 1 # find duplicates
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
The query returns zero records indicating no duplicates exist. Write basic SQL against the eCommerce data (TODO 4)In this section, you query for insights on the ecommerce dataset.A good first path of analysis is to find the total unique visitors. The query below determines the total views by counting product_views and the number of unique visitors by counting fullVisitorID.
%%bigquery --project $PROJECT #standardSQL SELECT count(*) AS product_views, count(DISTINCT fullvisitorid) AS unique_visitors FROM `data-to-insights.ecommerce.all_sessions`;
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
The next query shows total unique visitors(fullVisitorID) by the referring site (channelGrouping):
%%bigquery --project $PROJECT #standardSQL SELECT count(DISTINCT fullvisitorid) AS unique_visitors, channelgrouping FROM `data-to-insights.ecommerce.all_sessions` GROUP BY 2 ORDER BY 2 DESC;
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
To find deeper insights in the data, the next query lists the five products with the most views (product_views) from unique visitors. The query counts number of times a product (v2ProductName) was viewed (product_views), puts the list in descending order, and lists the top 5 entries:
%%bigquery --project $PROJECT #standardSQL SELECT count(*) AS product_views, ( v2productname ) AS ProductName FROM `data-to-insights.ecommerce.all_sessions` WHERE type = 'PAGE' GROUP BY v2productname ORDER BY product_views DESC LIMIT 5;
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
Now expand your previous query to include the total number of distinct products ordered and the total number of total units ordered (productQuantity):
%%bigquery --project $PROJECT #standardSQL SELECT count(*) AS product_views, count(productquantity) AS orders, sum(productquantity) AS quantity_product_ordered, v2productname FROM `data-to-insights.ecommerce.all_sessions` WHERE type = 'PAGE' GROUP BY v2productname ORDER BY product_views DESC LIMIT 5;
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
Lastly, expand the query to include the average amount of product per order (total number of units ordered/total number of orders, or `SUM(productQuantity)/COUNT(productQuantity)`).
%%bigquery --project $PROJECT #standardSQL SELECT count(*) AS product_views, count(productquantity) AS orders, sum(productquantity) AS quantity_product_ordered, sum(productquantity) / Count(productquantity) AS avg_per_order, v2productname AS productName FROM `data-to-insights.ecommerce.all_sessions` WHERE type = 'PAGE' GROUP BY v2productname ORDER BY product_views DESC LIMIT 5;
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/how_google_does_ml/bigquery/solution/analyze_with_bigquery_solution.ipynb
Glairly/introduction_to_tensorflow
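One subtlety in the query above: `COUNT(productquantity)` counts only non-NULL rows, while `COUNT(*)` counts every row, which is why `orders` can be far smaller than `product_views`. A toy illustration of the three aggregates (the quantity values below are made up):

```python
# productquantity per page view; None plays the role of SQL NULL
quantities = [2, None, 1, 3, None]

product_views = len(quantities)                       # COUNT(*)
orders = sum(1 for q in quantities if q is not None)  # COUNT(productquantity)
units = sum(q for q in quantities if q is not None)   # SUM(productquantity)
avg_per_order = units / orders                        # SUM(...) / COUNT(...)
```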
Download the dataset from https://www.ncdc.noaa.gov/cag/global/time-series Data wrangling Normalize the column names: change `Value` to `Surface Temperature in <continent>` (e.g. `Surface Temperature in Africa`) in each dataframe
Africa1 <- read_csv(file = "Africa.csv") Africa <- Africa1 %>% rename("SurfaceTemperature" = "Value") Africa2 <- Africa %>% rename("Surface Temperature in Africa" = "SurfaceTemperature") North_America1 <- read_csv(file = "North America.csv") North_America <- North_America1 %>% rename("SurfaceTemperature" = "Value") North_America2 <- North_America %>% rename("Surface Temperature in North America" = "SurfaceTemperature") South_America1 <- read_csv(file = "South America.csv") South_America <- South_America1 %>% rename("SurfaceTemperature" = "Value") South_America2 <- South_America %>% rename("Surface Temperature in South America" = "SurfaceTemperature") Europe1 <- read_csv(file = "Europe.csv") Europe <- Europe1 %>% rename("SurfaceTemperature" = "Value") Europe2 <- Europe %>% rename("Surface Temperature in Europe" = "SurfaceTemperature") Asia1 <- read_csv(file = "Asia.csv") Asia <- Asia1 %>% rename("SurfaceTemperature" = "Value") Asia2 <- Asia %>% rename("Surface Temperature in Asia" = "SurfaceTemperature") Oceania1 <- read_csv(file = "Oceania.csv") Oceania <- Oceania1 %>% rename("SurfaceTemperature" = "Value") Oceania2 <- Oceania %>% rename("Surface Temperature in Oceania" = "SurfaceTemperature")
_____no_output_____
Apache-2.0
Climate.ipynb
Deyang-Li/tidy-beauty
Join together!
climate_df <- Africa2 %>% full_join(North_America2) %>% full_join(South_America2) %>% full_join(Europe2) %>% full_join(Asia2) %>% full_join(Oceania2)
Joining, by = "Year" Joining, by = "Year" Joining, by = "Year" Joining, by = "Year" Joining, by = "Year"
Apache-2.0
Climate.ipynb
Deyang-Li/tidy-beauty
Check the types of the columns and the missing values, and output a quick summary of the dataset.
glimpse(climate_df) summary(climate_df) climate_df %>% skim() %>% kable() write_csv(climate_df,"Climate.csv")
_____no_output_____
Apache-2.0
Climate.ipynb
Deyang-Li/tidy-beauty
Data analysis Choose the data from 1950 to 2018 for plotting
Africa$pos = Africa$SurfaceTemperature >= 0 Africa_climate_plot <- Africa %>% filter( Year >= 1950) %>% ggplot(aes( x = Year, y = SurfaceTemperature, fill = pos)) + labs(title = "Time Series of Surface Temperature Anomalies in Africa") + scale_x_continuous(breaks=seq(1950, 2020, 10)) + scale_y_continuous(breaks=seq(-1, 1.8, 0.2)) + geom_bar(stat = "identity",position = "identity", colour = "black", size = 0.05) + xlab("Year") + ylab ("Surface Temperature ( ºC )") + theme_light()+ theme(plot.title = element_text(hjust = 0.5)) + scale_fill_manual(values = c("#CCEEFF", "#FFDDDD"), guide = FALSE) Africa_climate_plot ggsave(Africa_climate_plot,filename = "Africa climate plot.jpg",width = 12,height = 9) North_America$pos = North_America$SurfaceTemperature >= 0 North_America_climate_plot <- North_America %>% filter( Year >= 1950) %>% ggplot(aes( x = Year, y = SurfaceTemperature, fill = pos)) + labs(title = "Time Series of Surface Temperature Anomalies in North America") + scale_x_continuous(breaks=seq(1950, 2020, 10)) + scale_y_continuous(breaks=seq(-1, 1.8, 0.2)) + geom_bar(stat = "identity",position = "identity", colour = "black", size = 0.05) + xlab("Year") + ylab ("Surface Temperature ( ºC )") + theme_light()+ theme(plot.title = element_text(hjust = 0.5)) + scale_fill_manual(values = c("#CCEEFF", "#FFDDDD"), guide = FALSE) North_America_climate_plot ggsave(North_America_climate_plot,filename = "North America climate plot.jpg",width = 12,height = 9) South_America$pos = South_America$SurfaceTemperature >= 0 South_America_climate_plot <- South_America %>% filter( Year >= 1950) %>% ggplot(aes( x = Year, y = SurfaceTemperature, fill = pos)) + labs(title = "Time Series of Surface Temperature Anomalies in South America") + scale_x_continuous(breaks=seq(1950, 2020, 10)) + scale_y_continuous(breaks=seq(-1, 1.8, 0.2)) + geom_bar(stat = "identity",position = "identity", colour = "black", size = 0.05) + xlab("Year") + ylab ("Surface Temperature ( ºC )") + theme_light()+ 
theme(plot.title = element_text(hjust = 0.5)) + scale_fill_manual(values = c("#CCEEFF", "#FFDDDD"), guide = FALSE) South_America_climate_plot ggsave(South_America_climate_plot,filename = "South America climate plot.jpg",width = 12,height = 9) Europe$pos = Europe$SurfaceTemperature >= 0 Europe_climate_plot <- Europe %>% filter( Year >= 1950) %>% ggplot(aes( x = Year, y = SurfaceTemperature, fill = pos)) + labs(title = "Time Series of Surface Temperature Anomalies in Europe") + scale_x_continuous(breaks=seq(1950, 2020, 10)) + scale_y_continuous(breaks=seq(-1, 1.8, 0.2)) + geom_bar(stat = "identity",position = "identity", colour = "black", size = 0.05) + xlab("Year") + ylab ("Surface Temperature ( ºC )") + theme_light()+ theme(plot.title = element_text(hjust = 0.5)) + scale_fill_manual(values = c("#CCEEFF", "#FFDDDD"), guide = FALSE) Europe_climate_plot ggsave(Europe_climate_plot,filename = "Europe climate plot.jpg",width = 12,height = 9) Asia$pos = Asia$SurfaceTemperature >= 0 Asia_climate_plot <- Asia %>% filter( Year >= 1950) %>% ggplot(aes( x = Year, y = SurfaceTemperature, fill = pos)) + labs(title = "Time Series of Surface Temperature Anomalies in Asia") + scale_x_continuous(breaks=seq(1950, 2020, 10)) + scale_y_continuous(breaks=seq(-1, 1.8, 0.2)) + geom_bar(stat = "identity",position = "identity", colour = "black", size = 0.05) + xlab("Year") + ylab ("Surface Temperature ( ºC )") + theme_light()+ theme(plot.title = element_text(hjust = 0.5)) + scale_fill_manual(values = c("#CCEEFF", "#FFDDDD"), guide = FALSE) Asia_climate_plot ggsave(Asia_climate_plot,filename = "Asia climate plot.jpg",width = 12,height = 9) Oceania$pos = Oceania$SurfaceTemperature >= 0 Oceania_climate_plot <- Oceania %>% filter( Year >= 1950) %>% ggplot(aes( x = Year, y = SurfaceTemperature, fill = pos)) + labs(title = "Time Series of Surface Temperature Anomalies in Oceania") + scale_x_continuous(breaks=seq(1950, 2020, 10)) + scale_y_continuous(breaks=seq(-1, 1.8, 0.2)) + geom_bar(stat = 
"identity",position = "identity", colour = "black", size = 0.05) + xlab("Year") + ylab ("Surface Temperature ( ºC )") + theme_light()+ theme(plot.title = element_text(hjust = 0.5)) + scale_fill_manual(values = c("#CCEEFF", "#FFDDDD"), guide = FALSE) Oceania_climate_plot ggsave(Oceania_climate_plot,filename = "Oceania climate plot.jpg",width = 12,height = 9)
_____no_output_____
Apache-2.0
Climate.ipynb
Deyang-Li/tidy-beauty
Put all plots together
library(ggpubr) general_plot <- ggarrange(Africa_climate_plot, Asia_climate_plot, Europe_climate_plot, South_America_climate_plot, North_America_climate_plot, Oceania_climate_plot, ncol = 2, nrow = 3) general_plot ggsave(general_plot,filename = "Climate general plot.jpg",width = 12,height = 9)
_____no_output_____
Apache-2.0
Climate.ipynb
Deyang-Li/tidy-beauty
For some reason the mixed layer depth coordinate indices are displaced by +1 in relation to the ECCO data stored on Pangeo. The coordinates need to be matched for future calculations.
mxldepth.coords['i'] = coords['i'] mxldepth.coords['j'] = coords['j']
_____no_output_____
BSD-3-Clause
ecco_LPsstvarbudget_load.ipynb
cpatrizio88/pangeo_binder_example
Calculate climatological mean mixed layer depth. We will be using this later to mask grid points outside of the mixed layer.
mxldepth_clim=mxldepth.mean(dim='time').load() #mxldepth_clim=mxldepth.mean(dim='time').persist()
_____no_output_____
BSD-3-Clause
ecco_LPsstvarbudget_load.ipynb
cpatrizio88/pangeo_binder_example
Make a mask of points outside the ocean mixed layer:
mxlpoints = np.abs(coords['Z']) <= mxldepth_clim # Flag for low-pass filtering lowpass=True # Filter requirements order = 5 fs = 1 # sample rate, (cycles per month) Tn = 12*3. cutoff = 1/Tn # desired cutoff frequency of the filter (cycles per month) # Face numbers to analyze # 0: Southern Ocean (Atlantic) # 1: South Atlantic Ocean / Africa # 2: East North Atlantic / Europe # 3: Southern Ocean (Indian) # 4: Indian Ocean # 5: Asia # 6: Arctic # 7: North Pacific (central) # 8: West South Pacific # 9: Southern Ocean (West Pacific) # 10: North America / West North Atlantic # 11: East South Pacific / South America # 12: Southern Ocean(East Pacific) #facen = [5,7] #Note: longitude bounds can either be 0 < bounds < 360, or -180 < bounds < 180. #The only requirement is that the left longitude bound is less than the right bound #(along date line must use 0 < bounds < 360). #(along prime meridian must use -180 < bounds < 180) # Complete global #facen=[0,1,2,3,4,5,6,7,8,9,10,11,12] #bnds = [0,359.9,-90,90] #facen=[] #bnds = [0,359.9,-90,90] # Global (excluding polar regions) #facen=[1,2,4,5,7,8,10,11] #bnds = [0,359.9,-58,70] #Southern Ocean (Atlantic) #facen=[0] #bnds = [-20,20,-58,-90] #1: South Atlantic Ocean / Africa #facen=[1] #bnds = [-38,30,-58,10] #2: East North Atlantic #facen=[2] #bnds = [-38,30,10,70] #3: Southern Ocean (Indian) #facen=[3] #bnds = [60,143,-58,-90] #4: Indian Ocean #facen=[4] #bnds = [60,143,-58,10] #7: North Pacific (central) #facen=[7] #bnds = [145,230,10,70] #8: West South Pacific #facen=[8] #bnds = [145,230,-58,10] #11: East South Pacific #facen=[11] #bnds = [-128,-38,-58,10] #2, 10: North Atlantic facen=[2,10] bnds = [-80,0,10,70] #5,7,10: North Pacific #facen=[5,7,10] #bnds = [100,270,10,70] #4,5,7,8,10,11: Pacific #facen=[4,5,7,8,10,11] #bnds = [100,300,-70,70] #5,7,8,10,11: Tropical Pacific #facen=[5,7,8,10,11] #bnds = [145,290,-15,15] #5,7: KOE #facen=[5,7] #bnds = [120,180,15,60] rho0 = 1029 #sea-water density (kg/m^3) c_p = 3994 #sea-water 
heat capacity (J/kg/K) coords=coords.isel(face=facen) # Vertical grid spacing drF = coords.drF hFacC = coords.hFacC #rA = coords.rA.isel(face=facen).load() #vol = drF*hFacC*rA.load() c_o = rho0*c_p*drF*hFacC T = ds_snp.T.isel(face=facen) adv_ConvH = ds.adv_ConvH.isel(face=facen) dif_ConvH = ds.dif_ConvH.isel(face=facen) forcH = ds.forcH.isel(face=facen) dt = coords.time_snp[1:].load() dt = dt.rename({'time_snp': 'time'}) # delta t in seconds. Note: divide by 10**9 to convert nanoseconds to seconds dt.values = [float(t)/10**9 for t in np.diff(coords.time_snp)] # time axis of dt should be the same as of the monthly averages dt.time.values = coords.time[1:-1].values lons = coords.XC lats = coords.YC T_anom, T_clim = st.anom(T) C_adv_anom, C_adv_clim = st.anom(adv_ConvH) C_dif_anom, C_dif_clim = st.anom(dif_ConvH) C_forc_anom, C_forc_clim = st.anom(forcH) totalH_anom = C_adv_anom + C_dif_anom + C_forc_anom T_anom = T_anom.chunk({'time':ntchunk-1}) C_adv_anom = C_adv_anom.chunk({'time':ntchunk}) C_dif_anom = C_dif_anom.chunk({'time':ntchunk}) C_forc_anom = C_forc_anom.chunk({'time':ntchunk}) if lowpass: T_anom = T_anom.chunk({'time':288, 'j':10, 'i':10}) C_adv_anom = C_adv_anom.chunk({'time':288, 'j':10, 'i':10}) C_dif_anom = C_dif_anom.chunk({'time':288, 'j':10, 'i':10}) C_forc_anom = C_forc_anom.chunk({'time':288, 'j':10, 'i':10}) T_anom = stats.butter_lowpass_filter_xr(T_anom, cutoff, fs, order) C_adv_anom = stats.butter_lowpass_filter_xr(C_adv_anom, cutoff, fs, order) C_dif_anom = stats.butter_lowpass_filter_xr(C_dif_anom, cutoff, fs, order) C_forc_anom = stats.butter_lowpass_filter_xr(C_forc_anom, cutoff, fs, order) totalH_anom = C_adv_anom + C_dif_anom + C_forc_anom %time T_anom.load() %time C_adv_anom.load() %time C_dif_anom.load() %time C_forc_anom.load() tendH_perMonth = (T_anom.shift(time=-1)-T_anom)[:-1] # Make sure time axis is the same as for the monthly variables tendH_perMonth.time.values = coords.time[1:-1].values # Convert tendency from 1/month to 1/s 
tendH_perSec = tendH_perMonth/dt tendH_perSec = tendH_perSec.transpose('face','time', 'k', 'j', 'i') # Define tendH array with correct dimensions tendH_anom = xr.DataArray(np.nan*np.zeros([len(facen),np.shape(tendH_perSec)[1]+2,50,90,90]), coords={'face': facen, 'time': range(np.shape(tendH_perSec)[1]+2),'k': np.array(range(0,50)), 'j': np.array(range(0,90)),'i': np.array(range(0,90))},dims=['face', 'time','k', 'j','i']) tendH_anom.time.values = coords.time.values tendH_anom tendH_anom.nbytes/1e9 # Add coordinates# tendH_anom['XC'] = lons tendH_anom['YC'] = lats tendH_anom['Z'] = coords.Z # Total tendency (degC/s) tendH_anom.values[:,1:-1,:] = tendH_perSec.values %time tendH_anom.load() #%time tendH.persist() # Convert from degC/s to W/m^2 tendH_anom = c_o*tendH_anom tendH_anom = tendH_anom.transpose('time','face', 'k', 'j', 'i') face=0 k = 0 j = 15 i = 15 plt.figure(figsize=(14,10)) plt.subplot(2, 1, 1) plt.plot(tendH_anom.time, tendH_anom.isel(face=face,k=k,j=j,i=i), lw=4, color='K', marker='.',label='total tendency') plt.plot(C_forc_anom.time, C_forc_anom.isel(face=face,k=k,j=j,i=i), lw=2, color='C0', marker='.',label='forcing') plt.plot(C_adv_anom.time, C_adv_anom.isel(face=face,k=k,j=j,i=i), lw=2, color='C1', marker='.',label='advection') plt.axhline(0,color='k',lw=1) plt.plot(C_dif_anom.time, C_dif_anom.isel(face=face,k=k,j=j,i=i), lw=2, color='C2',label='diffusion') plt.setp(plt.gca(), 'xticklabels',[]) plt.legend(loc='best',frameon=False,fontsize=14) plt.subplot(2, 1, 2) plt.plot(totalH_anom.time, totalH_anom.isel(face=face,k=k,j=j,i=i), lw=4, color='red', marker='.',label='RHS') plt.plot(tendH_anom.time, tendH_anom.isel(face=face,k=k,j=j,i=i), lw=2, color='blue', marker='.',label='LHS') plt.plot(tendH_anom.time, (totalH_anom-tendH_anom).isel(face=face,k=k,j=j,i=i), lw=2, color='k', marker='.',label='RHS - LHS') plt.legend(loc='best',frameon=False,fontsize=14) plt.savefig(fout + 'sstbudget_anom_ts.png') T_var = T_anom.var(dim='time') %time T_var.load() 
#%time T_var.persist() tendH_anom = tendH_anom/c_o #tendH_anom = tendH_anom.transpose('time','face', 'k', 'j', 'i') cov_adv = st.cov(tendH_anom, C_adv_anom) cov_dif = st.cov(tendH_anom, C_dif_anom) cov_forc = st.cov(tendH_anom, C_forc_anom) cov_adv.nbytes/1e9 %time cov_adv.load() %time cov_dif.load() %time cov_forc.load() deltat = dt.mean() deltat.compute() r_1 = st.cor(T_anom, T_anom,lagx=1).compute() r_1 fac = (deltat**2/(2*c_o*(1-r_1))) fac.load() T_var_sum = fac*(cov_adv + cov_dif + cov_forc) %time T_var_sum.load() #%time T_var_sum.persist() mapper = LLCMapper(coords) k=0 mapper(T_var.isel(k=k), bnds=bnds, cmap='cubehelix_r', vmin=0,vmax=1.0) mapper(T_var_sum.isel(k=k), bnds=bnds, cmap='cubehelix_r', vmin=0,vmax=1.0)
_____no_output_____
BSD-3-Clause
ecco_LPsstvarbudget_load.ipynb
cpatrizio88/pangeo_binder_example
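The budget closure evaluated in the cell above corresponds to a lag-1 estimate of the temperature-variance budget. In the notation of the code (`fac = deltat**2 / (2*c_o*(1 - r_1))`, with `r_1` the lag-1 autocorrelation of the temperature anomaly), the reconstruction being tested is:

```latex
\overline{T'^2} \;\approx\; \frac{\Delta t^{2}}{2\, c_o \,(1 - r_1)}
\left[
  \operatorname{cov}\!\left(\frac{\partial T'}{\partial t},\, C'_{\mathrm{adv}}\right)
+ \operatorname{cov}\!\left(\frac{\partial T'}{\partial t},\, C'_{\mathrm{dif}}\right)
+ \operatorname{cov}\!\left(\frac{\partial T'}{\partial t},\, C'_{\mathrm{forc}}\right)
\right]
```

Each bracketed term, multiplied by the common prefactor, is one of the `T_var_adv`, `T_var_dif`, and `T_var_forc` contributions computed in the code.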
The temperature variance budget is clearly balanced! Let's take a look at the contribution due to each term.
T_var_adv = fac*cov_adv T_var_dif = fac*cov_dif T_var_forc = fac*cov_forc vmin=-1.0 vmax=1.0 sstmax=1.6 if lowpass: sstmax=0.5 vmin=-0.5 vmax=0.5
_____no_output_____
BSD-3-Clause
ecco_LPsstvarbudget_load.ipynb
cpatrizio88/pangeo_binder_example
Contributions to temperature variance from advection, diffusion and surface forcing
k=0 mapper(T_var_sum.isel(k=k), bnds=bnds, cmap='cubehelix_r', vmin=0,vmax=sstmax) plt.title(r'temperature variance (K$^2$)') plt.savefig(fout + 'Tvar_sum.png') mapper(T_var_adv.isel(k=k), bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'advective contribution (K$^2$)') plt.savefig(fout + 'Tvar_adv.png') mapper(T_var_dif.isel(k=k), bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'diffusive contribution (K$^2$)') plt.savefig(fout + 'Tvar_dif.png') mapper(T_var_forc.isel(k=k), bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'surface forcing contribution (K$^2$)') plt.savefig(fout + 'Tvar_forc.png')
_____no_output_____
BSD-3-Clause
ecco_LPsstvarbudget_load.ipynb
cpatrizio88/pangeo_binder_example
Contributions to ocean mixed layer temperature variance from advection, diffusion and surface forcing
mxlpoints = mxlpoints.isel(face=facen) delz = drF*hFacC delz=delz.where(mxlpoints) delz_sum = delz.sum(dim='k') mxlpoints weights = delz/delz_sum T_var_mxl = (weights*T_var).where(mxlpoints).sum(dim='k') T_var_adv_mxl = (weights*T_var_adv).where(mxlpoints).sum(dim='k') T_var_dif_mxl = (weights*T_var_dif).where(mxlpoints).sum(dim='k') T_var_forc_mxl = (weights*T_var_forc).where(mxlpoints).sum(dim='k') T_var_sum_mxl = T_var_adv_mxl + T_var_dif_mxl + T_var_forc_mxl #f, axes = plt.subplots(2,2,figsize=(16,12)) #f.tight_layout() mapper(T_var_sum_mxl, bnds=bnds, cmap='cubehelix_r', vmin=0,vmax=sstmax) plt.title(r'temperature variance (K$^2$)') plt.savefig(fout + 'Tmxlvar_sum.png') mapper(T_var_adv_mxl, bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'advective contribution (K$^2$)') plt.savefig(fout + 'Tmxlvar_adv.png') mapper(T_var_dif_mxl, bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'diffusive contribution (K$^2$)') plt.savefig(fout + 'Tmxlvar_dif.png') mapper(T_var_forc_mxl, bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'surface forcing contribution (K$^2$)') plt.savefig(fout + 'Tmxlvar_forc.png') #mapper(T_var_sum_mxl, bnds=bnds, cmap='cubehelix_r', vmin=0,vmax=1.0) #plt.title(r'temperature variance (K$^2$)') #plt.savefig(fout + 'Tmxlvar_sum.png') mapper(T_var_adv_mxl + T_var_dif_mxl, bnds=bnds, cmap='RdBu_r', vmin=vmin,vmax=vmax) plt.title(r'ocean dynamics (advective + diffusive) contribution (K$^2$)') plt.savefig(fout + 'Tmxlvar_ocndyn.png') #mapper(T_var_forc_mxl, bnds=bnds, cmap='RdBu_r', vmin=-1.0,vmax=1.0) #plt.title(r'surface forcing contribution (K$^2$)') #plt.savefig(fout + 'Tmxlvar_forc.png')
_____no_output_____
BSD-3-Clause
ecco_LPsstvarbudget_load.ipynb
cpatrizio88/pangeo_binder_example
speakers = os.listdir('./speaker_spectrograms/')
speaker_pred = dict()
for speaker in speakers:
    spects = np.load('./speaker_spectrograms/' + speaker)
    spects = spects.reshape(spects.shape + (1,))
    pred = model.predict(spects)
    pred = np.argmax(pred, axis=-1)
    pred_labels = classes[pred]
    speaker_pred[speaker.split('.')[0]] = pred_labels

with open('./per_speaker_pred.pkl', 'wb') as handle:
    pickle.dump(speaker_pred, handle, protocol=pickle.HIGHEST_PROTOCOL)
speaker_pred = pickle.load(open('./per_speaker_pred.pkl', 'rb'))
speaker_gt = pickle.load(open('./per_speaker_gt.pkl', 'rb'))

per_speaker = dict()
for speaker in os.listdir('./speaker_spectrograms/'):
    speaker = speaker.split('.')[0]
    pred = np.array(speaker_pred[speaker])
    gt = np.array(speaker_gt[speaker])
    per_label = dict()
    for label in np.unique(gt):
        label_idx = np.where(gt == label)
        acc = np.sum(np.core.defchararray.equal(pred[label_idx], gt[label_idx])) / len(label_idx[0])
        per_label[label] = acc * 100
    per_speaker[speaker] = per_label

list(per_speaker.values())[0]

per_speaker_acc = dict()
for speaker in os.listdir('./speaker_spectrograms/'):
    speaker = speaker.split('.')[0]
    pred = speaker_pred[speaker]
    gt = speaker_gt[speaker]
    acc = np.sum(np.core.defchararray.equal(pred, gt)) / len(pred)
    per_speaker_acc[speaker] = acc * 100

sorted_per_speaker_acc = sorted(per_speaker_acc.items(), key=lambda x: x[1], reverse=True)

class_names = []
class_accs = []
per_class_accuracy_list = np.full((len(classes), len(per_speaker)), np.nan)
for index, item in enumerate(sorted_per_class_acc):
    class_names.append(item[0])
    class_accs.append(item[1])
    for i, speaker in enumerate(list(per_speaker.values())):
        if item[0] in speaker.keys():
            per_class_accuracy_list[index, i] = speaker[item[0]]

boxprops = dict(linestyle='-', linewidth=1.0, color='k')
medianprops = dict(linestyle='-', linewidth=1.0, color='k')
whiskerprops = dict(linestyle='-', linewidth=1.0, color='k')
capprops = dict(linestyle='-', linewidth=1.0, color='k')
plt.boxplot(per_class_accuracy_list.T, patch_artist=True, boxprops=boxprops,
            capprops=capprops, medianprops=medianprops,
            whiskerprops=whiskerprops, whis="range")

class_names

fig, ax = plt.subplots(1, 1, sharex=True, figsize=(5, 3))
bplot = ax.boxplot([100 * , 100 * none_accs, 100 * all_accs],
                   patch_artist=True, boxprops=boxprops, capprops=capprops,
                   medianprops=medianprops, whiskerprops=whiskerprops,
                   whis="range");
_____no_output_____
Apache-2.0
Speaker_predictions.ipynb
aakaashjois/Dense-Recurrent-Net-For-Speech-Command-Classification
Tweepy streamer

Find the top tweeting users:
- Find users who are tweeting a lot.
- Find the top 50 across the world.

Since this is a streaming application, we will use the Python logging module to log. [Further read.](https://www.webcodegeeks.com/python/python-logging-example/)
import logging  # python logging module

# basic format for logging
logFormat = "%(asctime)s - [%(levelname)s] (%(funcName)s:%(lineno)d) %(message)s"

# logs will be stored in tweepytopuser.log
logging.basicConfig(filename='tweepytopuser.log', level=logging.INFO,
                    format=logFormat, datefmt="%Y-%m-%d %H:%M:%S")
_____no_output_____
Apache-2.0
Dalon_4_RTD_MiniPro_Tweepy_Q5.ipynb
intellect82/venkateswarlu_SVAP_Asmt_R3
Authentication and Authorisation

Create an app in Twitter [here](https://apps.twitter.com/). Copy the necessary keys and access tokens, which will be used in our code. Authorization is done using OAuth, an open protocol that allows secure authorization in a simple and standard way from web, mobile and desktop applications. [Further read](https://oauth.net/).

We will use Tweepy, a Python module. Tweepy is open-sourced, hosted on [GitHub](https://github.com/tweepy/tweepy), and enables Python to communicate with the Twitter platform and use its API. Tweepy supports OAuth authentication, which is handled by the tweepy.AuthHandler class.
import tweepy  # importing all the modules required
import socket  # will be used to create sockets
import json    # manipulate json
from httplib import IncompleteRead  # Python 2; in Python 3 this lives in http.client

# Keep these tokens secret, as anyone can have full access to your
# twitter account, using these tokens
consumerKey = "#"
consumerSecret = "#"
accessToken = "#-#"
accessTokenSecret = "#"
_____no_output_____
Apache-2.0
Dalon_4_RTD_MiniPro_Tweepy_Q5.ipynb
intellect82/venkateswarlu_SVAP_Asmt_R3
After this step, we will have full access to the Twitter APIs.
# Performing the authentication and authorization; after this step
# we will have full access to the Twitter APIs
def connectToTwitter():
    """Connect to twitter."""
    try:
        auth = tweepy.OAuthHandler(consumerKey, consumerSecret)
        auth.set_access_token(accessToken, accessTokenSecret)
        api = tweepy.API(auth)
        logging.info("Successfully logged in to twitter.")
        return api, auth
    except Exception as e:
        logging.info("Something went wrong in oauth, please check your tokens.")
        logging.error(e)
_____no_output_____
Apache-2.0
Dalon_4_RTD_MiniPro_Tweepy_Q5.ipynb
intellect82/venkateswarlu_SVAP_Asmt_R3
Streaming with tweepy

The Twitter streaming API is used to download Twitter messages in real time. We use the streaming API instead of the REST API because the REST API pulls data from Twitter, while the streaming API pushes messages to a persistent session. This allows the streaming API to download more data in real time than could be done with the REST API.

In Tweepy, an instance of tweepy.Stream establishes a streaming session and routes messages to a StreamListener instance. The on_data method of a stream listener receives all messages and calls functions according to the message type. But on_data is only a stub, so we need to implement the functionality by subclassing StreamListener. Using the streaming API takes three steps:

1. Create a class inheriting from StreamListener
2. Using that class, create a Stream object
3. Connect to the Twitter API using the Stream
# Tweet listener class which subclasses from tweepy.StreamListener
class TweetListner(tweepy.StreamListener):
    """Twitter stream listener"""

    def __init__(self, csocket):
        self.clientSocket = csocket

    def dataProcessing(self, data):
        """Process the data before sending it to spark streaming."""
        sendData = {}  # data that is sent to spark streamer
        user = data.get("user", {})
        name = user.get("name", "undefined").encode('utf-8')
        sendData["name"] = name
        # append a newline character, so that spark recognizes it
        self.clientSocket.send(json.dumps(sendData) + u"\n")
        logging.debug(json.dumps(sendData))

    def on_data(self, raw_data):
        """
        Called when raw data is received from connection.
        Return False to stop the stream and close the connection.
        """
        try:
            data = json.loads(raw_data)
            self.dataProcessing(data)
            return True
        except Exception as e:
            logging.error("An unhandled exception has occured, check your data processing")
            logging.error(e)
            raise e

    def on_error(self, status_code):
        """Called when a non-200 status code is returned"""
        logging.error("A non-200 status code is returned")
        return True


# Creating a proxy socket
def createProxySocket(host, port):
    """Returns a socket which can be used to connect to spark."""
    try:
        s = socket.socket()   # initialize socket instance
        s.bind((host, port))  # bind to the given host and port
        s.listen(5)           # enable the server to accept connections
        logging.info("Listening on the port {}".format(port))
        cSocket, address = s.accept()  # waiting for a connection
        logging.info("Received Request from: {}".format(address))
        return cSocket
    except socket.error as e:
        if e.errno == socket.errno.EADDRINUSE:  # Address in use
            logging.error("The given host:port {}:{} is already in use"
                          .format(host, port))
            logging.info("Trying on port: {}".format(port + 1))
            return createProxySocket(host, port + 1)
_____no_output_____
Apache-2.0
Dalon_4_RTD_MiniPro_Tweepy_Q5.ipynb
intellect82/venkateswarlu_SVAP_Asmt_R3
Drawbacks of the Twitter streaming API

The major drawback of the streaming API is that it provides only a sample of the tweets that are occurring. The actual percentage of total tweets users receive varies heavily based on the criteria requested and the current traffic. Studies have estimated that users can expect to receive anywhere from 1% to over 40% of tweets in near real time. The reason you do not receive all of the tweets is simply that Twitter doesn't have the infrastructure to support it, and they don't want to; hence, the Twitter Firehose. [Ref](https://brightplanet.com/2013/06/twitter-firehose-vs-twitter-api-whats-the-difference-and-why-should-you-care/)

So we will use a hack: get the top trending topics and use those to filter the data.
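As a sketch of that hack: a trends request in the classic Twitter API returns a list with a single dict whose `"trends"` key holds the trending topics. The helper below only parses such a payload; the response shape and the `extract_trend_names` name are assumptions for illustration, and the actual Tweepy call to fetch trends depends on your Tweepy version.

```python
def extract_trend_names(trends_response):
    """Pull trend names out of a classic trends-API-style payload.

    Assumes a list of dicts, each with a "trends" key holding
    dicts that have a "name" field.
    """
    names = []
    for block in trends_response:
        for trend in block.get("trends", []):
            names.append(trend["name"])
    return names


# Toy payload mimicking the classic trends response (not real data)
sample = [{"trends": [{"name": "#python"}, {"name": "#spark"}]}]
print(extract_trend_names(sample))  # ['#python', '#spark']
```

The resulting list could then be passed to the stream filter, e.g. `tweetStream.filter(track=names)`, instead of a hard-coded keyword.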
if __name__ == "__main__":
    try:
        api, auth = connectToTwitter()  # connecting to twitter
        # Global information is available by using 1 as the WOEID
        # woeid = getWOEIDForTrendsAvailable(api, "Worldwide")  # get the woeid of the worldwide
        host = "localhost"
        port = 8600
        cSocket = createProxySocket(host, port)  # Creating a socket
        while True:
            try:
                # Connect/reconnect the stream
                tweetStream = tweepy.Stream(auth, TweetListner(cSocket))  # Stream the twitter data
                # DON'T run this approach async or you'll just create a ton of streams!
                # track expects a list of phrases, not a bare string
                tweetStream.filter(track=["iphone"])
            except IncompleteRead:
                # Oh well, reconnect and keep trucking
                continue
            except KeyboardInterrupt:
                # Or however you want to exit this loop
                tweetStream.disconnect()
                break
            except Exception as e:
                logging.error("Unhandled exception has occured")
                logging.error(e)
                continue
    except KeyboardInterrupt:  # Keyboard interrupt called
        logging.error("KeyboardInterrupt was hit")
    except Exception as e:
        logging.error("Unhandled exception has occured")
        logging.error(e)
_____no_output_____
Apache-2.0
Dalon_4_RTD_MiniPro_Tweepy_Q5.ipynb
intellect82/venkateswarlu_SVAP_Asmt_R3
Analyzing IMDB data with Keras

You have already learned how a neural network is built. Now it's your turn! In this challenge, you will build a neural network that predicts whether a review carries a positive or a negative sentiment.
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(42)
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Step 1. Load the data
# IMDB is already a dataset bundled with Keras, so this part is easy!
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)

print(x_train.shape)
print(x_test.shape)
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Step 2. Understand the data

This time the data is already preprocessed, which makes it much easier to work with. Every word has been transformed into a number, and each review is a vector containing the words it includes. The output is the sentiment, where 1 is a positive sentiment and 0 a negative one.
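To build intuition for those integer vectors, here is a toy decoder. The miniature `word_index` is made up for illustration (the real one comes from `imdb.get_word_index()`), and the offset of 3, reserving 0/1/2 for padding/start/unknown, follows Keras's default `index_from=3` convention for this dataset:

```python
# Hypothetical miniature word index (illustrative only)
word_index = {"the": 1, "movie": 2, "great": 3}

# Keras offsets word indices by 3 to reserve 0/1/2 for pad/start/unknown
index_to_word = {i + 3: w for w, i in word_index.items()}
index_to_word[0] = "<pad>"
index_to_word[1] = "<start>"
index_to_word[2] = "<unk>"

def decode_review(encoded):
    """Map a sequence of integer word indices back to a readable string."""
    return " ".join(index_to_word.get(i, "<unk>") for i in encoded)

print(decode_review([1, 4, 5, 6]))  # <start> the movie great
```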
print(x_train[0])
print(y_train[0])
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Step 3. Prepare the data for the neural network

One-hot encoding

We have a vector of numbers, but we want to turn it into many vectors whose entries are 0 or 1. For example, if the preprocessed vector contains the number 14, then the processed vector will have a 1 at entry 14. We will do the same for the output. We are working with 50k examples, so this can take a few seconds.
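The idea can be sketched in plain NumPy before reaching for the Keras tokenizer: each review becomes a fixed-length vector with a 1 at every word index it contains (a multi-hot vector). This is a minimal illustration of the transformation, not the tokenizer itself:

```python
import numpy as np

def multi_hot(sequences, dimension=1000):
    # One row per review; set a 1 at each word index present in the review
    result = np.zeros((len(sequences), dimension))
    for row, seq in enumerate(sequences):
        result[row, seq] = 1.0
    return result

encoded = multi_hot([[14, 2, 7]], dimension=20)
print(encoded[0])  # zeros everywhere except positions 2, 7 and 14
```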
# Multi-hot encoding the input reviews into vectors of length 1000
tokenizer = Tokenizer(num_words=1000)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train[0])

# One-hot encoding the output labels
num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
print(y_test.shape)
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Step 4. Build the model architecture

Build a sequential model. Feel free to explore and experiment.
## TODO: Build a sequential model

## TODO: Compile the model with an optimizer and a loss function
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Step 5. Train the model
## TODO: Run the model. Experiment with different batch sizes and numbers of epochs.
# Use verbose=2 to watch the model's progress
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Step 6. Evaluate the model

Do you think you can get above 80%? How about above 85%?
score = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: ", score[1])
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
SOLUTIONS

Don't look at these before trying it yourself. Have you already tried on your own? Try it first.
## TODO: Build a sequential model
model = Sequential()
model.add(Dense(512, activation='relu', input_dim=1000))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()

## TODO: Compile the model with an optimizer and a loss function
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

## TODO: Run the model. Experiment with different batch sizes and numbers of epochs.
# Use verbose=2 to watch the model's progress
model.fit(x_train, y_train,
          batch_size=32,
          epochs=10,
          validation_data=(x_test, y_test),
          verbose=2)
_____no_output_____
MIT
2.IMDB.ipynb
Krax7/master-data-ai
Learn with us: www.zerotodeeplearning.com

Copyright © 2021: Zero to Deep Learning ® Catalit LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
notebooks/Pre-trained_Models.ipynb
zuhairah87/ztdl-masterclasses
Pre-trained Models
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# sports_images_path = tf.keras.utils.get_file(
#     'sports_images',
#     'https://archive.org/download/ztdl_sports_images/sports_images.tgz',
#     untar=True)

![[ ! -f sports_images.tar.gz ]] && gsutil cp gs://ztdl-datasets/sports_images.tar.gz .
![[ ! -d sports_images ]] && echo "Extracting images..." && tar zxf sports_images.tar.gz

sports_images_path = './sports_images'
train_path = os.path.join(sports_images_path, 'train')
test_path = os.path.join(sports_images_path, 'test')

batch_size = 16
img_size = 224

train_datagen = ImageDataGenerator() \
    .flow_from_directory(train_path,
                         target_size=(img_size, img_size),
                         batch_size=batch_size,
                         class_mode='sparse')

try:
    assert train_datagen.samples == 11414
except AssertionError:
    raise Exception("Found fewer images than expected. Please remove the files and download again.")

classes_dict = train_datagen.class_indices
classes = list(classes_dict.keys())
classes

batch, labels = train_datagen.next()
batch.shape
labels.shape

plt.figure(figsize=(10, 10))
for i in range(len(batch)):
    plt.subplot(4, 4, i + 1)
    plt.imshow(batch[i].astype('int'))
    plt.title(classes[int(labels[i])])
    plt.axis('off')
plt.tight_layout()
_____no_output_____
Apache-2.0
notebooks/Pre-trained_Models.ipynb
zuhairah87/ztdl-masterclasses
Pre-trained model

Let's use a ResNet50 model to classify the images without any training.
from PIL import Image
from io import BytesIO
from IPython.display import HTML
import base64
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input as preprocess_input_resnet50
from tensorflow.keras.applications.resnet50 import decode_predictions as decode_predictions_resnet50

model = ResNet50(weights='imagenet')

batch_preprocessed = preprocess_input_resnet50(batch.copy())
predictions = model.predict(batch_preprocessed)
decoded_top_3 = decode_predictions_resnet50(predictions, top=3)


def image_formatter(a):
    im = Image.fromarray(a)
    im.thumbnail((28, 28), Image.LANCZOS)
    with BytesIO() as buffer:
        im.save(buffer, 'jpeg')
        im_base64 = base64.b64encode(buffer.getvalue()).decode()
    return f'<img src="data:image/jpeg;base64,{im_base64}">'


def display_batch(batch, decoded_top_3):
    res = []
    for i, top3 in enumerate(decoded_top_3):
        im = image_formatter(batch[i].astype('uint8'))
        cl = classes[int(labels[i])]
        line = [im, cl]
        for item in top3:
            line = line + list(item[1:])
        res.append(line)
    res_df = pd.DataFrame(res, columns=['image', 'ground_truth',
                                        'top_1', 'prob_1',
                                        'top_2', 'prob_2',
                                        'top_3', 'prob_3'])
    return res_df.style.bar(color='lightgreen', vmin=0, vmax=1)


display_batch(batch, decoded_top_3)
_____no_output_____
Apache-2.0
notebooks/Pre-trained_Models.ipynb
zuhairah87/ztdl-masterclasses
Stochastic examples

This example is designed to show how to use the stochastic optimization algorithms for discrete and semicontinuous measures from the POT library.
# Author: Kilian Fatras <[email protected]>
#
# License: MIT License

import matplotlib.pylab as pl
import numpy as np
import ot
import ot.plot
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
COMPUTE TRANSPORTATION MATRIX FOR SEMI-DUAL PROBLEM
print("------------SEMI-DUAL PROBLEM------------")
------------SEMI-DUAL PROBLEM------------
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
DISCRETE CASE

Sample two discrete measures for the discrete case. Define two discrete measures a and b, the points where the source and the target measures are defined, and finally the cost matrix c.
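For reference, `ot.dist` with its default metric returns the squared Euclidean cost matrix (as far as I know; check the POT docs for your version). A plain-NumPy sketch of that computation, independent of POT:

```python
import numpy as np

def pairwise_sq_dists(X, Y):
    # Cost matrix M[i, j] = ||X[i] - Y[j]||^2 via broadcasting
    diff = X[:, None, :] - Y[None, :, :]
    return (diff ** 2).sum(axis=-1)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 1.0]])
print(pairwise_sq_dists(X, Y))  # [[1.] [2.]]
```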
n_source = 7
n_target = 4
reg = 1
numItermax = 1000

a = ot.utils.unif(n_source)
b = ot.utils.unif(n_target)

rng = np.random.RandomState(0)
X_source = rng.randn(n_source, 2)
Y_target = rng.randn(n_target, 2)
M = ot.dist(X_source, Y_target)
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Call the "SAG" method to find the transportation matrix in the discrete case. Define the method "SAG", call ot.solve_semi_dual_entropic and plot the results.
method = "SAG"
sag_pi = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method,
                                                numItermax)
print(sag_pi)
[[2.55553509e-02 9.96395660e-02 1.76579142e-02 4.31178196e-06] [1.21640234e-01 1.25357448e-02 1.30225078e-03 7.37891338e-03] [3.56123975e-03 7.61451746e-02 6.31505947e-02 1.33831456e-07] [2.61515202e-02 3.34246014e-02 8.28734709e-02 4.07550428e-04] [9.85500870e-03 7.52288517e-04 1.08262628e-02 1.21423583e-01] [2.16904253e-02 9.03825797e-04 1.87178503e-03 1.18391107e-01] [4.15462212e-02 2.65987989e-02 7.23177216e-02 2.39440107e-03]]
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
SEMICONTINUOUS CASE

Sample one general measure a and one discrete measure b for the semicontinuous case. Define one general measure a, one discrete measure b, the points where the source and the target measures are defined, and finally the cost matrix c.
n_source = 7
n_target = 4
reg = 1
numItermax = 1000
log = True

a = ot.utils.unif(n_source)
b = ot.utils.unif(n_target)

rng = np.random.RandomState(0)
X_source = rng.randn(n_source, 2)
Y_target = rng.randn(n_target, 2)
M = ot.dist(X_source, Y_target)
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Call the "ASGD" method to find the transportation matrix in the semicontinuous case. Define the method "ASGD", call ot.solve_semi_dual_entropic and plot the results.
method = "ASGD"
asgd_pi, log_asgd = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method,
                                                           numItermax, log=log)
print(log_asgd['alpha'], log_asgd['beta'])
print(asgd_pi)
[3.75309361 7.63288278 3.76418767 2.53747778 1.70389504 3.53981297 2.67663944] [-2.49164966 -2.25281897 -0.77666675 5.52113539] [[2.19699465e-02 1.03185982e-01 1.76983379e-02 2.87611188e-06] [1.20688044e-01 1.49823131e-02 1.50635578e-03 5.68043045e-03] [3.01194583e-03 7.75764779e-02 6.22686313e-02 8.78225379e-08] [2.28707628e-02 3.52120795e-02 8.44977549e-02 2.76545693e-04] [1.19721129e-02 1.10087991e-03 1.53333937e-02 1.14450756e-01] [2.65247890e-02 1.33140544e-03 2.66861405e-03 1.12332334e-01] [3.71512413e-02 2.86513804e-02 7.53932500e-02 1.66127118e-03]]
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Compare the results with the Sinkhorn algorithm. Call the Sinkhorn algorithm from POT.
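One way to compare any of these couplings is to check their marginal constraints: every transport plan should have row sums equal to a and column sums equal to b (only approximately, for entropic solvers). A self-contained sketch on a toy plan; `check_marginals` is a helper name invented here:

```python
import numpy as np

def check_marginals(pi, a, b, atol=1e-8):
    # A valid coupling has row sums equal to a and column sums equal to b
    return bool(np.allclose(pi.sum(axis=1), a, atol=atol)
                and np.allclose(pi.sum(axis=0), b, atol=atol))

a = np.full(2, 1 / 2)
b = np.full(3, 1 / 3)
pi = np.outer(a, b)  # product coupling, always feasible
print(check_marginals(pi, a, b))  # True
```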
sinkhorn_pi = ot.sinkhorn(a, b, M, reg)
print(sinkhorn_pi)
[[2.55535622e-02 9.96413843e-02 1.76578860e-02 4.31043335e-06] [1.21640742e-01 1.25369034e-02 1.30234529e-03 7.37715259e-03] [3.56096458e-03 7.61460101e-02 6.31500344e-02 1.33788624e-07] [2.61499607e-02 3.34255577e-02 8.28741973e-02 4.07427179e-04] [9.85698720e-03 7.52505948e-04 1.08291770e-02 1.21418473e-01] [2.16947591e-02 9.04086158e-04 1.87228707e-03 1.18386011e-01] [4.15442692e-02 2.65998963e-02 7.23192701e-02 2.39370724e-03]]
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
PLOT TRANSPORTATION MATRIX

Plot SAG results
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, sag_pi, 'semi-dual : OT matrix SAG')
pl.show()
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Plot ASGD results
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, asgd_pi, 'semi-dual : OT matrix ASGD')
pl.show()
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Plot Sinkhorn results
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, sinkhorn_pi, 'OT matrix Sinkhorn')
pl.show()
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
COMPUTE TRANSPORTATION MATRIX FOR DUAL PROBLEM
print("------------DUAL PROBLEM------------")
------------DUAL PROBLEM------------
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
SEMICONTINUOUS CASE

Sample one general measure a and one discrete measure b for the semicontinuous case. Define one general measure a, one discrete measure b, the points where the source and the target measures are defined, and finally the cost matrix c.
n_source = 7
n_target = 4
reg = 1
numItermax = 100000
lr = 0.1
batch_size = 3
log = True

a = ot.utils.unif(n_source)
b = ot.utils.unif(n_target)

rng = np.random.RandomState(0)
X_source = rng.randn(n_source, 2)
Y_target = rng.randn(n_target, 2)
M = ot.dist(X_source, Y_target)
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Call the "SGD" dual method to find the transportation matrix in the semicontinuous case. Call ot.solve_dual_entropic and plot the results.
sgd_dual_pi, log_sgd = ot.stochastic.solve_dual_entropic(a, b, M, reg,
                                                         batch_size, numItermax,
                                                         lr, log=log)
print(log_sgd['alpha'], log_sgd['beta'])
print(sgd_dual_pi)
[ 1.67648902 5.3770004 1.70385554 0.4276547 -0.77206786 1.0474898 0.54202203] [-0.23723788 -0.20259434 1.30855788 8.06179985] [[2.62451875e-02 1.00499531e-01 1.78515577e-02 4.57450829e-06] [1.20510690e-01 1.21972758e-02 1.27002374e-03 7.55197481e-03] [3.65708350e-03 7.67963231e-02 6.38381061e-02 1.41974930e-07] [2.64286344e-02 3.31748063e-02 8.24445965e-02 4.25479786e-04] [9.59295422e-03 7.19190875e-04 1.03739180e-02 1.22100712e-01] [2.09087627e-02 8.55676046e-04 1.77617241e-03 1.17896019e-01] [4.18792948e-02 2.63326297e-02 7.17598381e-02 2.49335733e-03]]
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Compare the results with the Sinkhorn algorithm. Call the Sinkhorn algorithm from POT.
sinkhorn_pi = ot.sinkhorn(a, b, M, reg)
print(sinkhorn_pi)
[[2.55535622e-02 9.96413843e-02 1.76578860e-02 4.31043335e-06] [1.21640742e-01 1.25369034e-02 1.30234529e-03 7.37715259e-03] [3.56096458e-03 7.61460101e-02 6.31500344e-02 1.33788624e-07] [2.61499607e-02 3.34255577e-02 8.28741973e-02 4.07427179e-04] [9.85698720e-03 7.52505948e-04 1.08291770e-02 1.21418473e-01] [2.16947591e-02 9.04086158e-04 1.87228707e-03 1.18386011e-01] [4.15442692e-02 2.65998963e-02 7.23192701e-02 2.39370724e-03]]
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Plot SGD results
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, sgd_dual_pi, 'dual : OT matrix SGD')
pl.show()
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT
Plot Sinkhorn results
pl.figure(4, figsize=(5, 5))
ot.plot.plot1D_mat(a, b, sinkhorn_pi, 'OT matrix Sinkhorn')
pl.show()
_____no_output_____
MIT
notebooks/plot_stochastic.ipynb
vfdev-5/POT