<div class="alert alert-success">

**EXERCISE 6**:

* Loop over the data files, read and process each file using our defined function, and append the resulting DataFrame to a list.
* Combine the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.

<details><summary>Hints</summary>

- The `data_files` list contains `Path` objects (from the pathlib module). To get the actual file name as a string, use the `.name` attribute.
- The station name is always the first 7 characters of the file name.

</details>

</div>
# %load _solutions/case4_air_quality_processing12.py
# %load _solutions/case4_air_quality_processing13.py
combined_data.head()
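For reference, a minimal sketch of one possible solution (this is not the notebook's hidden solution files): it assumes the `data_files` list of `Path` objects from earlier, and uses a hypothetical helper `process_file` as a stand-in for whatever processing function was defined before this exercise, returning one single-column DataFrame per file.

```python
import pandas as pd

dataframes = []
for path in data_files:
    station = path.name[:7]      # station name = first 7 characters of the file name
    df = process_file(path)      # hypothetical stand-in for the function defined earlier
    df.columns = [station]       # label the single column with the station name
    dataframes.append(df)

# one column per station
combined_data = pd.concat(dataframes, axis=1)
```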
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file.
# let's first give the index a descriptive name
combined_data.index.name = 'datetime'
combined_data.to_csv("airbase_data_processed.csv")
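When the data is needed again, it can be read back without redoing the processing; a minimal sketch, assuming the file name and index name written in the cell above:

```python
import pandas as pd

# restore the processed data, parsing the saved 'datetime' index back into timestamps
combined_data = pd.read_csv("airbase_data_processed.csv",
                            index_col="datetime", parse_dates=True)
combined_data.head()
```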
notebooks/case4_air_quality_processing.ipynb
jorisvandenbossche/DS-python-data-analysis
bsd-3-clause
Example 2: Extracellular response of synaptic input

This is an example of LFPy running in a Jupyter notebook. To run through this example code and produce output, press Shift-Enter in each code block below. The first step is to import LFPy and other packages for analysis and plotting:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import LFPy
examples/LFPy-example-02.ipynb
LFPy/LFPy
gpl-3.0
Create some dictionaries with parameters for the cell, synapse and extracellular electrode:
cellParameters = {
    'morphology': 'morphologies/L5_Mainen96_LFPy.hoc',
    'tstart': -50,
    'tstop': 100,
    'dt': 2**-4,
    'passive': True,
}

synapseParameters = {
    'syntype': 'Exp2Syn',
    'e': 0.,
    'tau1': 0.5,
    'tau2': 2.0,
    'weight': 0.005,
    'record_current': True,
}

z = np.mgrid[-400:1201:100]
electrodeParameters = {
    'x': np.zeros(z.size),
    'y': np.zeros(z.size),
    'z': z,
    'sigma': 0.3,
}
examples/LFPy-example-02.ipynb
LFPy/LFPy
gpl-3.0
Then, create the cell, synapse and electrode objects using the LFPy.Cell, LFPy.Synapse, LFPy.RecExtElectrode classes.
cell = LFPy.Cell(**cellParameters)
cell.set_pos(x=-10, y=0, z=0)
cell.set_rotation(x=4.98919, y=-4.33261, z=np.pi)

synapse = LFPy.Synapse(cell,
                       idx=cell.get_closest_idx(z=800),
                       **synapseParameters)
synapse.set_spike_times(np.array([10, 30, 50]))

electrode = LFPy.RecExtElectrode(cell, **electrodeParameters)
examples/LFPy-example-02.ipynb
LFPy/LFPy
gpl-3.0
Run the simulation using cell.simulate(), probing the extracellular potential with the additional keyword argument probes=[electrode]:
cell.simulate(probes=[electrode])
examples/LFPy-example-02.ipynb
LFPy/LFPy
gpl-3.0
Then plot the somatic potential and the prediction obtained using the RecExtElectrode instance (now accessible as electrode.data):
fig = plt.figure(figsize=(12, 6)) gs = GridSpec(2, 3) ax0 = fig.add_subplot(gs[:, 0]) ax0.plot(cell.x.T, cell.z.T, 'k') ax0.plot(synapse.x, synapse.z, color='r', marker='o', markersize=10, label='synapse') ax0.plot(electrode.x, electrode.z, '.', color='g', label='electrode') ax0.axis([-500, 500, -450, 1250]) ax0.legend() ax0.set_xlabel('x (um)') ax0.set_ylabel('z (um)') ax0.set_title('morphology') ax1 = fig.add_subplot(gs[0, 1]) ax1.plot(cell.tvec, synapse.i, 'r') ax1.set_title('synaptic current (pA)') plt.setp(ax1.get_xticklabels(), visible=False) ax2 = fig.add_subplot(gs[1, 1], sharex=ax1) ax2.plot(cell.tvec, cell.somav, 'k') ax2.set_title('somatic voltage (mV)') ax3 = fig.add_subplot(gs[:, 2], sharey=ax0, sharex=ax1) im = ax3.pcolormesh(cell.tvec, electrode.z, electrode.data, vmin=-abs(electrode.data).max(), vmax=abs(electrode.data).max(), shading='auto') plt.colorbar(im) ax3.set_title('LFP (mV)') ax3.set_xlabel('time (ms)') #savefig('LFPy-example-02.pdf', dpi=300)
examples/LFPy-example-02.ipynb
LFPy/LFPy
gpl-3.0
Now I can run my script:
%cd data/SF_Si_bulk/
%run ../../../../../Code/SF/sf.py
cumulant-to-pdf.ipynb
teoguso/sol_1116
mit
Not very elegant, I know. It's just for demo purposes.
cd ../../../
cumulant-to-pdf.ipynb
teoguso/sol_1116
mit
First I have to import a few modules and set up a few things:
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = (9., 6.)
%matplotlib inline
cumulant-to-pdf.ipynb
teoguso/sol_1116
mit
Next I can read the data from a local folder:
sf_c = np.genfromtxt(
    'data/SF_Si_bulk/Spfunctions/spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat')
sf_gw = np.genfromtxt(
    'data/SF_Si_bulk/Spfunctions/spftot_gw_s1.0_p1.0_800ev.dat')
#!gvim spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat
cumulant-to-pdf.ipynb
teoguso/sol_1116
mit
Now I can plot the stored arrays.
plt.plot(sf_c[:, 0], sf_c[:, 1], label='1-pole cumulant')
plt.plot(sf_gw[:, 0], sf_gw[:, 1], label='GW')
plt.xlim(-50, 0)
plt.ylim(0, 300)
plt.title("Bulk Si - Spectral function - ib=1, ikpt=1")
plt.xlabel("Energy (eV)")
plt.grid()
plt.legend(loc='best')
cumulant-to-pdf.ipynb
teoguso/sol_1116
mit
Creating a PDF document I can create a PDF version of this notebook from itself, using the command line:
!jupyter-nbconvert --to pdf cumulant-to-pdf.ipynb
pwd
!xpdf cumulant-to-pdf.pdf
cumulant-to-pdf.ipynb
teoguso/sol_1116
mit
Construct a small toy dataset by hand
import numpy as np

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])
ipynbs/appendix/ensemble/voting.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Initialize several classifier models
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

clf1 = LogisticRegression(random_state=1)
clf2 = RandomForestClassifier(random_state=1)
clf3 = GaussianNB()
ipynbs/appendix/ensemble/voting.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Initialize the voting classifier
from sklearn.ensemble import VotingClassifier

eclf1 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
ipynbs/appendix/ensemble/voting.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Train the model
eclf1 = eclf1.fit(X, y)
ipynbs/appendix/ensemble/voting.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Predict
print(eclf1.predict(X))
ipynbs/appendix/ensemble/voting.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Voting classifier settings

The voting classifier can be configured with:

- voting — 'hard' means the result is decided directly by majority vote; 'soft' predicts the class label from the argmax of the summed predicted probabilities.
- n_jobs — the number of parallel jobs.
- weights — the weights given to the different classifiers.
- flatten_transform (interface added in version 0.19) — only relevant when voting='soft'. It affects the shape of the transform output: with flatten_transform=True, the transform method returns an array of shape (n_samples, n_classifiers * n_classes); with flatten_transform=False, it returns (n_classifiers, n_samples, n_classes). A small sketch of this shape difference follows the next code cell.
eclf3 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
                         n_jobs=3, voting='soft', weights=[2, 1, 1])
eclf3 = eclf3.fit(X, y)
print(eclf3.predict(X))
print(eclf3.transform(X).shape)
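To make the flatten_transform behaviour described above concrete, here is a minimal sketch (assuming a scikit-learn version that still accepts the flatten_transform argument; clf1, clf2, clf3, X and y are defined in the cells above):

```python
from sklearn.ensemble import VotingClassifier

estimators = [('lr', clf1), ('rf', clf2), ('gnb', clf3)]

# flattened output: shape (n_samples, n_classifiers * n_classes)
flat = VotingClassifier(estimators=estimators, voting='soft',
                        flatten_transform=True).fit(X, y)
# nested output: shape (n_classifiers, n_samples, n_classes)
nested = VotingClassifier(estimators=estimators, voting='soft',
                          flatten_transform=False).fit(X, y)

print(flat.transform(X).shape)    # e.g. (6, 6) for 6 samples, 3 classifiers, 2 classes
print(nested.transform(X).shape)  # e.g. (3, 6, 2)
```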
ipynbs/appendix/ensemble/voting.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
TensorFlow Addons Optimizers: CyclicalLearningRate <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_cyclicallearningrate"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_cyclicallearningrate.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_cyclicallearningrate.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_cyclicallearningrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview This tutorial demonstrates the use of Cyclical Learning Rate from the Addons package. Cyclical Learning Rates It has been shown it is beneficial to adjust the learning rate as training progresses for a neural network. It has manifold benefits ranging from saddle point recovery to preventing numerical instabilities that may arise during backpropagation. But how does one know how much to adjust with respect to a particular training timestamp? In 2015, Leslie Smith noticed that you would want to increase the learning rate to traverse faster across the loss landscape but you would also want to reduce the learning rate when approaching convergence. To realize this idea, he proposed Cyclical Learning Rates (CLR) where you would adjust the learning rate with respect to the cycles of a function. For a visual demonstration, you can check out this blog. CLR is now available as a TensorFlow API. For more details, check out the original paper here. Setup
!pip install -q -U tensorflow_addons

from tensorflow.keras import layers
import tensorflow_addons as tfa

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

tf.random.set_seed(42)
np.random.seed(42)
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Load and prepare dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Define hyperparameters
BATCH_SIZE = 64
EPOCHS = 10
INIT_LR = 1e-4
MAX_LR = 1e-2
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Define model building and model training utilities
def get_training_model(): model = tf.keras.Sequential( [ layers.InputLayer((28, 28, 1)), layers.experimental.preprocessing.Rescaling(scale=1./255), layers.Conv2D(16, (5, 5), activation="relu"), layers.MaxPooling2D(pool_size=(2, 2)), layers.Conv2D(32, (5, 5), activation="relu"), layers.MaxPooling2D(pool_size=(2, 2)), layers.SpatialDropout2D(0.2), layers.GlobalAvgPool2D(), layers.Dense(128, activation="relu"), layers.Dense(10, activation="softmax"), ] ) return model def train_model(model, optimizer): model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"]) history = model.fit(x_train, y_train, batch_size=BATCH_SIZE, validation_data=(x_test, y_test), epochs=EPOCHS) return history
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
In the interest of reproducibility, the initial model weights are serialized, and you will use them to conduct the experiments below.
initial_model = get_training_model()
initial_model.save("initial_model")
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Train a model without CLR
standard_model = tf.keras.models.load_model("initial_model")
no_clr_history = train_model(standard_model, optimizer="sgd")
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Define CLR schedule The tfa.optimizers.CyclicalLearningRate module returns a direct schedule that can be passed to an optimizer. The schedule takes a step as its input and outputs a value calculated using the CLR formula as laid out in the paper.
steps_per_epoch = len(x_train) // BATCH_SIZE
clr = tfa.optimizers.CyclicalLearningRate(initial_learning_rate=INIT_LR,
                                          maximal_learning_rate=MAX_LR,
                                          scale_fn=lambda x: 1/(2.**(x-1)),
                                          step_size=2 * steps_per_epoch)
optimizer = tf.keras.optimizers.SGD(clr)
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Here, you specify the lower and upper bounds of the learning rate, and the schedule will oscillate within that range ([1e-4, 1e-2] in this case). scale_fn is used to define the function that scales the learning rate up and down within a given cycle. step_size defines the duration of a single cycle. A step_size of 2 means you need a total of 4 iterations to complete one cycle. The recommended value for step_size is factor * steps_per_epoch, where factor lies within the [2, 8] range. In the same CLR paper, Leslie also presented a simple and elegant method to choose the bounds for the learning rate. You are encouraged to check it out as well. This blog post provides a nice introduction to the method. Below, you visualize what the CLR schedule looks like.
step = np.arange(0, EPOCHS * steps_per_epoch)
lr = clr(step)

plt.plot(step, lr)
plt.xlabel("Steps")
plt.ylabel("Learning Rate")
plt.show()
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
In order to better visualize the effect of CLR, you can plot the schedule with an increased number of steps.
step = np.arange(0, 100 * steps_per_epoch)
lr = clr(step)

plt.plot(step, lr)
plt.xlabel("Steps")
plt.ylabel("Learning Rate")
plt.show()
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
The function you are using in this tutorial is referred to as the triangular2 method in the CLR paper. There are two other functions that were explored, namely triangular and exp (short for exponential). Train a model with CLR
clr_model = tf.keras.models.load_model("initial_model")
clr_history = train_model(clr_model, optimizer=optimizer)
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
As expected, the loss starts higher than usual and then stabilizes as the cycles progress. You can confirm this visually with the plots below. Visualize losses
(fig, ax) = plt.subplots(2, 1, figsize=(10, 8)) ax[0].plot(no_clr_history.history["loss"], label="train_loss") ax[0].plot(no_clr_history.history["val_loss"], label="val_loss") ax[0].set_title("No CLR") ax[0].set_xlabel("Epochs") ax[0].set_ylabel("Loss") ax[0].set_ylim([0, 2.5]) ax[0].legend() ax[1].plot(clr_history.history["loss"], label="train_loss") ax[1].plot(clr_history.history["val_loss"], label="val_loss") ax[1].set_title("CLR") ax[1].set_xlabel("Epochs") ax[1].set_ylabel("Loss") ax[1].set_ylim([0, 2.5]) ax[1].legend() fig.tight_layout(pad=3.0) fig.show()
site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb
tensorflow/docs-l10n
apache-2.0
Non-parametric 1 sample cluster statistic on single trial power This script shows how to estimate significant clusters in time-frequency power estimates. It uses a non-parametric statistical procedure based on permutations and cluster-level statistics. The procedure consists of:

- extracting epochs
- computing single-trial power estimates
- baseline-correcting the power estimates (power ratios)
- computing stats to see if the ratio deviates from 1.

Here, the unit of observation is epochs from a specific study subject. However, the same logic applies when the unit of observation is a number of study subjects each of whom contribute their own averaged data (i.e., an average of their epochs). This would then be considered an analysis at the "2nd level". For more information on cluster-based permutation testing in MNE-Python, see also: tut-cluster-spatiotemporal-sensor
# Authors: Alexandre Gramfort <[email protected]>
#          Stefan Appelhoff <[email protected]>
#
# License: BSD-3-Clause

import numpy as np
import matplotlib.pyplot as plt
import scipy.stats

import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
dev/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
data_path = sample.data_path() meg_path = data_path / 'MEG' / 'sample' raw_fname = meg_path / 'sample_audvis_raw.fif' tmin, tmax, event_id = -0.3, 0.6, 1 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.find_events(raw, stim_channel='STI 014') include = [] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more # picks MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False, include=include, exclude='bads') # Load condition 1 event_id = 1 epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), preload=True, reject=dict(grad=4000e-13, eog=150e-6)) # just use right temporal sensors for speed epochs.pick_channels(mne.read_vectorview_selection('Right-temporal')) evoked = epochs.average() # Factor to down-sample the temporal dimension of the TFR computed by # tfr_morlet. Decimation occurs after frequency decomposition and can # be used to reduce memory usage (and possibly computational time of downstream # operations such as nonparametric statistics) if you don't need high # spectrotemporal resolution. decim = 5 # define frequencies of interest freqs = np.arange(8, 40, 2) # run the TFR decomposition tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim, average=False, return_itc=False, n_jobs=None) # Baseline power tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0)) # Crop in time to keep only what is between 0 and 400 ms evoked.crop(-0.1, 0.4) tfr_epochs.crop(-0.1, 0.4) epochs_power = tfr_epochs.data
dev/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Define adjacency for statistics To perform a cluster-based permutation test, we need a suitable definition for the adjacency of sensors, time points, and frequency bins. The adjacency matrix will be used to form clusters. We first compute the sensor adjacency, and then combine that with a "lattice" adjacency for the time-frequency plane, which assumes that elements at index "N" are adjacent to elements at indices "N + 1" and "N - 1" (forming a "grid" on the time-frequency plane).
# find_ch_adjacency first attempts to find an existing "neighbor" # (adjacency) file for given sensor layout. # If such a file doesn't exist, an adjacency matrix is computed on the fly, # using Delaunay triangulations. sensor_adjacency, ch_names = mne.channels.find_ch_adjacency( tfr_epochs.info, 'grad') # In this case, find_ch_adjacency finds an appropriate file and # reads it (see log output: "neuromag306planar"). # However, we need to subselect the channels we are actually using use_idx = [ch_names.index(ch_name) for ch_name in tfr_epochs.ch_names] sensor_adjacency = sensor_adjacency[use_idx][:, use_idx] # Our sensor adjacency matrix is of shape n_chs × n_chs assert sensor_adjacency.shape == \ (len(tfr_epochs.ch_names), len(tfr_epochs.ch_names)) # Now we need to prepare adjacency information for the time-frequency # plane. For that, we use "combine_adjacency", and pass dimensions # as in the data we want to test (excluding observations). Here: # channels × frequencies × times assert epochs_power.data.shape == ( len(epochs), len(tfr_epochs.ch_names), len(tfr_epochs.freqs), len(tfr_epochs.times)) adjacency = mne.stats.combine_adjacency( sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times)) # The overall adjacency we end up with is a square matrix with each # dimension matching the data size (excluding observations) in an # "unrolled" format, so: len(channels × frequencies × times) assert adjacency.shape[0] == adjacency.shape[1] == \ len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)
dev/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute statistic For forming clusters, we need to specify a critical test statistic threshold. Only data bins exceeding this threshold will be used to form clusters. Here, we use a t-test and can make use of Scipy's percent point function of the t distribution to get a t-value that corresponds to a specific alpha level for significance. This threshold is often called the "cluster forming threshold". <div class="alert alert-info"><h4>Note</h4><p>The choice of the threshold is more or less arbitrary. Choosing a t-value corresponding to p=0.05, p=0.01, or p=0.001 may often provide a good starting point. Depending on the specific dataset you are working with, you may need to adjust the threshold.</p></div>
# We want a two-tailed test tail = 0 # In this example, we wish to set the threshold for including data bins in # the cluster forming process to the t-value corresponding to p=0.001 for the # given data. # # Because we conduct a two-tailed test, we divide the p-value by 2 (which means # we're making use of both tails of the distribution). # As the degrees of freedom, we specify the number of observations # (here: epochs) minus 1. # Finally, we subtract 0.001 / 2 from 1, to get the critical t-value # on the right tail (this is needed for MNE-Python internals) degrees_of_freedom = len(epochs) - 1 t_thresh = scipy.stats.t.ppf(1 - 0.001 / 2, df=degrees_of_freedom) # Set the number of permutations to run. # Warning: 50 is way too small for a real-world analysis (where values of 5000 # or higher are used), but here we use it to increase the computation speed. n_permutations = 50 # Run the analysis T_obs, clusters, cluster_p_values, H0 = \ permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations, threshold=t_thresh, tail=tail, adjacency=adjacency, out_type='mask', verbose=True)
dev/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View time-frequency plots We now visualize the observed clusters that are statistically significant under our permutation distribution. <div class="alert alert-danger"><h4>Warning</h4><p>Talking about "significant clusters" can be convenient, but you must be aware of all associated caveats! For example, it is **invalid** to interpret the cluster p value as being spatially or temporally specific. A cluster with sufficiently low (for example < 0.05) p value at a specific location does not allow you to say that the significant effect is at that particular location. The p value only tells you about the probability of obtaining a similar or larger cluster anywhere in the data if there were no differences between the compared conditions. So it only allows you to draw conclusions about the differences in the data "in general", not at specific locations. See the comprehensive [FieldTrip tutorial](ft_cluster_) for more information.</p></div> .. include:: ../../links.inc
evoked_data = evoked.data times = 1e3 * evoked.times plt.figure() plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43) T_obs_plot = np.nan * np.ones_like(T_obs) for c, p_val in zip(clusters, cluster_p_values): if p_val <= 0.05: T_obs_plot[c] = T_obs[c] # Just plot one channel's data # use the following to show a specific one: # ch_idx = tfr_epochs.ch_names.index('MEG 1332') ch_idx, f_idx, t_idx = np.unravel_index( np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:]) vmax = np.max(np.abs(T_obs)) vmin = -vmax plt.subplot(2, 1, 1) plt.imshow(T_obs[ch_idx], cmap=plt.cm.gray, extent=[times[0], times[-1], freqs[0], freqs[-1]], aspect='auto', origin='lower', vmin=vmin, vmax=vmax) plt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r, extent=[times[0], times[-1], freqs[0], freqs[-1]], aspect='auto', origin='lower', vmin=vmin, vmax=vmax) plt.colorbar() plt.xlabel('Time (ms)') plt.ylabel('Frequency (Hz)') plt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})') ax2 = plt.subplot(2, 1, 2) evoked.plot(axes=[ax2], time_unit='s') plt.show()
dev/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The result displayed above looks like a spreadsheet; this structure is called a pandas DataFrame. pandas has two main data structures, Series and DataFrame:

- A Series is similar to a one-dimensional array; it consists of a set of data together with an associated set of data labels (the index).
- A DataFrame is a tabular data structure containing an ordered collection of columns, where each column can hold a different value type. A DataFrame has both a row index and a column index, and it can be thought of as a dictionary of Series. (A small standalone sketch of both structures follows the next code cell.)
# display the last 5 rows
data.tail()

# check the shape of the DataFrame (rows, columns)
data.shape
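As a minimal standalone illustration of the two structures described above (toy values, not the advertising dataset):

```python
import pandas as pd

# a Series: one-dimensional values plus an associated index (the labels)
s = pd.Series([4, 7, -5], index=['a', 'b', 'c'])
print(s)

# a DataFrame: an ordered collection of columns sharing a row index,
# which can be viewed as a dict of Series
df = pd.DataFrame({'TV': [230.1, 44.5], 'Sales': [22.1, 10.4]})
print(df)
print(df['TV'])   # selecting one column gives back a Series
```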
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
Features:

- TV: advertising budget spent on television for a single product in a given market (in thousands of dollars)
- Radio: advertising budget spent on radio
- Newspaper: advertising budget spent on newspapers

Response:

- Sales: sales of the corresponding product

In this case, we predict product sales from the different advertising investments. Because the response variable is a continuous value, this is a regression problem. The dataset has 200 observations, each corresponding to the situation in one market.
import seaborn as sns
%matplotlib inline

# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV', 'Radio', 'Newspaper'], y_vars='Sales', size=7, aspect=0.8)
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
seaborn's pairplot function draws scatterplots of each dimension of X against the corresponding y. The size and aspect parameters adjust the display size and aspect ratio. From the plots we can see that the TV feature has a fairly strong linear relationship with Sales, the Radio–Sales relationship is weaker, and the Newspaper–Sales relationship is weaker still. By adding the parameter kind='reg', seaborn adds a best-fit line and a 95% confidence band.
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.8, kind='reg')
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
2. The linear regression model

Advantages: fast; no tuning parameters; easy to interpret; well understood.

Disadvantages: its prediction accuracy is not as high as that of some more complex models, because it assumes a fixed linear relationship between the features and the response; for non-linear relationships, a linear regression model clearly cannot fit the data well.

The linear model is expressed as: $y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$ where

- y is the response
- $\beta_0$ is the intercept
- $\beta_1$ is the coefficient of $x_1$, and so on

In this case: $y = \beta_0 + \beta_1*TV + \beta_2*Radio + \beta_3*Newspaper$

(1) Using pandas to build X and y

scikit-learn expects X to be a feature matrix and y to be a NumPy vector. pandas is built on top of NumPy, so X can be a pandas DataFrame and y a pandas Series, and scikit-learn understands these structures.
# create a python list of feature names
feature_cols = ['TV', 'Radio', 'Newspaper']

# use the list to select a subset of the original DataFrame
X = data[feature_cols]

# equivalent command to do this in one line
X = data[['TV', 'Radio', 'Newspaper']]

# print the first 5 rows
X.head()

# check the type and shape of X
print type(X)
print X.shape

# select a Series from the DataFrame
y = data['Sales']

# equivalent command that works if there are no spaces in the column name
y = data.Sales

# print the first 5 values
y.head()

print type(y)
print y.shape
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
(2) Constructing the training and test sets
from sklearn.cross_validation import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# default split is 75% for training and 25% for testing
print X_train.shape
print y_train.shape
print X_test.shape
print y_test.shape
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
(3) Linear regression with scikit-learn
from sklearn.linear_model import LinearRegression

linreg = LinearRegression()
linreg.fit(X_train, y_train)

print linreg.intercept_
print linreg.coef_

# pair the feature names with the coefficients
zip(feature_cols, linreg.coef_)
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
$y = 2.88 + 0.0466 * TV + 0.179 * Radio + 0.00345 * Newspaper$

How do we interpret the coefficient of each feature?

- For given Radio and Newspaper advertising budgets, each additional unit invested in TV advertising corresponds to an increase of 0.0466 units in Sales.
- More concretely: if the other two media budgets are held fixed, then for every additional $1,000 spent on TV advertising (the units are thousands of dollars), Sales will increase by 46.6 (because the Sales units are thousands).

(A small sketch verifying this formula on one test sample follows the prediction cell below.)

(4) Prediction
y_pred = linreg.predict(X_test)
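As a quick sanity check of the linear formula above, a small sketch that recomputes the first test prediction by hand from the fitted intercept and coefficients (it should match y_pred[0]):

```python
import numpy as np

# y_hat = beta_0 + beta_1*TV + beta_2*Radio + beta_3*Newspaper, for the first test row
manual = linreg.intercept_ + np.dot(X_test.iloc[0].values, linreg.coef_)
print(manual)
print(y_pred[0])
```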
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
3. Evaluation metrics for regression problems

For classification problems, the evaluation metric is accuracy, but that approach does not apply to regression problems. Instead, we use evaluation metrics designed for continuous values. Below we introduce three commonly used evaluation metrics for regression.
# define true and predicted response values
true = [100, 50, 30, 20]
pred = [90, 50, 50, 30]
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
(1) Mean Absolute Error (MAE): $\frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y_i}|$

(2) Mean Squared Error (MSE): $\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y_i})^2$

(3) Root Mean Squared Error (RMSE): $\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y_i})^2}$
from sklearn import metrics
import numpy as np

# calculate MAE by hand
print "MAE by hand:", (10 + 0 + 20 + 10)/4.
# calculate MAE using scikit-learn
print "MAE:", metrics.mean_absolute_error(true, pred)

# calculate MSE by hand
print "MSE by hand:", (10**2 + 0**2 + 20**2 + 10**2)/4.
# calculate MSE using scikit-learn
print "MSE:", metrics.mean_squared_error(true, pred)

# calculate RMSE by hand
print "RMSE by hand:", np.sqrt((10**2 + 0**2 + 20**2 + 10**2)/4.)
# calculate RMSE using scikit-learn
print "RMSE:", np.sqrt(metrics.mean_squared_error(true, pred))
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
Compute the RMSE of the Sales predictions
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
4. Feature selection

In the data shown earlier, we saw that the linear relationship between Newspaper and Sales is rather weak. Now let's remove this feature and see how the RMSE of the linear regression predictions changes.
feature_cols = ['TV', 'Radio']
X = data[feature_cols]
y = data.Sales

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

linreg.fit(X_train, y_train)
y_pred = linreg.predict(X_test)

print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb
jasonding1354/pyDataScienceToolkits_Base
mit
Now, if we have two classes of data, we might be able to classify the data well with the projection onto just one eigenvector. It could be either eigenvector. First, with the second class having mean [-5,3] and $\Sigma=\begin{bmatrix} 0.9 & 0.8 \\ -0.8 & 0.9 \end{bmatrix}$.
N = 200
data1 = mvNormalRand(N, [0, 4], [[0.9, 0.8], [0.8, 0.9]])
data2 = mvNormalRand(N, [-5, 3], [[0.9, 0.8], [-0.8, 0.9]])
data = np.vstack((data1, data2))
means = np.mean(data, axis=0)

U, S, V = np.linalg.svd(data - means)
V = V.T

plotOriginalAndTransformed(data, V)
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
And again, with first class $\Sigma=\begin{bmatrix} 0.9 & 0.2 \\ 0.2 & 20 \end{bmatrix}$ and second class having $\Sigma=\begin{bmatrix} 0.9 & 0.2 \\ -0.2 & 20 \end{bmatrix}$.
N = 200
data1 = mvNormalRand(N, [0, 4], [[0.9, 0.2], [0.2, 0.9]])
data2 = mvNormalRand(N, [-5, 3], [[0.9, 0.2], [-0.2, 20]])
data = np.vstack((data1, data2))
means = np.mean(data, axis=0)

U, S, V = np.linalg.svd(data - means)
V = V.T

plotOriginalAndTransformed(data, V)
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
Sammon Mapping Introductions to Sammon Mapping are found at * Sammon Mapping in Wikipedia * Sammon Mapping, by Paul Henderson A Sammon Mapping is one that maps each data sample $d_i$ to a location in two dimensions, $p_i$, such that distances between pairs of points are preserved. The objective defined by Sammon is to minimize the squared difference in distances between pairs of data points and their projections through the use of an objective function like $$ \sum_{i=1}^{N-1} \sum_{j=i+1}^N \left (\frac{||d_i - d_j||}{s} - ||p_i - p_j|| \right )^2 $$ The typical Sammon Mapping algorithm does a gradient descent on this function by adjusting all of the two-dimensional points $p_{ij}$. Each iteration requires computing all pairwise distances. One way to decrease this amount of work is to just work with a subset of points, perhaps picked randomly. To display all points, we just find an explicit mapping (function) that projects a data sample to a two-dimensional point. Let's call this $f$, so $f(d_i) = p_i$. For now, let's just use a linear function for $f$, so $$ f(d_i) = d_i^T \theta $$ where $\theta$ is a $D\times 2$ matrix of coefficients. To do this in python, let's start with calculating all pairwise distances. Let $X$ be our $N\times D$ matrix of data samples, one per row. We can use a list comprehension to calculate the distance between each row in $X$ and each of the rows following that row.
X = np.array([[0, 1], [4, 5], [10, 20]])
X

N = X.shape[0]  # number of rows

[(i, j) for i in range(N-1) for j in range(i+1, N)]

[X[i, :] - X[j, :] for i in range(N-1) for j in range(i+1, N)]

np.array([X[i, :] - X[j, :] for i in range(N-1) for j in range(i+1, N)])
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
To convert these differences to distances, just
diffs = np.array([X[i, :] - X[j, :] for i in range(N-1) for j in range(i+1, N)])
np.sqrt(np.sum(diffs*diffs, axis=1))
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
And to calculate the projection, a call to np.dot is all that is needed. Let's make a function to do the projection, and one to convert differences to distances.
def diffToDist(dX):
    return np.sqrt(np.sum(dX*dX, axis=1))

def proj(X, theta):
    return np.dot(X, theta)

diffToDist(diffs)

proj(X, np.array([[1, 0.2], [0.3, 0.8]]))
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
Now, to follow the negative gradient of the objective function, we need its gradient with respect to $\theta$. With a little work, you can derive it to find

$$
\begin{align}
\nabla_\theta \, \frac{1}{2} \sum_{i=1}^{N-1} \sum_{j=i+1}^N \left (\frac{||d_i - d_j||}{s} - ||p_i - p_j|| \right )^2 &= 2 \cdot \frac{1}{2} \sum_{i=1}^{N-1} \sum_{j=i+1}^N \left (\frac{||d_i - d_j||}{s} - ||f(d_i;\theta) - f(d_j;\theta)|| \right ) (-1) \nabla_\theta ||f(d_i;\theta) - f(d_j;\theta)|| \\
&= - \sum_{i=1}^{N-1} \sum_{j=i+1}^N \left (\frac{||d_i - d_j||}{s} - ||f(d_i;\theta) - f(d_j;\theta)|| \right ) \frac{(d_i-d_j)^T (p_i - p_j)}{||p_i - p_j||}
\end{align}
$$

So, we need to keep the differences around, in addition to the distances. First, let's write a function for the objective function, so we can monitor it and make sure we decrease it with each iteration. Let's multiply by $1/N$ so the values we get don't grow huge with large $N$.
def objective(X, proj, theta, s):
    N = X.shape[0]
    P = proj(X, theta)
    dX = np.array([X[i, :] - X[j, :] for i in range(N-1) for j in range(i+1, N)])
    dP = np.array([P[i, :] - P[j, :] for i in range(N-1) for j in range(i+1, N)])
    return 1/N * np.sum((diffToDist(dX)/s - diffToDist(dP))**2)
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
Now for the gradient $$ \begin{align} \nabla_\theta &= - \sum_{i=1}^{N-1} \sum_{j=i+1}^N \left (\frac{||d_i - d_j||}{s} - ||f(d_i;\theta) - f(d_j;\theta)|| \right ) \frac{(d_i-d_j)^T (p_i - p_j)}{||p_i - p_j||} \end{align} $$
def gradient(X, proj, theta, s):
    N = X.shape[0]
    P = proj(X, theta)
    dX = np.array([X[i, :] - X[j, :] for i in range(N-1) for j in range(i+1, N)])
    dP = np.array([P[i, :] - P[j, :] for i in range(N-1) for j in range(i+1, N)])
    distX = diffToDist(dX)
    distP = diffToDist(dP)
    return -1/N * np.dot((((distX/s - distP) / distP).reshape((-1, 1)) * dX).T, dP)
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
This last line has the potential for dividing by zero! Let's avoid this, in a very ad-hoc manner, by replacing zeros in distP by its smallest nonzero value
def gradient(X,proj,theta,s): N = X.shape[0] P = proj(X,theta) dX = np.array([X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)]) dP = np.array([P[i,:] - P[j,:] for i in range(N-1) for j in range(i+1,N)]) distX = diffToDist(dX) distP = diffToDist(dP) minimumNonzero = np.min(distP[distP>0]) distP[distP==0] = minimumNonzero return -1/N * np.dot((((distX/s - distP) / distP).reshape((-1,1)) * dX).T, dP) n = 8 X = np.random.multivariate_normal([2,3], 0.5*np.eye(2), n) X = np.vstack((X, np.random.multivariate_normal([1,-1], 0.2*np.eye(2), n))) X = X - np.mean(X,axis=0) s = 0.5 * np.sqrt(np.max(np.var(X,axis=0))) print('s',s) # theta = np.random.uniform(-1,1,(2,2)) # theta = np.eye(2) + np.random.uniform(-0.1,0.1,(2,2)) u,svalues,v = np.linalg.svd(X) v = v.T theta = v[:,:2] nIterations = 10 vals = [] for i in range(nIterations): theta -= 0.001 * gradient(X,proj,theta,s) v = objective(X,proj,theta,s) vals.append(v) # print('X\n',X) # print('P\n',proj(X,theta)) print('theta\n',theta) plt.figure(figsize=(10,15)) plt.subplot(3,1,(1,2)) P = proj(X,theta) mn = 1.1*np.min(X) mx = 1.1*np.max(X) plt.axis([mn,mx,mn,mx]) #strings = [chr(ord('a')+i) for i in range(X.shape[0])] strings = [i for i in range(X.shape[0])] for i in range(X.shape[0]): plt.text(X[i,0],X[i,1],strings[i],color='black',size=15) for i in range(P.shape[0]): plt.text(P[i,0],P[i,1],strings[i],color='green',size=15) plt.title('2D data, Originals in black') plt.subplot(3,1,3) plt.plot(vals) plt.ylabel('Objective Function');
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
Let's watch the mapping develop. One way to do this is to save the values of $\theta$ after each iteration, then use interact to step through the iterations.
from IPython.html.widgets import interact n = 10 X = np.random.multivariate_normal([2,3], 0.5*np.eye(2), n) X = np.vstack((X, np.random.multivariate_normal([1,-1], 0.2*np.eye(2), n))) X = X - np.mean(X,axis=0) s = 0.5 * np.sqrt(np.max(np.var(X,axis=0))) print('s',s) u,svalues,v = np.linalg.svd(X) V = v.T theta = V[:,:2] theta = (np.random.uniform(size=((2,2)))-0.5) * 10 thetas = [theta] # store all theta values nIterations = 200 vals = [] for i in range(nIterations): theta = theta - 0.02 * gradient(X,proj,theta,s) v = objective(X,proj,theta,s) thetas.append(theta.copy()) vals.append(v) mn = 1.5*np.min(X) mx = 1.5*np.max(X) strings = [i for i in range(X.shape[0])] @interact(i=(0,nIterations-1,1)) def plotIteration(i): #plt.cla() plt.figure(figsize=(8,10)) theta = thetas[i] val = vals[i] P = proj(X,theta) plt.axis([mn,mx,mn,mx]) for i in range(X.shape[0]): plt.text(X[i,0],X[i,1],strings[i],color='black',size=15) for i in range(P.shape[0]): plt.text(P[i,0],P[i,1],strings[i],color='red',size=15) plt.title('2D data, Originals in black. Objective = ' + str(val))
cs480/23 Linear Dimensionality Reduction.ipynb
atlury/deep-opencl
lgpl-3.0
Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnostic tracer variables in the ocean biogeochemistry component
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe transport scheme if different from that of ocean model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.2. River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from boundary condition
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.4. Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from explicit sediment model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry* 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.2. CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.16. SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8. Tracers Ocean biogeochemistry tracers 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0