Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
12,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using subset-selector
Create SubsetSelector with data and plot. y should be a multidimensional array. Each sample from a subset of y is graphed. You can scroll through subsets using the figure's toolbar, select graphs of interest, and get the data back from them later. Execute the cell below and click on some graphs to try it out.
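Note that x and y are not defined in this excerpt; a minimal, hypothetical way to build them (assuming a shared x-axis and one sample curve per row of y) could be:
import numpy as np
x = np.linspace(0, 1, 200)                             # shared x-axis for every sample
y = np.sin(2 * np.pi * np.arange(1, 41)[:, None] * x)  # 40 sample curves, one per row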
Step1: Using saved data | Python Code:
from subset_selector import SubsetSelector
ss = SubsetSelector(x, y)
ss.plot()
Explanation: Using subset-selector
Create SubsetSelector with data and plot. y should be a multidimensional array. Each sample from a subset of y is graphed. You can scroll through subsets using the figure's toolbar, select graphs of interest, and get the data back from them later. Execute the cell below and click on some graphs to try it out.
End of explanation
saved_data = ss.get_ydata()
import matplotlib.pyplot as plt  # plt is used below but never imported in this excerpt
figure, _ = plt.subplots(1, 2, figsize=(12, 2))
for data, ax in zip(saved_data, figure.get_axes()):
ax.plot(x, data)
Explanation: Using saved data
End of explanation |
12,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Masking and padding with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Padding comes from the need to encode sequence data into contiguous batches: in order to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences.
Let's take a close look.
Padding sequence data
When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
After vocabulary lookup, the data might be vectorized as integers, e.g.:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
The data is a nested list where individual samples have length 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples).
Keras provides a utility function to truncate and pad Python lists to a common length: tf.keras.preprocessing.sequence.pad_sequences
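For instance, padding the three integer sequences above to the longest length (6) with trailing zeros would produce a matrix roughly like this (a sketch of the post-padded result, not output copied from the notebook):
[[  71 1331 4231    0    0    0]
 [  73    8 3215   55  927    0]
 [  83   91    1  645 1253  927]]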
Step3: Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers).
Mask-generating layers
Step4: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using it (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential API model, the LSTM layer will automatically receive a mask, which means it will ignore padded values.
Step5: This is also the case for the following Functional API model.
Step6: Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method.
Meanwhile, layers that produce a mask (e.g. <br> Embedding) expose a compute_mask(input, previous_mask) method which you can call.
Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, as shown below.
Step8: Supporting masking in your custom layers
Sometimes you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask.
For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers can properly take masked timesteps into account.
To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask.
Here is an example of a TemporalSplit layer that needs to modify the current mask.
Step9: Here is another example, of a CustomEmbedding layer that is capable of generating a mask from input values.
Step10: Opting-in to mask propagation on compatible layers
Most layers don't modify the time dimension, so they don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do).
If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through.
Here is an example of a layer that is whitelisted for mask propagation.
Step11: You can now use this custom layer in between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer.
Step12: Writing layers that need mask information
Some layers are mask consumers: they accept a mask argument in call and use it to determine whether to skip certain time steps.
To write such a layer, you can simply add a mask=None argument in your call signature. The mask associated with the inputs will be passed to your layer whenever it is available.
Here's a simple example below: a layer that computes a softmax over the time dimension (axis 1) of an input sequence, while discarding masked timesteps. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Masking and padding with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/masking_and_padding"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Setup
End of explanation
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
Explanation: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Padding comes from the need to encode sequence data into contiguous batches: in order to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences.
Let's take a close look.
Padding sequence data
When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
After vocabulary lookup, the data might be vectorized as integers, e.g.:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
The data is a nested list where individual samples have length 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples).
Keras provides a utility function to truncate and pad Python lists to a common length: tf.keras.preprocessing.sequence.pad_sequences
End of explanation
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
Explanation: Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers).
Mask-generating layers: Embedding and Masking
Under the hood, these layers create a mask tensor (a 2D tensor with shape (batch, sequence_length)) and attach it to the tensor output returned by the Masking or Embedding layer.
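As a rough illustration (assuming the padded_inputs built earlier, padded with trailing zeros), the attached mask would look like:
[[ True  True  True False False False]
 [ True  True  True  True  True False]
 [ True  True  True  True  True  True]]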
End of explanation
model = keras.Sequential(
[layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
Explanation: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using it (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential API model, the LSTM layer will automatically receive a mask, which means it will ignore padded values.
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
Explanation: This is also the case for the following Functional API model.
End of explanation
class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super(MyLayer, self).__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
Explanation: Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method.
Meanwhile, layers that produce a mask (e.g. <br> Embedding) expose a compute_mask(input, previous_mask) method which you can call.
Thus, as in the example above, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer.
End of explanation
class TemporalSplit(keras.layers.Layer):
Split the input tensor into 2 tensors along the time dimension.
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it presents.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)
Explanation: Supporting masking in your custom layers
Sometimes you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask.
For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers can properly take masked timesteps into account.
To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask.
Here is an example of a TemporalSplit layer that needs to modify the current mask.
End of explanation
class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super(CustomEmbedding, self).__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)
Explanation: Here is another example, of a CustomEmbedding layer that is capable of generating a mask from input values.
End of explanation
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super(MyActivation, self).__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)
Explanation: Opting-in to mask propagation on compatible layers
Most layers don't modify the time dimension, so they don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating the mask is safe to do).
If you have a custom layer that does not modify the time dimension, and if you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through.
Here is an example of a layer that is whitelisted for mask propagation:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)
Explanation: You can now use this custom layer in between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer.
End of explanation
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(
inputs_exp * broadcast_float_mask, axis=-1, keepdims=True
)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1)))
Explanation: Writing layers that need mask information
Some layers are mask consumers: they accept a mask argument in call and use it to determine whether to skip certain time steps.
To write such a layer, you can simply add a mask=None argument in your call signature. The mask associated with the inputs will be passed to your layer whenever it is available.
Here's a simple example below: a layer that computes a softmax over the time dimension (axis 1) of an input sequence, while discarding masked timesteps.
End of explanation |
12,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
See quick reference at the bottom
See full module reference section for full details
In the beginning of each analysis, the first step is to load ReproPhylo and its dependencies with the command
Step1: Once this is done we can start a Project. A Project contains all the data, metadata, methods and environment information, and it is the unit that is saved as a pickle file, which is version controlled with <a href=http
Step2: This is a single Locus description (a Locus object). We can confirm its content by printing it like this
Step3: Describing loci using a file
Another way of describing loci is to write them in a file. The file has one line for each Locus, where each line has at least four items, separated by commas. The items, as above, are the character type, the feature type, the name of the locus and other possible aliases. At least one alias must be specified, but it can be identical to the name. For the MT-CO1 Locus, a file would look like this
Step4: The command generated the loci file and wrote it in data/loci.csv. Here are some excerpts separated by three dots
Step5: Regardless of whether we have one or more Locus objects, they are read as a list, which means that they are wrapped with square brackets and separated by comma
Step6: This command will start the Project and will write it to the pickle file outputs/dummy.pkpj
Step7: This will provoke a bunch of Git related messages which will be discussed in the version control section of this tutorial.
If we print the Project we'll get this massage
Step8: 3.2.3 Modifying the loci of an existing Project
As you have seen, when you start a Project you pass a list of loci or a csv file name with the loci attributes
Step9: 3.2.4 Quick reference | Python Code:
from reprophylo import *
Explanation: See quick reference at the bottom
See full module reference section for full details
In the beginning of each analysis, the first step is to load ReproPhylo and its dependencies with the command:
End of explanation
coi = Locus(char_type='dna',
feature_type='CDS',
name='MT-CO1',
aliases=['cox1', 'coi'])
Explanation: Once this is done we can start a Project. A Project contains all the data, metadata, methods and environment information, and it is the unit that is saved as a pickle file, which is version controlled with <a href=http://en.wikipedia.org/wiki/Git_(software)>Git</a>.
Although ReproPhylo is designed to record versions and update the pickle file automatically, this will be opted out of in this tutorial, and will be introduced after we have covered the basics.
Instead, we will manually save a pickle file at the end of each section, and will load it in the next one. You should use the same pickle file name at the end of all the sections. The new content will be added to the one already present in the file.
If you want to jump ahead, there are presaved pickle files (Tutorial_files/basic/outputs), numbered according to the section after which they were saved. For example, outputs/3.6.alignments.pkpj was saved at the end of section 3.6 and can be loaded at the top of section 3.7, instead of your own file.
To start a Project, we have to specify the loci to analyse (not actual sequence data, only some information on the loci) and a pickle file name.
3.2.1 Describing Loci
A Locus can be described manually using a command or by providing a file. For each Locus, we have to specify the character type (DNA or protein) the feature type (eg, rRNA, CDS or gene), the name of the locus (eg, MT-CO1) and other possible aliases which may come handy if we want to read a genbank file (eg, cox1, coi).
Describe loci using a command
End of explanation
print coi
Explanation: This is a single Locus description (a Locus object). We can confirm its content by printing it like this:
End of explanation
list_loci_in_genbank('data/Tetillidae.gb', # The input genbank
# file
'data/loci.csv', # The loci file
'outputs/loci_counts.txt') # Additional
# output,
# discussed
# below.
Explanation: Describing loci using a file
Another way of describing loci is to write them in a file. The file has one line for each Locus, where each line has at least four items, separated by commas. The items, as above, are the character type, the feature type, the name of the locus and other possible aliases. At least one alias must be specified, but it can be identical to the name. For the MT-CO1 Locus, a file would look like this:
dna,CDS,MT-CO1,cox1,coi
Deducing a loci file from a genbank file
A third way of describing loci is to run a command that guesses them from a genbank file and writes them into a comma delimited file, as above. This file can be used as is, or it can be edited. The following command will prepare such a loci file from a genbank file containing all the GenBank records belonging to the sponge family Tetillidae. Text starting with a hash (#) is a comment which do not affect the command:
End of explanation
ssu = Locus('dna','rRNA','18S',['ssu','SSU-rRNA'])
Explanation: The command generated the loci file and wrote it in data/loci.csv. Here are some excerpts separated by three dots:
<pre>
dna,rRNA,18s,18S ribosomal RNA,18S rRNA
dna,rRNA,28s,28S large subunit ribosomal RNA,28S ribosomal RNA
...
dna,CDS,MT-ATP8,atp8,ATP8
dna,CDS,MT-CO1,coi,COI,cox1,COX1,coxI
...
dna,rRNA,rnl,rnl
dna,rRNA,rns,rns
dna,rRNA,rrnL,rrnL
</pre>
Each line represents a locus that was found in the genbank file data/Tetillidae.gb. For some genes, such as 18s, synonyms were recognized and placed as aliases in one line. In other cases, such as for rnl and rrnL, they were not.
Editing the loci file
Possible edits to this file include:
Synonymization. This is done by adding a comma and a shared integer in all the lines that are the same locus. For example the lines
dna,rRNA,rnl,rnl
dna,rRNA,rrnL,rrnL
will become
dna,rRNA,rnl,rnl,9
dna,rRNA,rrnL,rrnL,9
Which integer is written is unimportant, as long as it is shared between synonymous lines.
Change of character type. If our data includes translations to protein sequence, we can change dna to prot, as such:
prot,CDS,MT-CO1,coi,COI,cox1,COX1,coxI.
This will tell the program to use protein sequences instead of DNA sequence. The sequence alignment tutorial explains how to use both protein and DNA sequence of the same locus to conduct codon alignment.
Deletion of loci. It is possible to delete loci we do not want to analyse. They will not be read, even if they exist in our data.
The second file that the command above produced, the outputs/loci_counts.txt, contains a list of the loci found in the genbank file, with the number of their occurrences. This can be used as a guide when deciding which loci to delete and which to keep.
3.2.2 Loading loci to a new Project
Loading Locus objects
First we'll make another Locus object to make a point that more than one can be read:
End of explanation
loci_list = [coi, ssu]
Explanation: Regardless of whether we have one or more Locus objects, they are read as a list, which means that they are wrapped with square brackets and separated by comma:
End of explanation
pj = Project('data/edited_loci.csv',
pickle='outputs/my_project.pkpj', git=False)
Explanation: This command will start the Project and will write it to the pickle file outputs/dummy.pkpj:
<pre>
pj = Project(loci_list, pickle='outputs/dummy.pkpj')
</pre>
This following alternative will start a Project and will load the loci from a file data/edited_loci.csv that looks like this:
<pre>
dna,rRNA,18s,18S ribosomal RNA,18S rRNA
dna,rRNA,28s,28S large subunit ribosomal RNA
dna,CDS,MT-CO1,coi,COI,cox1,COX1,coxI
</pre>
End of explanation
print pj
Explanation: This will provoke a bunch of Git related messages which will be discussed in the version control section of this tutorial.
If we print the Project we'll get this massage:
End of explanation
# Update the pickle file
pickle_pj(pj, 'outputs/my_project.pkpj')
Explanation: 3.2.3 Modifying the loci of an existing Project
As you have seen, when you start a Project you pass a list of loci or a csv file name with the loci attributes:
pj = Project(loci_list, pickle='filename')
Once the Project exists, it is possible to modify the Locus objects it contains. To add a Locus, you need to create it, as you have done:
lsu = Locus('dna', 'rRNA', '28S', ['28s','LSU-rRNA'])
and then also add it to the Project. Loci are stored in a list called pj.loci. So the new Locus can be appended to it:
pj.loci.append(ssu)
or if we have a list of new loci to add, for example:
new_loci_list = [nd5, lsu]
it can be added to the loci list like so:
pj.loci += new_loci_list
Lastly, we can modify loci that are already in pj.loci. For example, change the name and add an alias to the MT-CO1 Locus object:
<pre>
for l in pj.loci: # Find the Locus named MT-CO1
if l.name == 'MT-CO1':
l.name = 'COI' # Rename it to COI
l.aliases.append('coi') # Add the alias coi
</pre>
End of explanation
# A Locus object
coi = Locus(char_type='dna', # or 'prot'
feature_type='CDS', # any string
name='MT-CO1', # any string
aliases=['coi', 'cox1']) # list of strings
# Guess loci.csv file from a genbank file
list_loci_in_genbank('genbank.gb',
'loci.csv',
'loci_counts.txt')
# Start a Project
# With a Locus object list
pj = Project([coi, ssu], pickle='pickle_filename')
# With a loci.csv file
pj = Project('loci.csv', pickle='pickle_filename')
# Add a Locus to an existing Project
pj.loci.append(coi)
#Or
pj.loci += [coi]
# Modify a Locus existing in a Project
for l in pj.loci:
if l.name == 'MT-CO1':
l.name = 'newName'
l.feature_type = 'newFeatureType'
l.char_type = 'prot'
l.aliases.append('newAlias')
#Or
l.aliases += ['newAlias1,newAlias2']
Explanation: 3.2.4 Quick reference
End of explanation |
12,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3
Step1: if ... else statement
python
if <condition>
Step2: if ...elif ... else statement
python
if <condition>
Step3: Imagine that in the above program, 23 is the temperature which was read by some sensor or manually entered by the user and Normal is the response of the program.
Step4: One line if
There are instances where we might need to have the entire if construct on a single line.
If the code block is composed of only one line, it can be written after the colon | Python Code:
password = input("Please enter the password:")
if password == "Simsim":
print("\t> Welcome to the cave")
x = "Mayank"
y = "TEST"
if y == "TEST":
print(x)
if y:
print("Hello World")
z = None
if z:
print("TEST")
x = 11
if x > 10:
print("Hello")
if x > 10.999999999999:
print("Hello again")
if x % 2 == 0:
print("Bye bye bye ...")
x = 10
y = None
z = "111"
print(id(y))
if x:
print("Hello in x")
if y:
print("Hello in Y")
if z:
print("Hello in Z")
Explanation: Chapter 3: Compound statements
Compound statements contain one or more groups of other statements; they affect or control the execution of those other statements in some way.
In general, they span multiple lines, but they can also be listed in a single line.
The if, while and for statements implement traditional control flow constructs, whereas try specifies exception handlers and/or cleanup code for a group of statements, while the with statement allows the execution of initialization and finalization code around a block of code. Function and class definitions are also syntactically compound statements.
They consist of one or more 'clauses'. A clause consists of a 'header' and a 'suite'.
The 'clause' headers of a particular compound statement are all at the same indentation level. They should begin with a uniquely identifying keyword and should end with a colon.
A 'suite' is a group of statements controlled by a clause. It can be one or more semicolon-separated simple statements on the same line as the header, following the header's colon (a one-liner), or it can be one or more indented statements on subsequent lines. Only the latter form of a suite can contain nested compound statements; the following is illegal, mostly because it wouldn't be clear to which if clause a following else clause would belong:
Traditional Control Flow Constructs
if Statement
The if statement is used for conditional execution, similar to that in most common languages. An if statement can be constructed in three formats depending on our need.
if: when we have "if something do something" condition
if .. else: when we have condition like "if something: do something else do something else"
if .. elif ..else: When we have too many conditions or nested conditions
if
This format is used when specific operation needs to be performed if a specific condition is met.
Syntax:
python
if <condition>:
<code block>
Where:
<condition>: sentence that can be evaluated as true or false.
<code block>: sequence of command lines.
The clauses elif and else are optional and several elifs for the if may be used but only one else at the end.
Parentheses are only required to avoid ambiguity.
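For instance, parentheses are not required around a simple comparison, but they can make a compound condition easier to read:
if (temp > 20) and (temp < 26): print('Room temperature')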
Example:
End of explanation
x = "Anuja"
if x == "mayank":
print("Name is mayank")
else:
print("Name is not mayank and its", x)
Explanation: if ... else statement
python
if <condition>:
<code block>
else:
<code block>
Where:
<condition>: sentence that can be evaluated as true or false.
<code block>: sequence of command lines.
The clauses elif and else are optional and several elifs for the if may be used but only one else at the end.
Parentheses are only required to avoid ambiguity.
End of explanation
# temperature value used to test
temp = 31
if temp < 0:
print ('Freezing...')
elif 0 <= temp <= 20:
print ('Cold')
elif 21 <= temp <= 25:
print ('Room Temprature')
elif 26 <= temp <= 35:
print ('Hot')
else:
print ('Its very HOT!, lets stay at home... \nand drink lemonade.')
# temperature value used to test
temp = 60
if temp < 0:
print ('Freezing...')
elif 0 <= temp <= 20:
print ('Cold')
elif 21 <= temp <= 25:
print ('Room Temprature')
elif 26 <= temp <= 35:
print ('Hot')
else:
print ('Its very HOT!, lets stay at home... \nand drink lemonade.')
Explanation: if ...elif ... else statement
python
if <condition>:
<code block>
elif <condition>:
<code block>
elif <condition>:
<code block>
else:
<code block>
Where:
<condition>: sentence that can be evaluated as true or false.
<code block>: sequence of command lines.
The clauses elif and else are optional and several elifs for the if may be used but only one else at the end.
Parentheses are only required to avoid ambiguity.
End of explanation
a = "apple"
b = "banana"
c = "Mango"
if a == "apple":
print("apple")
elif b == "Mango":
print("mango")
elif c == "Mango":
print("My Mango farm")
Explanation: Imagine that in the above program, 23 is the temperature which was read by some sensor or manually entered by the user and Normal is the response of the program.
End of explanation
x = 20
if x > 10: print ("Hello ")
print("-"*30)
val = 1 if x < 10 else 24
print(val)
Explanation: One line if
There are instances where we might need to have the entire if construct on a single line.
If the code block is composed of only one line, it can be written after the colon:
if temp < 0: print 'Freezing...'
Since version 2.5, Python supports the expression:
<variable> = <value 1> if <condition> else <value 2>
Where <variable> receives <value 1> if <condition> is true and <value 2> otherwise.
End of explanation |
12,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This IPython Notebook is for integrating filter curves with the spectra to show the Si gap's effect size on transmission in IR imaging.
Author
Step1: From the Thorlabs website
Step2: Normalize the transmission
Step3: Drop wavelengths shorter than 1200 nm since they are absorbed.
Step4: Construct a model.
Step5: Plot the integrated flux for a variety of gap sizes.
Define an integral function.
Step6: Small gaps. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
Explanation: This IPython Notebook is for integrating filter curves with the spectra to show the Si gap's effect size on transmission in IR imaging.
Author: Michael Gully-Santiago, [email protected]
Date: April 16, 2015
First, let's see if we can get the filter curve data from say... Thorlabs.
End of explanation
fc = pd.read_excel("../data/FB1250-10.xlsx", sheetname='Transmission Data', parse_cols=[2,3,4], skipfooter=2)
fc.tail()
fc.columns
Explanation: From the Thorlabs website:
https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=1000
Read in the filter curve, fc
End of explanation
fc['wavelength'] = fc['Wavelength (nm)']
fc['transmission'] = fc['% Transmission']/fc['% Transmission'].max()
Explanation: Normalize the transmission
End of explanation
fc.drop(fc.index[fc.wavelength < 1150], inplace=True)
sns.set_context('notebook', font_scale=1.5)
Explanation: Drop wavelengths shorter than 1200 nm since they are absorbed.
End of explanation
import etalon as etalon
np.random.seed(78704)
fc.wavelength.values
n1 = etalon.sellmeier_Si(fc.wavelength.values)
dsp = etalon.T_gap_Si_fast(fc.wavelength, 0.0, n1)
sns.set_context('paper', font_scale=1.6)
sns.set_style('ticks')
model_absolute = etalon.T_gap_Si_fast(fc.wavelength, 50.0, n1)
model = model_absolute/dsp
plt.plot(fc.wavelength, model,label='50 nm gap spectrum')
model_absolute = etalon.T_gap_Si_fast(fc.wavelength, 250.0, n1)
model = model_absolute/dsp
plt.plot(fc.wavelength, model,'--', label='250 nm gap spectrum')
plt.fill_betweenx(fc.transmission, fc.wavelength, color='k',alpha=0.3)
plt.text(1260, 0.5, 'FB1250-10', fontsize=14)
#plt.plot(fc.wavelength, fc.transmission, '--',label='Filter Curve')
plt.xlabel('$\lambda$ (nm)')
plt.ylabel('T')
plt.legend(loc='lower right')
plt.xlim(1200, 1400)
plt.savefig('../figs/F1250_10_filter.pdf')
Explanation: Construct a model.
End of explanation
fc.transmission_norm = fc.transmission/fc.transmission.sum()
integrate_flux = lambda x: (x * fc.transmission_norm).sum()
Explanation: Plot the integrated flux for a variety of gap sizes.
Define an integral function.
End of explanation
gap_sizes = np.arange(0, 50, 2)
gap_trans = [integrate_flux(etalon.T_gap_Si_fast(fc.wavelength, gap_size, n1)/dsp) for gap_size in gap_sizes]
sns.set_context('paper', font_scale=1.6)
sns.set_style('ticks')
plt.plot(gap_sizes, gap_trans, 's', label='Integrated transmission')
plt.xlabel('Gap axial extent $d$ (nm)')
plt.ylabel('FB1250-10 Transmission')
plt.hlines(1.0, 0, 50, label='100% transmission')
plt.hlines(0.998, 0, 50, linestyle='dashed', label = '99.8% transmission')
plt.legend(loc='lower left')
plt.savefig('../figs/FB1250-10_integ_trans.pdf')
out_tbl = pd.DataFrame({'d (nm)':gap_sizes[::4], 'FB1250-10 Transmission':gap_trans[::4]})
out_tbl['FB1250-10 Transmission'] = out_tbl['FB1250-10 Transmission'].round(3)
out_tbl = out_tbl[['d (nm)','FB1250-10 Transmission']]
out_tbl
out_tbl.to_latex('../tbls/tbl_FB1250_raw.tex', index=False)
Explanation: Small gaps.
End of explanation |
12,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark + Python = PySpark
This notebook introduces the basic concepts of Spark through its Python interface. As an initial application we will build the classic word-count example. This example illustrates the functional-programming logic behind the various tasks of exploring distributed data.
For that we will use the book The Complete Works of William Shakespeare, obtained from Project Gutenberg. We will see that the same algorithm can be applied to texts of any size.
This notebook contains
Step2: (1b) Plural
Vamos criar uma função que transforma uma palavra no plural adicionando uma letra 's' ao final da string. Em seguida vamos utilizar a função map() para aplicar a transformação em cada palavra do RDD.
Em Python (e muitas outras linguagens) a concatenação de strings é custosa. Uma alternativa melhor é criar uma nova string utilizando str.format().
Nota
Step3: (1c) Aplicando a função ao RDD
Transforme cada palavra do nosso RDD em plural usando map()
Em seguida, utilizaremos o comando collect() que retorna a RDD como uma lista do Python.
Step4: Nota
Step5: (1e) Tamanho de cada palavra
Agora use map() e uma função lambda para retornar o número de caracteres em cada palavra. Utilize collect() para armazenar o resultado em forma de listas na variável destino.
Step6: (1f) RDDs de pares e tuplas
Para contar a frequência de cada palavra de maneira distribuÃda, primeiro devemos atribuir um valor para cada palavra do RDD. Isso irá gerar um base de dados (chave, valor). Desse modo podemos agrupar a base através da chave, calculando a soma dos valores atribuÃdos. No nosso caso, vamos atribuir o valor 1 para cada palavra.
Um RDD contendo a estrutura de tupla chave-valor (k,v) é chamada de RDD de tuplas ou pair RDD.
Vamos criar nosso RDD de pares usando a transformação map() com uma função lambda().
Step7: Parte 2
Step8: (2b) Calculando as contagens
Após o groupByKey() nossa RDD contém elementos compostos da palavra, como chave, e um iterador contendo todos os valores correspondentes aquela chave.
Utilizando a transformação mapValues() e a função sum(), contrua um novo RDD que consiste de tuplas (chave, soma).
Step9: (2c) reduceByKey
Um comando mais interessante para a contagem é o reduceByKey() que cria uma nova RDD de tuplas.
Essa transformação aplica a transformação reduce() vista na aula anterior para os valores de cada chave. Dessa forma, a função de transformação pode ser aplicada em cada partição local para depois ser enviada para redistribuição de partições, reduzindo o total de dados sendo movidos e não mantendo listas grandes na memória.
Step10: (2d) Agrupando os comandos
A forma mais usual de realizar essa tarefa, partindo do nosso RDD palavrasRDD, é encadear os comandos map e reduceByKey em uma linha de comando.
Step11: Parte 3
Step12: (3b) Calculando a Média de contagem de palavras
Encontre a média de frequência das palavras utilizando o RDD contagem.
Note que a função do comando reduce() é aplicada em cada tupla do RDD. Para realizar a soma das contagens, primeiro é necessário mapear o RDD para um RDD contendo apenas os valores das frequências (sem as chaves).
Step14: Parte 4
Step16: (4b) Normalizando o texto
Quando trabalhamos com dados reais, geralmente precisamos padronizar os atributos de tal forma que diferenças sutis por conta de erro de medição ou diferença de normatização, sejam desconsideradas. Para o próximo passo vamos padronizar o texto para
Step17: (4c) Carregando arquivo texto
Para a próxima parte vamos utilizar o livro Trabalhos completos de William Shakespeare do Projeto Gutenberg.
Para converter um texto em uma RDD, utilizamos a função textFile() que recebe como entrada o nome do arquivo texto que queremos utilizar e o número de partições.
O nome do arquivo texto pode se referir a um arquivo local ou uma URI de arquivo distribuÃdo (ex.
Step18: (4d) Extraindo as palavras
Antes de poder usar nossa função Before we can use the contaPalavras(), temos ainda que trabalhar em cima da nossa RDD
Step19: Conforme deve ter percebido, o uso da função map() gera uma lista para cada linha, criando um RDD contendo uma lista de listas.
Para resolver esse problema, o Spark possui uma função análoga chamada flatMap() que aplica a transformação do map(), porém achatando o retorno em forma de lista para uma lista unidimensional.
Step20: Nota
Step21: (4e) Remover linhas vazias
Para o próximo passo vamos filtrar as linhas vazias com o comando filter(). Uma linha vazia é uma string sem nenhum conteúdo.
Step22: (4f) Contagem de palavras
Agora que nossa RDD contém uma lista de palavras, podemos aplicar nossa função contaPalavras().
Aplique a função em nossa RDD e utilize a função takeOrdered para imprimir as 15 palavras mais frequentes.
takeOrdered() pode receber um segundo parâmetro que instrui o Spark em como ordenar os elementos. Ex.
Step24: Parte 5
Step28: (5b) Valores Categóricos
Quando nossos objetos são representados por atributos categóricos, eles não possuem uma similaridade espacial. Para calcularmos a similaridade entre eles podemos primeiro transformar nosso vetor de atrbutos em um vetor binário indicando, para cada possÃvel valor de cada atributo, se ele possui esse atributo ou não.
Com o vetor binário podemos utilizar a distância de Hamming definida por | Python Code:
from pyspark import SparkContext
sc =SparkContext()
ListaPalavras = ['gato', 'elefante', 'rato', 'rato', 'gato']
palavrasRDD = sc.parallelize(ListaPalavras, 4)
print type(palavrasRDD)
Explanation: Spark + Python = PySpark
This notebook introduces the basic concepts of Spark through its Python interface. As an initial application we will build the classic word-count example. This example illustrates the functional-programming logic behind the various tasks of exploring distributed data.
For that we will use the book The Complete Works of William Shakespeare, obtained from Project Gutenberg. We will see that the same algorithm can be applied to texts of any size.
This notebook contains:
Part 1: Creating a base RDD and pair RDDs
Part 2: Manipulating pair RDDs
Part 3: Finding unique words and computing averages
Part 4: Applying word count to a file
Part 5: Similarity between objects
For the exercises it is advisable to consult the PySpark API documentation
Part 1: Creating and manipulating RDDs
In this part of the notebook we will create a base RDD from a Python list with the parallelize command.
(1a) Creating a base RDD
We can create a base RDD from several Python types and sources with the command sc.parallelize(fonte, particoes), where fonte is a variable containing the data (e.g., a list) and particoes is the number of partitions to work on in parallel.
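As a small, hypothetical check of how the list was split across partitions (using only standard RDD methods on the palavrasRDD created above):
print palavrasRDD.getNumPartitions()  # 4, the number of partitions requested above
print palavrasRDD.glom().collect()    # the words grouped by partition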
End of explanation
# EXERCICIO
def Plural(palavra):
Adds an 's' to `palavra`.
Args:
palavra (str): A string.
Returns:
str: A string with 's' added to it.
return "{0}{1}".format(palavra,"s")#<COMPLETAR>
print Plural('gato')
help(Plural)
assert Plural('rato')=='ratos', 'resultado incorreto!'
print 'OK'
Explanation: (1b) Plural
Let's create a function that turns a word into its plural by appending the letter 's' to the end of the string. Then we will use the map() function to apply this transformation to every word in the RDD.
In Python (and many other languages) string concatenation is costly. A better alternative is to build a new string using str.format().
Note: the string between the sets of triple quotes is the function's documentation (docstring). This documentation is displayed with the help() command. We will follow the documentation convention suggested for Python, and we will keep this documentation in English.
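As a tiny illustration of the two styles mentioned above (plain concatenation versus str.format()), both of which build the same new string here:
word = 'gato'
print word + 's'                  # concatenation creates a new string
print '{0}{1}'.format(word, 's')  # str.format() also creates a new string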
End of explanation
# EXERCICIO
pluralRDD = palavrasRDD.map(Plural)#<COMPLETAR>
print pluralRDD.collect()
assert pluralRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Explanation: (1c) Aplicando a função ao RDD
Transforme cada palavra do nosso RDD em plural usando map()
Em seguida, utilizaremos o comando collect() que retorna a RDD como uma lista do Python.
End of explanation
# EXERCICIO
pluralLambdaRDD = palavrasRDD.map(lambda x: "{0}{1}".format(x,"s"))#<COMPLETAR>
print pluralLambdaRDD.collect()
assert pluralLambdaRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Explanation: Nota: utilize o comando collect() apenas quando tiver certeza de que a lista caberá na memória. Para gravar os resultados de volta em arquivo texto ou base de dados utilizaremos outro comando.
(1d) Utilizando uma função lambda
Repita a criação de um RDD de plurais, porém utilizando uma função lambda.
End of explanation
# EXERCICIO
pluralTamanho = (pluralRDD.map(lambda x: len(x))
#<COMPLETAR>
).collect()
print pluralTamanho
assert pluralTamanho==[5,9,5,5,5], 'valores incorretos'
print "OK"
Explanation: (1e) Tamanho de cada palavra
Agora use map() e uma função lambda para retornar o número de caracteres em cada palavra. Utilize collect() para armazenar o resultado em forma de listas na variável destino.
End of explanation
# EXERCICIO
palavraPar = palavrasRDD.map(lambda x: (x,1))#<COMPLETAR>
print palavraPar.collect()
assert palavraPar.collect() == [('gato',1),('elefante',1),('rato',1),('rato',1),('gato',1)], 'valores incorretos!'
print "OK"
Explanation: (1f) RDDs de pares e tuplas
Para contar a frequência de cada palavra de maneira distribuÃda, primeiro devemos atribuir um valor para cada palavra do RDD. Isso irá gerar um base de dados (chave, valor). Desse modo podemos agrupar a base através da chave, calculando a soma dos valores atribuÃdos. No nosso caso, vamos atribuir o valor 1 para cada palavra.
Um RDD contendo a estrutura de tupla chave-valor (k,v) é chamada de RDD de tuplas ou pair RDD.
Vamos criar nosso RDD de pares usando a transformação map() com uma função lambda().
End of explanation
# EXERCICIO
palavrasGrupo = palavraPar.groupByKey()
for chave, valor in palavrasGrupo.collect():
print '{0}: {1}'.format(chave, list(valor))
assert sorted(palavrasGrupo.mapValues(lambda x: list(x)).collect()) == [('elefante', [1]), ('gato',[1, 1]), ('rato',[1, 1])], 'Valores incorretos!'
print "OK"
Explanation: Parte 2: Manipulando RDD de tuplas
Vamos manipular nossa RDD para contar as palavras do texto.
(2a) Função groupByKey()
A função groupByKey() agrupa todos os valores de um RDD através da chave (primeiro elemento da tupla) agregando os valores em uma lista.
Essa abordagem tem um ponto fraco pois:
A operação requer que os dados distribuÃdos sejam movidos em massa para que permaneçam na partição correta.
As listas podem se tornar muito grandes. Imagine contar todas as palavras do Wikipedia: termos comuns como "a", "e" formarão uma lista enorme de valores que pode não caber na memória do processo escravo.
End of explanation
# EXERCICIO
contagemGroup = palavrasGrupo.mapValues(lambda x: sum (x))#<COMPLETAR>
print contagemGroup.collect()
assert list(sorted(contagemGroup.collect()))==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2b) Calculando as contagens
Após o groupByKey() nossa RDD contém elementos compostos da palavra, como chave, e um iterador contendo todos os valores correspondentes aquela chave.
Utilizando a transformação mapValues() e a função sum(), contrua um novo RDD que consiste de tuplas (chave, soma).
End of explanation
# EXERCICIO
from operator import add
contagem = palavraPar.reduceByKey(add)#<COMPLETAR>
print contagem.collect()
assert sorted(contagem.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2c) reduceByKey
Um comando mais interessante para a contagem é o reduceByKey() que cria uma nova RDD de tuplas.
Essa transformação aplica a transformação reduce() vista na aula anterior para os valores de cada chave. Dessa forma, a função de transformação pode ser aplicada em cada partição local para depois ser enviada para redistribuição de partições, reduzindo o total de dados sendo movidos e não mantendo listas grandes na memória.
End of explanation
# EXERCICIO
contagemFinal = (palavrasRDD.map(lambda x:(x,1)).reduceByKey(add)
#<COMPLETAR>
#<COMPLETAR>
)
contagemFinal = contagemFinal.collect()
print contagemFinal
assert sorted(contagemFinal)==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2d) Agrupando os comandos
A forma mais usual de realizar essa tarefa, partindo do nosso RDD palavrasRDD, é encadear os comandos map e reduceByKey em uma linha de comando.
End of explanation
# EXERCICIO
palavrasUnicas = (palavrasRDD.map(lambda x:(x,1)).reduceByKey(lambda y,z:1)).collect()#<COMPLETAR>
palavrasUnicas = len(palavrasUnicas)
print palavrasUnicas
assert palavrasUnicas==3, 'valor incorreto!'
print "OK"
Explanation: Parte 3: Encontrando as palavras únicas e calculando a média de contagem
(3a) Palavras Ãnicas
Calcule a quantidade de palavras únicas do RDD. Utilize comandos de RDD da API do PySpark e alguma das últimas RDDs geradas nos exercÃcios anteriores.
End of explanation
# EXERCICIO
# add é equivalente a lambda x,y: x+y
palavrasRDD2 = sc.parallelize(contagemFinal)
#print palavrasRDD2.collect()
total = (palavrasRDD2.map(lambda x:(x[1])).reduce(add))
media = total / float(palavrasUnicas)
print total
print round(media, 2)
assert round(media, 2)==1.67, 'valores incorretos!'
print "OK"
Explanation: (3b) Calculando a Média de contagem de palavras
Encontre a média de frequência das palavras utilizando o RDD contagem.
Note que a função do comando reduce() é aplicada em cada tupla do RDD. Para realizar a soma das contagens, primeiro é necessário mapear o RDD para um RDD contendo apenas os valores das frequências (sem as chaves).
End of explanation
# EXERCICIO
def contaPalavras(chavesRDD):
Creates a pair RDD with word counts from an RDD of words.
Args:
chavesRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
return (chavesRDD.map(lambda x: (x,1)).reduceByKey(add)
#<COMPLETAR>
#<COMPLETAR>
)
print contaPalavras(palavrasRDD).collect()
assert sorted(contaPalavras(palavrasRDD).collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: Parte 4: Aplicar nosso algoritmo em um arquivo
(4a) Função contaPalavras
Para podermos aplicar nosso algoritmo genéricamente em diversos RDDs, vamos primeiro criar uma função para aplicá-lo em qualquer fonte de dados. Essa função recebe de entrada um RDD contendo uma lista de chaves (palavras) e retorna um RDD de tuplas com as chaves e a contagem delas nessa RDD
End of explanation
# EXERCICIO
import re
def removerPontuacao(texto):
Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
Only spaces, letters, and numbers should be retained. Other characters should should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
texto (str): A string.
Returns:
str: The cleaned up string.
return re.sub(r'[^A-Za-z0-9 ]', '', texto).strip().lower()
print removerPontuacao('Ola, quem esta ai??!')
print removerPontuacao(' Sem espaco e_sublinhado!')
assert removerPontuacao(' O uso de virgulas, embora permitido, nao deve contar. ')=='o uso de virgulas embora permitido nao deve contar', 'string incorreta!'
print "OK"
Explanation: (4b) Normalizando o texto
Quando trabalhamos com dados reais, geralmente precisamos padronizar os atributos de tal forma que diferenças sutis por conta de erro de medição ou diferença de normatização, sejam desconsideradas. Para o próximo passo vamos padronizar o texto para:
Padronizar a capitalização das palavras (tudo maiúsculo ou tudo minúsculo).
Remover pontuação.
Remover espaços no inÃcio e no final da palavra.
Crie uma função removerPontuacao que converte todo o texto para minúscula, remove qualquer pontuação e espaços em branco no inÃcio ou final da palavra. Para isso, utilize a biblioteca re para remover todo texto que não seja letra, número ou espaço, encadeando com as funções de string para remover espaços em branco e converter para minúscula (veja Strings).
End of explanation
# Apenas execute a célula
import os.path
import urllib2
url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt' # url do livro
arquivo = os.path.join('Data','Aula02','shakespeare.txt') # local de destino: 'Data/Aula02/shakespeare.txt'
if os.path.isfile(arquivo): # verifica se já fizemos download do arquivo
print 'Arquivo já existe!'
else:
try:
response = urllib2.urlopen(url)
arquivo = (response.read()).split() #ja gera uma lista de palavras
except IOError:
print 'ImpossÃvel fazer o download: {0}'.format(url)
# lê o arquivo com textFile e aplica a função removerPontuacao
shakesRDD = (sc.textFile(arquivo).map(removerPontuacao))
# zipWithIndex gera tuplas (conteudo, indice) onde indice é a posição do conteudo na lista sequencial
# Ex.: sc.parallelize(['gato','cachorro','boi']).zipWithIndex() ==> [('gato',0), ('cachorro',1), ('boi',2)]
# sep.join() junta as strings de uma lista através do separador sep. Ex.: ','.join(['a','b','c']) ==> 'a,b,c'
print '\n'.join(shakesRDD
.zipWithIndex()
.map(lambda (linha, num): '{0}: {1}'.format(num,linha))
.take(15)
)
Explanation: (4c) Carregando arquivo texto
Para a próxima parte vamos utilizar o livro Trabalhos completos de William Shakespeare do Projeto Gutenberg.
Para converter um texto em uma RDD, utilizamos a função textFile() que recebe como entrada o nome do arquivo texto que queremos utilizar e o número de partições.
O nome do arquivo texto pode se referir a um arquivo local ou uma URI de arquivo distribuÃdo (ex.: hdfs://).
Vamos também aplicar a função removerPontuacao() para normalizar o texto e verificar as 15 primeiras linhas com o comando take().
End of explanation
# EXERCICIO
shakesPalavrasRDD = shakesRDD.map(lambda x: x.split())#<COMPLETAR>
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.take(5)
print total
Explanation: (4d) Extraindo as palavras
Antes de poder usar nossa função Before we can use the contaPalavras(), temos ainda que trabalhar em cima da nossa RDD:
Precisamos gerar listas de palavras ao invés de listas de sentenças.
Eliminar linhas vazias.
As strings em Python tem o método split() que faz a separação de uma string por separador. No nosso caso, queremos separar as strings por espaço.
Utilize a função map() para gerar um novo RDD como uma lista de palavras.
End of explanation
# EXERCICIO
shakesPalavrasRDD = shakesRDD.flatMap(lambda x: x.split())
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.top(5)
print total
Explanation: Conforme deve ter percebido, o uso da função map() gera uma lista para cada linha, criando um RDD contendo uma lista de listas.
Para resolver esse problema, o Spark possui uma função análoga chamada flatMap() que aplica a transformação do map(), porém achatando o retorno em forma de lista para uma lista unidimensional.
End of explanation
#assert total==927631 or total == 928908, "valor incorreto de palavras!"
#print "OK"
assert shakesPalavrasRDD.top(5)==[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],'lista incorreta de palavras'
print "OK"
Explanation: Nota: os asserts abaixo de contagem de palavra podem falhar por diferença de formato do arquivo .txt antigo e novo. Eu avaliarei somente os códigos nesse trecho.
End of explanation
# EXERCICIO
shakesLimpoRDD = shakesPalavrasRDD.filter(lambda x: len(x)>0)#<COMPLETAR>
total = shakesLimpoRDD.count()
print total
assert total==882996, 'valor incorreto!'
print "OK"
Explanation: (4e) Remover linhas vazias
Para o próximo passo vamos filtrar as linhas vazias com o comando filter(). Uma linha vazia é uma string sem nenhum conteúdo.
End of explanation
# EXERCICIO
#print contaPalavras(shakesLimpoRDD).collect()
top15 = contaPalavras(shakesLimpoRDD).takeOrdered(15, lambda x: -x[1])#<COMPLETAR>
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15))
assert top15 == [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],'valores incorretos!'
print "OK"
Explanation: (4f) Contagem de palavras
Agora que nossa RDD contém uma lista de palavras, podemos aplicar nossa função contaPalavras().
Aplique a função em nossa RDD e utilize a função takeOrdered para imprimir as 15 palavras mais frequentes.
takeOrdered() pode receber um segundo parâmetro que instrui o Spark em como ordenar os elementos. Ex.:
takeOrdered(15, key=lambda x: -x): ordem decrescente dos valores de x
End of explanation
import numpy as np
# Vamos criar uma função pNorm que recebe como parâmetro p e retorna uma função que calcula a pNorma
def pNorm(p):
Generates a function to calculate the p-Norm between two points.
Args:
p (int): The integer p.
Returns:
Dist: A function that calculates the p-Norm.
def Dist(x,y):
return np.power(np.power(np.abs(x-y),p).sum(),1/float(p))
return Dist
# Vamos criar uma RDD com valores numéricos
numPointsRDD = sc.parallelize(enumerate(np.random.random(size=(10,100))))
# EXERCISE
# Look for the PySpark command that computes the cartesian product of the dataset with itself
cartPointsRDD = numPointsRDD.cartesian(numPointsRDD)#<COMPLETAR>
# Apply a map to transform our RDD into an RDD of tuples ((id1,id2), (vector1,vector2))
# HINT: first use take(1) and print the result to check the current format of the RDD
cartPointsParesRDD = cartPointsRDD.map(lambda ((x1,x2),(y1,y2)):((x1,y1),(x2,y2)))#<COMPLETAR>
#print cartPointsParesRDD
#Aplique um mapa para calcular a Distância Euclidiana entre os pares
Euclid = pNorm(2)
distRDD = cartPointsParesRDD.map(lambda ((x1,y1),(x2,y2)): Euclid(x2,y2))#<COMPLETAR>
#print(distRDD.collect())
# Find the maximum, minimum and mean distance, applying a map that transforms (key,value) --> value
# and using the built-in PySpark commands for min, max and mean
#statRDD = distRDD.<COMPLETAR>
#minv, maxv, meanv = statRDD.<COMPLETAR>, statRDD.<COMPLETAR>, statRDD.<COMPLETAR>
minv, maxv, meanv = distRDD.min(), distRDD.max(), distRDD.mean()#<COMPLETAR>
print minv, maxv, meanv
assert (minv.round(2), maxv.round(2), meanv.round(2))==(0.0, 4.70, 3.65), 'Valores incorretos'
print "OK"
Explanation: Part 5: Similarity between Objects
In this part of the lab we will learn how to compute distances between numerical, categorical and textual attributes.
(5a) Vectors in Euclidean space
When our objects are represented in Euclidean space, we measure the similarity between them with the p-norm, defined by:
$$d(x,y,p) = (\sum_{i=1}^{n}{|x_i - y_i|^p})^{1/p}$$
The most commonly used norms are $p=1,2,\infty$, which reduce to the absolute distance, the Euclidean distance and the maximum distance:
$$d(x,y,1) = \sum_{i=1}^{n}{|x_i - y_i|}$$
$$d(x,y,2) = (\sum_{i=1}^{n}{|x_i - y_i|^2})^{1/2}$$
$$d(x,y,\infty) = \max(|x_1 - y_1|,|x_2 - y_2|, ..., |x_n - y_n|)$$
End of explanation
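A small worked example (added for illustration, using the pNorm factory defined above) showing how the three norms compare on one pair of vectors:
x_ex = np.array([1.0, 2.0, 3.0])
y_ex = np.array([2.0, 0.0, 3.0])
print pNorm(1)(x_ex, y_ex)         # |1-2| + |2-0| + |3-3| = 3.0
print pNorm(2)(x_ex, y_ex)         # sqrt(1 + 4 + 0) ~ 2.236
print np.max(np.abs(x_ex - y_ex))  # max-norm (p = infinity) = 2.0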
# Vamos criar uma função para calcular a distância de Hamming
def Hamming(x,y):
Calculates the Hamming distance between two binary vectors.
Args:
x, y (np.array): Array of binary integers x and y.
Returns:
H (int): The Hamming distance between x and y.
return (x!=y).sum()
# Vamos criar uma função para calcular a distância de Jaccard
def Jaccard(x,y):
Calculates the Jaccard distance between two binary vectors.
Args:
x, y (np.array): Array of binary integers x and y.
Returns:
J (int): The Jaccard distance between x and y.
return (x==y).sum()/float( np.maximum(x,y).sum() )
# Vamos criar uma RDD com valores categóricos
catPointsRDD = sc.parallelize(enumerate([['alto', 'caro', 'azul'],
['medio', 'caro', 'verde'],
['alto', 'barato', 'azul'],
['medio', 'caro', 'vermelho'],
['baixo', 'barato', 'verde'],
]))
# EXERCISE
# Create an RDD of unique keys using flatMap
chavesRDD = catPointsRDD.flatMap(lambda x: (x[1])).distinct()#.zipWithIndex()
chaves = dict((v,k) for k,v in enumerate(chavesRDD.collect()))
nchaves = len(chaves)
print chaves, nchaves
assert chaves=={'alto': 0, 'medio': 1, 'baixo': 2, 'barato': 3, 'azul': 4, 'verde': 5, 'caro': 6, 'vermelho': 7}, 'valores incorretos!'
print "OK"
assert nchaves==8, 'número de chaves incorreta'
print "OK"
def CreateNP(atributos,chaves):
Binarize the categorical vector using a dictionary of keys.
Args:
atributos (list): List of attributes of a given object.
chaves (dict): dictionary with the relation attribute -> index
Returns:
array (np.array): Binary array of attributes.
array = np.zeros(len(chaves))
for atr in atributos:
array[ chaves[atr] ] = 1
return array
# Converte o RDD para o formato binário, utilizando o dict chaves
binRDD = catPointsRDD.map(lambda rec: (rec[0],CreateNP(rec[1], chaves)))
binRDD.collect()
# EXERCISE
# Look for the PySpark command that computes the cartesian product of the dataset with itself
cartBinRDD = binRDD.cartesian(binRDD)#<COMPLETAR>
# Apply a map to transform our RDD into an RDD of tuples ((id1,id2), (vector1,vector2))
# HINT: first use take(1) and print the result to check the current format of the RDD
cartBinParesRDD = cartBinRDD.map(lambda ((x1,x2),(y1,y2)):((x1,y1),(x2,y2)))#<COMPLETAR>
# Apply a map to compute the Hamming and Jaccard distances between the pairs
hamRDD = cartBinParesRDD.map(lambda ((x1,y1),(x2,y2)): Hamming(x2,y2))#<COMPLETAR>
jacRDD = cartBinParesRDD.map(lambda ((x1,y1),(x2,y2)): Jaccard(x2,y2))#<COMPLETAR>
# Find the maximum, minimum and mean distance, applying a map that transforms (key,value) --> value
# and using the built-in PySpark commands for min, max and mean
#statHRDD = hamRDD.<COMPLETAR>
#statJRDD = jacRDD.<COMPLETAR>
Hmin, Hmax, Hmean = hamRDD.min(), hamRDD.max(), hamRDD.mean()
Jmin, Jmax, Jmean = jacRDD.min(), jacRDD.max(), jacRDD.mean()
#Hmin, Hmax, Hmean = statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>
#Jmin, Jmax, Jmean = statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>
print "\t\tMin\tMax\tMean"
print "Hamming:\t{:.2f}\t{:.2f}\t{:.2f}".format(Hmin, Hmax, Hmean )
print "Jaccard:\t{:.2f}\t{:.2f}\t{:.2f}".format( Jmin, Jmax, Jmean )
assert (Hmin.round(2), Hmax.round(2), Hmean.round(2)) == (0.00,6.00,3.52), 'valores incorretos'
print "OK"
assert (Jmin.round(2), Jmax.round(2), Jmean.round(2)) == (0.33,2.67,1.14), 'valores incorretos'
print "OK"
Explanation: (5b) Categorical values
When our objects are represented by categorical attributes, they have no spatial similarity. To compute the similarity between them, we can first transform the attribute vector into a binary vector indicating, for each possible value of each attribute, whether the object has that value or not.
With the binary vector we can use the Hamming distance, defined by:
$$ H(x,y) = \sum_{i=1}^{n}{[x_i \neq y_i]} $$
It is also possible to define the Jaccard distance as:
$$ J(x,y) = \frac{\sum_{i=1}^{n}{[x_i = y_i]}}{\sum_{i=1}^{n}{\max(x_i, y_i)}} $$
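A tiny worked example (added for illustration) using the Hamming and Jaccard functions defined above:
x_bin = np.array([1, 0, 1, 1])
y_bin = np.array([1, 1, 0, 1])
print Hamming(x_bin, y_bin)  # 2 positions differ
print Jaccard(x_bin, y_bin)  # 2 matching positions / sum of element-wise maxima (4) = 0.5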
End of explanation |
12,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source localization with MNE/dSPM/sLORETA/eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data.
Step1: Process MEG data
Step2: Compute regularized noise covariance
For more details see tut_compute_covariance.
Step3: Compute the evoked response
Let's just use MEG channels for simplicity.
Step4: Inverse modeling
Step5: Compute inverse solution
Step6: Visualization
View activation time-series
Step7: Examine the original data and the residual after fitting
Step8: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
Step9: Morph data to average brain
Step10: Dipole orientations
The pick_ori parameter of the
Step11: Note that there is a relationship between the orientation of the dipoles and
the surface of the cortex. For this reason, we do not use an inflated
cortical surface for visualization, but the original surface used to define
the source space.
For more information about dipole orientations, see
sphx_glr_auto_tutorials_plot_dipole_orientations.py.
Now let's look at each solver | Python Code:
# sphinx_gallery_thumbnail_number = 10
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
Explanation: Source localization with MNE/dSPM/sLORETA/eLORETA
The aim of this tutorial is to teach you how to compute and apply a linear
inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname) # already has an average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
exclude='bads')
baseline = (None, 0) # means from the first instant to t = 0
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, reject=reject)
Explanation: Process MEG data
End of explanation
noise_cov = mne.compute_covariance(
epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True)
fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)
Explanation: Compute regularized noise covariance
For more details see tut_compute_covariance.
End of explanation
evoked = epochs.average().pick_types(meg=True)
evoked.plot(time_unit='s')
evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag',
time_unit='s')
# Show whitening
evoked.plot_white(noise_cov, time_unit='s')
del epochs # to save memory
Explanation: Compute the evoked response
Let's just use MEG channels for simplicity.
End of explanation
# Read the forward solution and compute the inverse operator
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'
fwd = mne.read_forward_solution(fname_fwd)
# make an MEG inverse operator
info = evoked.info
inverse_operator = make_inverse_operator(info, fwd, noise_cov,
loose=0.2, depth=0.8)
del fwd
# You can write it to disk with::
#
# >>> from mne.minimum_norm import write_inverse_operator
# >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif',
# inverse_operator)
Explanation: Inverse modeling: MNE/dSPM on evoked and raw data
End of explanation
method = "dSPM"
snr = 3.
lambda2 = 1. / snr ** 2
stc, residual = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None,
return_residual=True, verbose=True)
Explanation: Compute inverse solution
End of explanation
plt.figure()
plt.plot(1e3 * stc.times, stc.data[::100, :].T)
plt.xlabel('time (ms)')
plt.ylabel('%s value' % method)
plt.show()
Explanation: Visualization
View activation time-series
End of explanation
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts = []
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
Explanation: Examine the original data and the residual after fitting:
End of explanation
vertno_max, time_max = stc.get_peak(hemi='rh')
subjects_dir = data_path + '/subjects'
surfer_kwargs = dict(
hemi='rh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=5)
brain = stc.plot(**surfer_kwargs)
brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',
scale_factor=0.6, alpha=0.5)
brain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title',
font_size=14)
Explanation: Here we use peak getter to move visualization to the time point of the peak
and draw a marker at the maximum peak vertex.
End of explanation
# setup source morph
morph = mne.compute_source_morph(
src=inverse_operator['src'], subject_from=stc.subject,
subject_to='fsaverage', spacing=5, # to ico-5
subjects_dir=subjects_dir)
# morph data
stc_fsaverage = morph.apply(stc)
brain = stc_fsaverage.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Morphed to fsaverage', 'title', font_size=20)
del stc_fsaverage
Explanation: Morph data to average brain
End of explanation
stc_vec = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori='vector')
brain = stc_vec.plot(**surfer_kwargs)
brain.add_text(0.1, 0.9, 'Vector solution', 'title', font_size=20)
del stc_vec
Explanation: Dipole orientations
The pick_ori parameter of the
:func:mne.minimum_norm.apply_inverse function controls
the orientation of the dipoles. One useful setting is pick_ori='vector',
which will return an estimate that does not only contain the source power at
each dipole, but also the orientation of the dipoles.
End of explanation
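A hedged aside (not part of the original tutorial): the per-dipole magnitude of the vector estimate can be recovered with plain NumPy, provided it is computed before the del stc_vec line above, since stc_vec.data is shaped (n_dipoles, 3, n_times):
# Hedged sketch -- run before `del stc_vec`.
dipole_magnitude = np.linalg.norm(stc_vec.data, axis=1)  # shape (n_dipoles, n_times)
print(dipole_magnitude.shape)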
for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]),
('sLORETA', [3, 5, 7]),
('eLORETA', [0.75, 1.25, 1.75]),)):
surfer_kwargs['clim']['lims'] = lims
stc = apply_inverse(evoked, inverse_operator, lambda2,
method=method, pick_ori=None)
brain = stc.plot(figure=mi, **surfer_kwargs)
brain.add_text(0.1, 0.9, method, 'title', font_size=20)
del stc
Explanation: Note that there is a relationship between the orientation of the dipoles and
the surface of the cortex. For this reason, we do not use an inflated
cortical surface for visualization, but the original surface used to define
the source space.
For more information about dipole orientations, see
sphx_glr_auto_tutorials_plot_dipole_orientations.py.
Now let's look at each solver:
End of explanation |
12,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth for Multiple Choice Tasks
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Relation Extraction task
Step3: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step4: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics
Step5: results is a dict object that contains the quality metrics for sentences, relations and crowd workers.
The sentence metrics are stored in results["units"]
Step6: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram
Step7: The unit_annotation_score column in results["units"] contains the sentence-relation scores, capturing the likelihood that a relation is expressed in a sentence. For each sentence, we store a dictionary mapping each relation to its sentence-relation score.
Step8: The worker metrics are stored in results["workers"]
Step9: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
Step10: The relation metrics are stored in results["annotations"]. The aqs column contains the relation quality scores, capturing the overall worker agreement over one relation. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/relex-multiple-choice.csv")
test_data.head()
Explanation: CrowdTruth for Multiple Choice Tasks: Relation Extraction
In this tutorial, we will apply CrowdTruth metrics to a multiple choice crowdsourcing task for Relation Extraction from sentences. The workers were asked to read a sentence with 2 highlighted terms, then pick from a multiple choice list what are the relations expressed between the 2 terms in the sentence. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["sent_id", "term1", "b1", "e1", "term2", "b2", "e2", "sentence"]
outputColumns = ["relations"]
annotation_separator = "\n"
# processing of a closed task
open_ended_task = False
annotation_vector = [
"title", "founded_org", "place_of_birth", "children", "cause_of_death",
"top_member_employee_of_org", "employee_or_member_of", "spouse",
"alternate_names", "subsidiaries", "place_of_death", "schools_attended",
"place_of_headquarters", "charges", "origin", "places_of_residence",
"none"]
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
return judgments
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Relation Extraction task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of relations
processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector
The complete configuration class is declared below:
End of explanation
data, config = crowdtruth.load(
file = "../data/relex-multiple-choice.csv",
config = TestConfig()
)
data['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results = crowdtruth.run(data, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
results["units"].head()
Explanation: results is a dict object that contains the quality metrics for sentences, relations and crowd workers.
The sentence metrics are stored in results["units"]:
End of explanation
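Since results["units"] is a pandas DataFrame, the most ambiguous sentences can be inspected directly -- a small sketch added for illustration:
results["units"].sort_values("uqs").head()  # sentences with the lowest quality scores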
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results["units"]["uqs"])
plt.xlabel("Sentence Quality Score")
plt.ylabel("Sentences")
Explanation: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentence. Here we plot its histogram:
End of explanation
results["units"]["unit_annotation_score"].head()
Explanation: The unit_annotation_score column in results["units"] contains the sentence-relation scores, capturing the likelihood that a relation is expressed in a sentence. For each sentence, we store a dictionary mapping each relation to its sentence-relation score.
End of explanation
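Because each cell holds a relation-to-score mapping, the best-scoring relation per sentence can be pulled out as follows (an illustrative sketch that assumes the cells behave like plain Python dicts):
results["units"]["unit_annotation_score"].apply(lambda d: max(d, key=d.get)).head()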
results["workers"].head()
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
Explanation: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation
results["annotations"]
Explanation: The relation metrics are stored in results["annotations"]. The aqs column contains the relation quality scores, capturing the overall worker agreement over one relation.
End of explanation |
12,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GitHub Analysis
from
In this project we will analyze the most popular open repositories on the popular GitHub site. The data was collected from https
Step1: Let's load the collected data and look at an example.
Step2: Pandas gives us a quick overview of the basic numerical summaries.
Step3: Now let's look at which year these repositories were created in.
Step4: We can see that most projects were created in 2013 and 2014; before that GitHub was not yet so popular, and newer projects have not had time to become famous.
We will often work with repositories grouped by language, so to make the analysis easier we keep only the languages with more than 10 repositories.
Step5: Now let's look at which programming languages appear most often. We show only those with more than 10 repositories, since otherwise the chart is not readable. We can see that JavaScript dominates, followed by Java, Objective-C and Python.
Step6: Let's also draw a chart showing how the number of forks depends on the number of stars. We can try to model this relationship as a linear function, but we quickly see large deviations. Still, the increasing trend suggests that repositories with more stars also tend to have more forks.
Step7: Let's also look at how the number of commits relates to the programming language used. We can see that repositories written in VimL, Shell and Objective-C changed the most, and those in C++, Ruby and CoffeeScript the least.
Step8: A listing of the licenses used in these repositories, sorted by frequency. None means the license is not published, and Other means the project uses its own license.
Step9: Finally, let's find out how many of these repositories are freshly updated (have changes in the last month). True means the repository has recent changes, False means it does not. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
pd.options.display.max_rows = 20
Explanation: GitHub Analysis
from
In this project we will analyze the most popular open repositories on the popular GitHub site. The data was collected from https://api.github.com, and whatever was not available through that REST API was taken from https://github.com.
Collected fields
owner
name
programming language
number of stars
number of commits
number of branches
number of forks
number of releases
number of watchers
number of contributors
license
creation date
date of the last commit
We will analyze which programming languages are the most popular, how "big" these repositories are, ... and how these fields are related to each other.
End of explanation
repos = pd.read_csv('../data/repositories.csv', parse_dates=[11,12,13])
repos
Explanation: Let's load the collected data and look at an example.
End of explanation
repos.describe()
Explanation: Pandas gives us a quick overview of the basic numerical summaries.
End of explanation
repos.groupby(repos.created_at.dt.year).size().plot(kind='bar')
Explanation: Now let's look at which year these repositories were created in.
End of explanation
top_languages = repos.groupby("language").filter(lambda x: len(x) > 10).groupby("language")
Explanation: We can see that most projects were created in 2013 and 2014; before that GitHub was not yet so popular, and newer projects have not had time to become famous.
We will often work with repositories grouped by language, so to make the analysis easier we keep only the languages with more than 10 repositories.
End of explanation
by_lang_pie = top_languages.size().sort_values().plot(kind='pie',figsize=(7, 7), fontsize=13)
by_lang_pie.set_ylabel("") #Removes the none
Explanation: Now let's look at which programming languages appear most often. We show only those with more than 10 repositories, since otherwise the chart is not readable. We can see that JavaScript dominates, followed by Java, Objective-C and Python.
End of explanation
df = repos.copy()
z = np.polyfit(x=df.loc[:, 'stargazers_count'], y=df['forks_count'], deg=1)
p = np.poly1d(z)
df['trendline'] = p(df.loc[:, 'stargazers_count'])
ax = df.plot.scatter(x='stargazers_count', y='forks_count', color='green')
df.set_index('stargazers_count', inplace=True)
df.trendline.sort_index(ascending=False).plot(ax=ax, color='red')
plt.gca().invert_xaxis()
ax.set_ylim(0, 6000)
ax.set_xlim(0, 30000)
Explanation: Let's also draw a chart showing how the number of forks depends on the number of stars. We can try to model this relationship as a linear function, but we quickly see large deviations. Still, the increasing trend suggests that repositories with more stars also tend to have more forks.
End of explanation
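To put a number on the relationship discussed above, the correlation between stars and forks can be computed directly from the original frame (a small sketch added for illustration):
repos[['stargazers_count', 'forks_count']].corr()  # Pearson correlation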
top_languages[["commit_count"]].mean().sort_values("commit_count").plot(kind='bar')
Explanation: Let's also look at how the number of commits relates to the programming language used. We can see that repositories written in VimL, Shell and Objective-C changed the most, and those in C++, Ruby and CoffeeScript the least.
End of explanation
repos.groupby("license").size().sort_values(ascending=False)
Explanation: A listing of the licenses used in these repositories, sorted by frequency. None means the license is not published, and Other means the project uses its own license.
End of explanation
pushed_pie = repos.groupby(repos.pushed_at >= "2016-10-03").size().plot(kind='pie',figsize=(7, 7), autopct='%.2f%%')
pushed_pie.set_ylabel("") #Removes the none
Explanation: Finally, let's find out how many of these repositories are freshly updated (have changes in the last month). True means the repository has recent changes, False means it does not.
End of explanation |
12,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How many movies are listed in the titles dataframe?
Step1: What are the earliest two films listed in the titles dataframe?
Step2: How many movies have the title "Hamlet"?
Step3: How many movies are titled "North by Northwest"?
Step4: When was the first movie titled "Hamlet" made?
Step5: List all of the "Treasure Island" movies from earliest to most recent.
Step6: How many movies were made in the year 1950?
Step7: How many movies were made in the year 1960?
Step8: How many movies were made from 1950 through 1959?
Step9: In what years has a movie titled "Batman" been released?
How many roles were there in the movie "Inception"?
How many roles in the movie "Inception" are NOT ranked by an "n" value?
But how many roles in the movie "Inception" did receive an "n" value?
Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
Display the entire cast, in "n"-order, of the 1972 film "Sleuth".
Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth".
How many roles were credited in the silent 1921 version of Hamlet?
How many roles were credited in Branagh's 1996 Hamlet?
How many "Hamlet" roles have been listed in all film credits through history?
How many people have played an "Ophelia"?
How many people have played a role called "The Dude"?
How many people have played a role called "The Stranger"?
How many roles has Sidney Poitier played throughout his career?
How many roles has Judi Dench played?
List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year.
List the leading roles that Cary Grant played in the 1940s in order by year.
How many roles were available for actors in the 1950s?
How many roles were available for actresses in the 1950s?
How many leading roles (n=1) were available from the beginning of film history through 1980?
How many non-leading roles were available from the beginning of film history through 1980?
How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank? | Python Code:
titles.tail()
len(titles)
Explanation: How many movies are listed in the titles dataframe?
End of explanation
titles.sort(columns='year', ascending=True).head()[:2]
Explanation: What are the earliest two films listed in the titles dataframe?
End of explanation
titles[titles['title'].str.contains('Hamlet')].sort('year')
Explanation: How many movies have the title "Hamlet"?
End of explanation
len(titles[titles.title == 'North by Northwest'])
Explanation: How many movies are titled "North by Northwest"?
End of explanation
titles[titles['title'] == 'Hamlet'].sort('year')[:1]
Explanation: When was the first movie titled "Hamlet" made?
End of explanation
titles[titles.title == 'Treasure Island'].sort('year')
Explanation: List all of the "Treasure Island" movies from earliest to most recent.
End of explanation
len(titles[titles.year == 1950])
Explanation: How many movies were made in the year 1950?
End of explanation
movies_of_1960 = titles[titles.year == 1960]
len(movies_of_1960)
Explanation: How many movies were made in the year 1960?
End of explanation
moviesOf1950And1959 = titles[(titles.year >= 1950) & (titles.year <= 1959)]
len(moviesOf1950And1959)
Explanation: How many movies were made from 1950 through 1959?
End of explanation
titles.year.value_counts().sort_index().plot()
Explanation: In what years has a movie titled "Batman" been released?
How many roles were there in the movie "Inception"?
How many roles in the movie "Inception" are NOT ranked by an "n" value?
But how many roles in the movie "Inception" did receive an "n" value?
Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
Display the entire cast, in "n"-order, of the 1972 film "Sleuth".
Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth".
How many roles were credited in the silent 1921 version of Hamlet?
How many roles were credited in Branagh's 1996 Hamlet?
How many "Hamlet" roles have been listed in all film credits through history?
How many people have played an "Ophelia"?
How many people have played a role called "The Dude"?
How many people have played a role called "The Stranger"?
How many roles has Sidney Poitier played throughout his career?
How many roles has Judi Dench played?
List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year.
List the leading roles that Cary Grant played in the 1940s in order by year.
How many roles were available for actors in the 1950s?
How many roles were available for actresses in the 1950s?
How many leading roles (n=1) were available from the beginning of film history through 1980?
How many non-leading roles were available from the beginning of film history through 1980?
How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank?
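The role questions above need the companion cast table, which is never loaded in this notebook. A hedged sketch of how a few of them could be answered, assuming a cast.csv with title, year, name, type, character and n columns (file and column names are assumptions, not part of the original homework):
cast = pd.read_csv("cast.csv")  # hypothetical companion file
len(cast[cast.title == 'Inception'])                           # roles in "Inception"
titles[titles.title == 'Batman'].year.unique()                 # years a "Batman" was released
len(cast[(cast.year // 10 == 195) & (cast.type == 'actor')])   # roles for actors in the 1950s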
End of explanation |
12,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conventional "dB-differencing" analysis
Step1: Set params and load clean MVBS data
Step2: dB-differencing operation
Here I used the criteria from Sato et al. 2015 for dB-differencing. The rationale is that this is the latest publication in nearby region and the classification threshold was selected based on trawl-verified animal groups. The classification rules are | Python Code:
import os, sys, glob, re
import datetime as dt
import numpy as np
from matplotlib.dates import date2num,num2date
import h5py
sys.path.insert(0,'..')
sys.path.insert(0,'../mi_instrument/')
import db_diff
import decomp_plot
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
Explanation: Conventional "dB-differencing" analysis
End of explanation
# Set param
ping_time_param_names = ["hour_all","min_all","sec_all"]
ping_time_param_vals = (range(24),range(20),range(0,60,5))
ping_time_param = dict(zip(ping_time_param_names,ping_time_param_vals))
ping_per_day = len(ping_time_param['hour_all'])*len(ping_time_param['min_all'])*len(ping_time_param['sec_all'])
ping_bin_range = 40
depth_bin_range = 10
tvg_correction_factor = 2
ping_per_day_mvbs = ping_per_day/ping_bin_range
MVBS_path = '/media/wu-jung/wjlee_apl_2/ooi_zplsc_new/'
MVBS_fname = '20150817-20151017_MVBS.h5'
f = h5py.File(os.path.join(MVBS_path,MVBS_fname),"r")
MVBS = np.array(f['MVBS'])
depth_bin_size = np.array(f['depth_bin_size'])
ping_time = np.array(f['ping_time'])
f.close()
# db_diff.plot_echogram(MVBS,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
Explanation: Set params and load clean MVBS data
End of explanation
Sv_1 = MVBS[2,:,:]
Sv_2 = MVBS[0,:,:]
yes_1 = ~np.isnan(Sv_1)
yes_2 = ~np.isnan(Sv_2)
Sv_diff_12 = Sv_1 - Sv_2
Sv_diff_12[yes_1 & ~yes_2] = np.inf
Sv_diff_12[~yes_1 & yes_2] = -np.inf
idx_fish = (np.isneginf(Sv_diff_12) | (Sv_diff_12<=2)) & (Sv_diff_12>-16)
idx_zoop = np.isposinf(Sv_diff_12) | ((Sv_diff_12>2) & (Sv_diff_12<30))
idx_other = (Sv_diff_12<=-16) | (Sv_diff_12>=30)
MVBS_fish = np.ma.empty(MVBS.shape)
for ff in range(MVBS.shape[0]):
MVBS_fish[ff,:,:] = np.ma.masked_where(~idx_fish,MVBS[ff,:,:])
MVBS_zoop = np.ma.empty(MVBS.shape)
for ff in range(MVBS.shape[0]):
MVBS_zoop[ff,:,:] = np.ma.masked_where(~idx_zoop,MVBS[ff,:,:])
MVBS_others = np.ma.empty(MVBS.shape)
for ff in range(MVBS.shape[0]):
MVBS_others[ff,:,:] = np.ma.masked_where(~idx_other,MVBS[ff,:,:])
# db_diff.plot_echogram(MVBS_fish,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS_fish,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
plt.gcf()
plt.savefig(os.path.join(MVBS_path,'echogram_day01-62_ek60_fish.png'),dpi=150)
# db_diff.plot_echogram(MVBS_zoop,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS_zoop,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
plt.gcf()
plt.savefig(os.path.join(MVBS_path,'echogram_day01-62_ek60_zoop.png'),dpi=150)
# db_diff.plot_echogram(MVBS_others,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),'magma')
db_diff.plot_echogram(MVBS_others,1,62,5,ping_per_day_mvbs,depth_bin_size,ping_time,(36,8),db_diff.e_cmap)
plt.gcf()
plt.savefig(os.path.join(MVBS_path,'echogram_day01-62_ek60_others.png'),dpi=150)
Explanation: dB-differencing operation
Here I used the criteria from Sato et al. 2015 for dB-differencing. The rationale is that this is the most recent publication from a nearby region, and its classification thresholds were selected based on trawl-verified animal groups. The classification rules are:
- Fish: -16dB < Sv_200-Sv_38 <= 2dB
- Zooplankton: 2dB < Sv_200-Sv_38 < 30dB
- Others: 30dB < Sv_200-Sv_38 or Sv_200-Sv_38 <= -16dB
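As a quick sanity check on these rules (an added sketch, not part of the original analysis), the fraction of MVBS bins falling into each class can be computed from the masks defined earlier:
total_bins = float(Sv_diff_12.size)
print([idx_fish.sum()/total_bins, idx_zoop.sum()/total_bins, idx_other.sum()/total_bins])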
End of explanation |
12,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DATA 643 - Final Project
Sreejaya Nair and Suman K Polavarapu
Description
Step1: Prepare the pySpark Environment
Step2: Initialize Spark Context
Step3: Load and Analyse Data
Step4: Ratings Histogram
Step5: Most popular movies
Step6: Similar Movies
Find similar movies for a given movie using cosine similarity
Step7: Lets find similar movies for Toy Story (Movie ID
Step8: Recommender using MLLib
Training the recommendation model
Step9: Recommendations | Python Code:
import os
import sys
import urllib2
import collections
import matplotlib.pyplot as plt
import math
from time import time, sleep
%pylab inline
Explanation: DATA 643 - Final Project
Sreejaya Nair and Suman K Polavarapu
Description:
Explore the Apache Spark Cluster Computing Framework by analysing the movielens dataset. Provide recommendations using MLLib
End of explanation
spark_home = os.environ.get('SPARK_HOME', None)
if not spark_home:
raise ValueError("Please set SPARK_HOME environment variable!")
# Add the py4j to the path.
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'C:/spark/python/lib/py4j-0.9-src.zip'))
Explanation: Prepare the pySpark Environment
End of explanation
from pyspark.mllib.recommendation import ALS, Rating
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster("local[*]").setAppName("MovieRecommendationsALS").set("spark.executor.memory", "2g")
sc = SparkContext(conf = conf)
Explanation: Initialize Spark Context
End of explanation
def loadMovieNames():
movieNames = {}
for line in urllib2.urlopen("https://raw.githubusercontent.com/psumank/DATA643/master/WK5/ml-100k/u.item"):
fields = line.split('|')
movieNames[int(fields[0])] = fields[1].decode('ascii', 'ignore')
return movieNames
print "\nLoading movie names..."
nameDict = loadMovieNames()
print "\nLoading ratings data..."
data = sc.textFile("file:///C:/Users/p_sum/.ipynb_checkpoints/ml-100k/u.data")
ratings = data.map(lambda x: x.split()[2])
#action -- just to trigger the driver [ lazy evaluation ]
rating_results = ratings.countByValue()
sortedResults = collections.OrderedDict(sorted(rating_results.items()))
for key, value in sortedResults.iteritems():
print "%s %i" % (key, value)
Explanation: Load and Analyse Data
End of explanation
ratPlot = plt.bar(range(len(sortedResults)), sortedResults.values(), align='center')
plt.xticks(range(len(sortedResults)), list(sortedResults.keys()))
ratPlot[3].set_color('g')
print "Ratings Histogram"
Explanation: Ratings Histogram
End of explanation
movies = data.map(lambda x: (int(x.split()[1]), 1))
movieCounts = movies.reduceByKey(lambda x, y: x + y)
flipped = movieCounts.map( lambda (x, y) : (y, x))
sortedMovies = flipped.sortByKey(False)
sortedMoviesWithNames = sortedMovies.map(lambda (count, movie) : (nameDict[movie], count))
results = sortedMoviesWithNames.collect()
subset = results[0:10]
popular_movieNm = [str(i[0]) for i in subset]
popularity_strength = [int(i[1]) for i in subset]
popMovplot = plt.barh(range(len(subset)), popularity_strength, align='center')
plt.yticks(range(len(subset)), popular_movieNm)
popMovplot[0].set_color('g')
print "Most Popular Movies from the Dataset"
Explanation: Most popular movies
End of explanation
ratingsRDD = data.map(lambda l: l.split()).map(lambda l: (int(l[0]), (int(l[1]), float(l[2]))))
ratingsRDD.takeOrdered(10, key = lambda x: x[0])
ratingsRDD.take(4)
# Movies rated by same user. ==> [ user ID ==> ( (movieID, rating), (movieID, rating)) ]
userJoinedRatings = ratingsRDD.join(ratingsRDD)
userJoinedRatings.takeOrdered(10, key = lambda x: x[0])
# Remove dups
def filterDups( (userID, ratings) ):
(movie1, rating1) = ratings[0]
(movie2, rating2) = ratings[1]
return movie1 < movie2
uniqueUserJoinedRatings = userJoinedRatings.filter(filterDups)
uniqueUserJoinedRatings.takeOrdered(10, key = lambda x: x[0])
# Now key by (movie1, movie2) pairs ==> (movie1, movie2) => (rating1, rating2)
def makeMovieRatingPairs((user, ratings)):
(movie1, rating1) = ratings[0]
(movie2, rating2) = ratings[1]
return ((movie1, movie2), (rating1, rating2))
moviePairs = uniqueUserJoinedRatings.map(makeMovieRatingPairs)
moviePairs.takeOrdered(10, key = lambda x: x[0])
#collect all ratings for each movie pair and compute similarity. (movie1, movie2) = > (rating1, rating2), (rating1, rating2) ...
moviePairRatings = moviePairs.groupByKey()
moviePairRatings.takeOrdered(10, key = lambda x: x[0])
#Compute Similarity
def cosineSimilarity(ratingPairs):
numPairs = 0
sum_xx = sum_yy = sum_xy = 0
for ratingX, ratingY in ratingPairs:
sum_xx += ratingX * ratingX
sum_yy += ratingY * ratingY
sum_xy += ratingX * ratingY
numPairs += 1
numerator = sum_xy
denominator = sqrt(sum_xx) * sqrt(sum_yy)
score = 0
if (denominator):
score = (numerator / (float(denominator)))
return (score, numPairs)
moviePairSimilarities = moviePairRatings.mapValues(cosineSimilarity).cache()
moviePairSimilarities.takeOrdered(10, key = lambda x: x[0])
Explanation: Similar Movies
Find similar movies for a given movie using cosine similarity
End of explanation
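The pair scores produced above can be sanity-checked against NumPy's formulation of cosine similarity on a toy pair of co-rating vectors (illustrative sketch only):
a = np.array([5.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 5.0])
print np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))  # same formula as cosineSimilarity above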
scoreThreshold = 0.97
coOccurenceThreshold = 50
inputMovieID = 1 #Toy Story.
# Filter for movies with this sim that are "good" as defined by our quality thresholds.
filteredResults = moviePairSimilarities.filter(lambda((pair,sim)): \
(pair[0] == inputMovieID or pair[1] == inputMovieID) and sim[0] > scoreThreshold and sim[1] > coOccurenceThreshold)
#Top 10 by quality score.
results = filteredResults.map(lambda((pair,sim)): (sim, pair)).sortByKey(ascending = False).take(10)
print "Top 10 similar movies for " + nameDict[inputMovieID]
for result in results:
(sim, pair) = result
# Display the similarity result that isn't the movie we're looking at
similarMovieID = pair[0]
if (similarMovieID == inputMovieID):
similarMovieID = pair[1]
print nameDict[similarMovieID] + "\tscore: " + str(sim[0]) + "\tstrength: " + str(sim[1])
Explanation: Lets find similar movies for Toy Story (Movie ID: 1)
End of explanation
ratings = data.map(lambda l: l.split()).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2]))).cache()
ratings.take(3)
nratings = ratings.count()
nUsers = ratings.keys().distinct().count()
nMovies = ratings.values().distinct().count()
print "We have Got %d ratings from %d users on %d movies." % (nratings, nUsers, nMovies)
# Build the recommendation model using Alternating Least Squares
#Train a matrix factorization model given an RDD of ratings given by users to items, in the form of
#(userID, itemID, rating) pairs. We approximate the ratings matrix as the product of two lower-rank matrices
#of a given rank (number of features). To solve for these features, we run a given number of iterations of ALS.
#The level of parallelism is determined automatically based on the number of partitions in ratings.
start = time()
seed = 5L
iterations = 10
rank = 8
model = ALS.train(ratings, rank, iterations)
duration = time() - start
print "Model trained in %s seconds" % round(duration,3)
Explanation: Recommender using MLLib
Training the recommendation model
End of explanation
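Before asking for recommendations, the fit can be sanity-checked by measuring the mean squared error of the model on the training ratings -- a hedged sketch following the standard MLlib pattern:
testdata = ratings.map(lambda r: (r[0], r[1]))
predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).mean()
print "Training-set Mean Squared Error = " + str(MSE)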
#Lets recommend movies for the user id - 2
userID = 2
print "\nTop 10 recommendations:"
recommendations = model.recommendProducts(userID, 10)
for recommendation in recommendations:
print nameDict[int(recommendation[1])] + \
" score " + str(recommendation[2])
Explanation: Recommendations
End of explanation |
12,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
Step1: 2. Millionaires
What country are most billionaires from? For the top ones, how many billionaires per billion people?
Step2: What's the average wealth of a billionaire? Male? Female?
Step3: Who is the poorest billionaire? Who are the top 10 poorest billionaires?
Step4: What is 'relationship to company'? And what are the most common relationships?
Step5: Most common source of wealth? Male vs. female?
Step6: Given the richest person in a country, what % of the GDP is their wealth?
Step7: Trains stations | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("07-hw-animals.csv")
df.columns
df.head(3)
df.sort_values(by='length', ascending=False).head(3)
df['animal'].value_counts()
dogs = df[df['animal']=='dog']
dogs
df[df['length'] > 40]
df['inches'] = .393701 * df['length']
df
cats = df[df['animal']=='cat']
dogs = df[df['animal']=='dog']
# Display all of the animals that are cats and above 12 inches long.
# First do it using the "cats" variable, then do it using your normal dataframe.
cats[cats['inches'] > 12]
df[(df['animal'] == 'cat') & (df['inches'] > 12)]
cats['length'].describe()[['mean']]
dogs['length'].describe()[['mean']]
animals = df.groupby( [ "animal"] )
animals['length'].mean()
plt.style.use('ggplot')
dogs['length'].hist()
labels = dogs['name']
sizes = dogs['length']
explode = (0.1, 0.2, 0.2) # fun
plt.pie(sizes, explode=explode, labels=labels,
autopct='%1.2f%%', shadow=True, startangle=30)
#cf: recent.head().plot(kind='pie', y='networthusbillion', labels=recent['name'].head(), legend=False)
#Make a horizontal bar graph of the length of the animals, with their name as the label
df.plot(kind='barh', x='name', y='length', legend=False)
#Make a sorted horizontal bar graph of the cats, with the larger cats on top.
cats.sort_values(by='length').plot(kind='barh', x='name', y='length', legend=False)
Explanation: Contents:
1 Cats and dogs
2 Millionaires
3 Trains stations
1. Cats and dogs
End of explanation
df2 = pd.read_excel("richpeople.xlsx")
df2.keys()
df2['citizenship'].value_counts().head(10)
# population: data from http://data.worldbank.org/indicator/SP.POP.TOTL
df_pop = pd.read_csv("world_pop.csv", header=2)
df_pop.keys()
#recent_pop = df_pop['2015']
#join: see http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
# Merge billionaires with country population on the country code.
# The World Bank file uses 'Country Code' plus year columns such as '2015'; the billionaire sheet uses 'countrycode'.
millionaires_and_pop = df2.merge(df_pop, left_on='countrycode', right_on='Country Code', how='left')
millionaires_and_pop[['name', 'citizenship', 'countrycode', '2015']].head()
#millionaires_and_pop['citizenship'].value_counts().head(10)
Explanation: 2. Millionaires
What country are most billionaires from? For the top ones, how many billionaires per billion people?
End of explanation
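With the merged frame above, the "billionaires per billion people" part of the question can be approximated as follows (a sketch that assumes the World Bank '2015' population column survived the merge):
per_country_counts = millionaires_and_pop.groupby('countrycode').agg({'name': 'count', '2015': 'first'})
per_country_counts['billionaires_per_billion'] = per_country_counts['name'] / (per_country_counts['2015'] / 1e9)
per_country_counts.sort_values('name', ascending=False).head(10)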
print("The average wealth of a billionaire (in billions) is:", df2['networthusbillion'].describe()['mean'])
print("The average wealth of a male billionaire is:", df2[df2['gender'] == 'male']['networthusbillion'].describe()['mean'])
print("The average wealth of a female billionaire is:", df2[df2['gender'] == 'female']['networthusbillion'].describe()['mean'])
Explanation: What's the average wealth of a billionaire? Male? Female?
End of explanation
print('The poorest billionaire is:', df2.get_value(df2.sort_values('networthusbillion', ascending=True).index[0],'name'))
df2.sort_values('networthusbillion', ascending=True).head(10)
Explanation: Who is the poorest billionaire? Who are the top 10 poorest billionaires?
End of explanation
#relationship_values = set
relationship_list = df2['relationshiptocompany'].tolist()
relationship_set = set(relationship_list)
relationship_set = [s.strip() for s in relationship_set if s == s] # to remove a naughty NaN and get rid of dumb whitespaces
print("The relationships are:", str.join(', ', relationship_set))
print('\nThe five most common relationships are:')
df2['relationshiptocompany'].value_counts().head(5)
Explanation: What is 'relationship to company'? And what are the most common relationships?
End of explanation
print("The three most common sources of wealth are:\n" + str(df2['typeofwealth'].value_counts().head(3)))
print("\nFor men, they are:\n" + str(df2[df2['gender'] == 'male']['typeofwealth'].value_counts().head(3)))
print("\nFor women, they are:\n" + str(df2[df2['gender'] == 'female']['typeofwealth'].value_counts().head(3)))
Explanation: Most common source of wealth? Male vs. female?
End of explanation
per_country = df2.groupby(['citizenship'])
#per_country['networthusbillion'].max()
#per_country['networthusbillion'].idxmax() # DataFrame.max(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)
# per_country['gdpcurrentus']
df2['percofgdp'] = (100*1000000000*df2['networthusbillion']) / (df2['gdpcurrentus'])
#pd.Series(["{0:.2f}%".format(percofgdp)])
print("NB: most countries don't have their GDP in the 'gdpcurrentus' column.")
df2.loc[per_country['networthusbillion'].idxmax()][['name', 'networthusbillion', 'percofgdp']]
Explanation: Given the richest person in a country, what % of the GDP is their wealth?
End of explanation
df_trains = pd.read_csv("stations.csv", delimiter=';')
df_trains
Explanation: Trains stations
End of explanation |
12,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-block alert-info" style="margin-top
Step1: <a id="ref0"></a>
<h2> Helper Functions </h2>
Functions used to plot
Step2: dataset object
Step3: <a id='ref1'> </a>
<h2>Neural Network Module and Function for Training </h2>
Neural Network Module using <code> ModuleList() </code>
Step4: A function used to train.
Step5: A function used to calculate accuracy
Step6: <a id="ref2"></a>
<h2>Train and Validate the Model </h2>
Create a dataset object
Step7: Create a network to classify three classes with 1 hidden layer with 50 neurons
Step8: Create a network to classify three classes with two hidden layers of 10 neurons each | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from matplotlib.colors import ListedColormap
torch.manual_seed(1)
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/NotebooksPython101"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1 align=center><font size = 5>Deeper Neural Networks with nn.ModuleList()
</font></h1>
# Table of Contents
In this lab, you will build deeper neural networks using nn.ModuleList()
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">Helper Functions </a></li>
<li><a href="#ref1">Neural Network Module and Function for Training</a></li>
<li><a href="#ref2"> Train and Validate the Model <a></li>
<li><a href="#ref3">Practice Question</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>25 min</strong>
</div>
<hr>
You'll need the following libraries:
End of explanation
def plot_decision_regions_3class(model,data_set):
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA','#00AAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00','#00AAFF'])
X=data_set.x.numpy()
y=data_set.y.numpy()
h = .02
x_min, x_max = X[:, 0].min()-0.1 , X[:, 0].max()+0.1
y_min, y_max = X[:, 1].min()-0.1 , X[:, 1].max() +0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h))
XX=torch.torch.Tensor(np.c_[xx.ravel(), yy.ravel()])
_,yhat=torch.max(model(XX),1)
yhat=yhat.numpy().reshape(xx.shape)
plt.pcolormesh(xx, yy, yhat, cmap=cmap_light)
plt.plot(X[y[:]==0,0],X[y[:]==0,1],'ro',label='y=0')
plt.plot(X[y[:]==1,0],X[y[:]==1,1],'go',label='y=1')
plt.plot(X[y[:]==2,0],X[y[:]==2,1],'o',label='y=2')
plt.title("decision region")
plt.legend()
Explanation: <a id="ref0"></a>
<h2> Helper Functions </h2>
Functions used to plot:
End of explanation
from torch.utils.data import Dataset, DataLoader
class Data(Dataset):
# modified from: http://cs231n.github.io/neural-networks-case-study/
def __init__(self,K=3,N=500):
D = 2
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
ix = range(N*j,N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
self.y=torch.from_numpy(y).type(torch.LongTensor)
self.x=torch.from_numpy(X).type(torch.FloatTensor)
self.len=y.shape[0]
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
def plot_stuff(self):
plt.plot(self.x[self.y[:]==0,0].numpy(),self.x[self.y[:]==0,1].numpy(),'o',label="y=0")
plt.plot(self.x[self.y[:]==1,0].numpy(),self.x[self.y[:]==1,1].numpy(),'ro',label="y=1")
plt.plot(self.x[self.y[:]==2,0].numpy(),self.x[self.y[:]==2,1].numpy(),'go',label="y=2")
plt.legend()
Explanation: dataset object
End of explanation
class Net(nn.Module):
def __init__(self,Layers):
super(Net,self).__init__()
self.hidden = nn.ModuleList()
for input_size,output_size in zip(Layers,Layers[1:]):
self.hidden.append(nn.Linear(input_size,output_size))
def forward(self,activation):
L=len(self.hidden)
for (l,linear_transform) in zip(range(L),self.hidden):
if l<L-1:
activation =F.relu(linear_transform (activation))
else:
activation =linear_transform (activation)
return activation
Explanation: <a id='ref1'> </a>
<h2>Neural Network Module and Function for Training </h2>
Neural Network Module using <code> ModuleList() </code>
End of explanation
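The zip(Layers, Layers[1:]) idiom above pairs consecutive layer sizes into (in_features, out_features) arguments for nn.Linear -- a quick sketch of what the constructor builds for a small configuration:
print(list(zip([2, 50, 3], [50, 3])))  # [(2, 50), (50, 3)]
print(Net([2, 50, 3]))                 # ModuleList holding Linear(2, 50) and Linear(50, 3)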
def train(data_set,model,criterion, train_loader, optimizer, epochs=100):
LOSS=[]
ACC=[]
for epoch in range(epochs):
for x,y in train_loader:
optimizer.zero_grad()
yhat=model(x)
loss=criterion(yhat,y)
loss.backward()
optimizer.step()
LOSS.append(loss.item())
ACC.append(accuracy(model,data_set))
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.plot(LOSS,color=color)
ax1.set_xlabel('epoch',color=color)
ax1.set_ylabel('total loss',color=color)
ax1.tick_params(axis='y', color=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('accuracy', color=color) # we already handled the x-label with ax1
ax2.plot( ACC, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
return LOSS
Explanation: A function used to train.
End of explanation
def accuracy(model,data_set):
_,yhat=torch.max(model(data_set.x),1)
return (yhat==data_set.y).numpy().mean()
Explanation: A function used to calculate accuracy
End of explanation
data_set=Data()
data_set.plot_stuff()
data_set.y=data_set.y.view(-1)
Explanation: <a id="ref2"></a>
<h2>Train and Validate the Model </h2>
Create a dataset object:
End of explanation
Layers=[2,50,3]
model=Net(Layers)
learning_rate=0.10
optimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loader=DataLoader(dataset=data_set,batch_size=20)
criterion=nn.CrossEntropyLoss()
LOSS=train(data_set,model,criterion, train_loader, optimizer, epochs=100)
plot_decision_regions_3class(model,data_set)
Explanation: Create a network to classify three classes with 1 hidden layer with 50 neurons
End of explanation
Layers=[2,10,10,3]
model=Net(Layers)
learning_rate=0.01
optimizer=torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loader=DataLoader(dataset=data_set,batch_size=20)
criterion=nn.CrossEntropyLoss()
LOSS=train(data_set,model,criterion, train_loader, optimizer, epochs=1000)
plot_decision_regions_3class(model,data_set)
Explanation: Create a network to classify three classes with two hidden layers of 10 neurons each
End of explanation |
12,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input/R5KrjnANiKVhLWAkpXhNBe'
import time
import pylab as pl
from IPython import display
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
return tf.placeholder(tf.float32, [None, image_width, image_height, image_channels], name='input'),\
tf.placeholder(tf.float32, [None, z_dim], name='z_input'),\
tf.placeholder(tf.float32, name='learn_rate')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple (tensor of real input images, tensor of z data, learning rate)
End of explanation
def discriminator(images, reuse=False, n_units=128, alpha=0.2):
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
with tf.variable_scope('discriminator', reuse=reuse):
x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
x4 = tf.layers.conv2d(relu3, 512, 5, strides=2, padding='same')
bn4 = tf.layers.batch_normalization(x4, training=True)
relu4 = tf.maximum(alpha * bn4, bn4)
# relu4 is 2x2x512 after four stride-2 convolutions on a 28x28 input
logits = tf.layers.dense(tf.reshape(relu4, (-1, 2*2*512)), 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
def generator(z, out_channel_dim, is_train=True, alpha = 0.1):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
with tf.variable_scope('generator', reuse=not is_train):
# First fully connected layer
x1 = tf.layers.dense(z, 7*7*512)
x1 = tf.reshape(x1, (-1, 7, 7, 512))
x1 = tf.layers.batch_normalization(x1, training=is_train)
x1 = tf.maximum(alpha * x1, x1)
x2 = tf.layers.conv2d_transpose(x1, 128, 5, strides=1, padding='same')
x2 = tf.layers.batch_normalization(x2, training=is_train)
x2 = tf.maximum(alpha * x2, x2)
x3 = tf.layers.conv2d_transpose(x2, 64, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=is_train)
x3 = tf.maximum(alpha * x3, x3)
# Output layer
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides=2, padding='same')
return tf.tanh(logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
gen = generator(input_z, out_channel_dim)
disc_real, disc_logits_real = discriminator(input_real)
disc_fake, disc_logits_fake = discriminator(gen, reuse=True)
# One-sided label smoothing: use 0.9 instead of 1.0 as the target for real images
disc_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=disc_logits_real, \
labels=tf.ones_like(disc_real)*0.9))
disc_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=disc_logits_fake, \
labels=tf.zeros_like(disc_fake)))
gen_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=disc_logits_fake, \
labels=tf.ones_like(disc_fake)))
disc_loss = disc_loss_real + disc_loss_fake
return disc_loss, gen_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
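In expectation form, the cross-entropy losses built in model_loss above correspond (up to the 0.9 label smoothing applied to the real images) to the standard non-saturating GAN objective:
$$\mathcal{L}_D = -\mathbb{E}_{x}\big[\log D(x)\big] - \mathbb{E}_{z}\big[\log\big(1 - D(G(z))\big)\big], \qquad \mathcal{L}_G = -\mathbb{E}_{z}\big[\log D(G(z))\big]$$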
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
trainable_variables = tf.trainable_variables()
d_vars = [var for var in trainable_variables if var.name.startswith('discriminator')]
g_vars = [var for var in trainable_variables if var.name.startswith('generator')]
# Batch normalization registers its moving-average updates in UPDATE_OPS, so run them before each optimizer step
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
return tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars), \
tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
def plot_losses_graph(steps, gen_loss, disc_loss):
pl.ylim(min(min(gen_loss), min(disc_loss)), 2)
pl.xlim(min(steps), max(steps))
pl.plot(steps, gen_loss, label = 'Generator Loss', color = 'green')
pl.plot(steps, disc_loss, label = 'Discriminator Loss', color = 'red')
pl.legend(loc='upper right')
pl.xlabel('Steps')
display.clear_output(wait=True)
display.display(pl.gcf())
pl.gcf().clear()
time.sleep(0.1)
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
image_channels = 3 if data_image_mode == 'RGB' else 1
input_real, input_z, _ = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
d_loss, g_loss = model_loss(input_real, input_z, data_shape[3])
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
steps = 0
samples, losses = [], []
steps_list, gen_loss, disc_loss = [], [], []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
batch_images = batch_images * 2  # rescale from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
_ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z})
# Run the generator update twice per discriminator update to keep the two losses roughly balanced
_ = sess.run(g_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_opt, feed_dict={input_real: batch_images, input_z: batch_z})
steps += 1
if steps % 10 == 0:
# Every 10 steps, get the losses and print them out
train_loss_d = d_loss.eval({input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
steps_list.append(steps/10)
gen_loss.append(train_loss_g)
disc_loss.append(train_loss_d)
plot_losses_graph(steps_list, gen_loss, disc_loss)
print('Epoch {}/{} - Step {}: Discriminator Loss - {:>3.3f}, Generator Loss - {:>3.3f}'\
.format(epoch_i+1, epoch_count, steps, train_loss_d, train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
show_generator_output(sess, 9, input_z, data_shape[3], data_image_mode)
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
batch_size = 64
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 20
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 3
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation |
12,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hadamard Multitask GP Regression
Introduction
This notebook demonstrates how to perform "Hadamard" multitask regression.
This differs from the multitask gp regression example notebook in one key way
Step1: Set up training data
In the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task.
We'll have two functions - a sine function (y1) and a cosine function (y2).
Step2: Set up a Hadamard multitask model
The model should be somewhat similar to the ExactGP model in the simple regression example.
The differences
Step3: Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
See the simple regression example for more info on this step.
Step4: Make predictions with the model | Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: Hadamard Multitask GP Regression
Introduction
This notebook demonstrates how to perform "Hadamard" multitask regression.
This differs from the multitask gp regression example notebook in one key way:
Here, we assume that we have observations for one task per input. For each input, we specify the task of the input that we observe. (The kernel that we learn is expressed as a Hadamard product of an input kernel and a task kernel)
In the other notebook, we assume that we observe all tasks per input. (The kernel in that notebook is the Kronecker product of an input kernel and a task kernel).
Multitask regression, first introduced in this paper, learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusoidal).
Given inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by
$$ k([x, i], [x', j]) = k_\text{inputs}(x, x') * k_\text{tasks}(i, j)
$$
where $k_\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.
$k_\text{task}$ is a special kernel - the IndexKernel - which is a lookup table containing inter-task covariance.
End of explanation
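For reference, the IndexKernel mentioned above is typically parameterized as a low-rank-plus-diagonal lookup table, so with rank $r$ the task covariance entries are
$$k_\text{tasks}(i, j) = \big(BB^\top + \operatorname{diag}(\mathbf{v})\big)_{ij}, \qquad B \in \mathbb{R}^{\text{num tasks} \times r},\ \mathbf{v} \ge 0,$$
which is learned jointly with the RBF hyperparameters below.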
train_x1 = torch.rand(50)
train_x2 = torch.rand(50)
train_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2
train_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2
Explanation: Set up training data
In the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task.
We'll have two functions - a sine function (y1) and a cosine function (y2).
End of explanation
class MultitaskGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.RBFKernel()
# We learn an IndexKernel for 2 tasks
# (so we'll actually learn 2x2=4 tasks with correlations)
self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=2, rank=1)
def forward(self,x,i):
mean_x = self.mean_module(x)
# Get input-input covariance
covar_x = self.covar_module(x)
# Get task-task covariance
covar_i = self.task_covar_module(i)
# Multiply the two together to get the covariance we want
covar = covar_x.mul(covar_i)
return gpytorch.distributions.MultivariateNormal(mean_x, covar)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
train_i_task1 = torch.full_like(train_x1, dtype=torch.long, fill_value=0)
train_i_task2 = torch.full_like(train_x2, dtype=torch.long, fill_value=1)
full_train_x = torch.cat([train_x1, train_x2])
full_train_i = torch.cat([train_i_task1, train_i_task2])
full_train_y = torch.cat([train_y1, train_y2])
# Here we have two items that we're passing in as train_inputs
model = MultitaskGPModel((full_train_x, full_train_i), full_train_y, likelihood)
Explanation: Set up a Hadamard multitask model
The model should be somewhat similar to the ExactGP model in the simple regression example.
The differences:
The model takes two inputs: the inputs (x) and indices. The indices indicate which task the observation is for.
Rather than just using an RBFKernel, we're using that in conjunction with an IndexKernel.
We don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.)
End of explanation
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iterations = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iterations):
optimizer.zero_grad()
output = model(full_train_x, full_train_i)
loss = -mll(output, full_train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iterations, loss.item()))
optimizer.step()
Explanation: Training the model
In the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.
See the simple regression example for more info on this step.
End of explanation
# Set into eval mode
model.eval()
likelihood.eval()
# Initialize plots
f, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3))
# Test points every 0.02 in [0,1]
test_x = torch.linspace(0, 1, 51)
test_i_task1 = torch.full_like(test_x, dtype=torch.long, fill_value=0)
test_i_task2 = torch.full_like(test_x, dtype=torch.long, fill_value=1)
# Make predictions - one task at a time
# We control the task we care about using the indices
# The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances)
# See https://arxiv.org/abs/1803.06058
with torch.no_grad(), gpytorch.settings.fast_pred_var():
observed_pred_y1 = likelihood(model(test_x, test_i_task1))
observed_pred_y2 = likelihood(model(test_x, test_i_task2))
# Define plotting function
def ax_plot(ax, train_y, train_x, rand_var, title):
# Get lower and upper confidence bounds
lower, upper = rand_var.confidence_region()
# Plot training data as black stars
ax.plot(train_x.detach().numpy(), train_y.detach().numpy(), 'k*')
# Predictive mean as blue line
ax.plot(test_x.detach().numpy(), rand_var.mean.detach().numpy(), 'b')
# Shade in confidence
ax.fill_between(test_x.detach().numpy(), lower.detach().numpy(), upper.detach().numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
ax.set_title(title)
# Plot both tasks
ax_plot(y1_ax, train_y1, train_x1, observed_pred_y1, 'Observed Values (Likelihood)')
ax_plot(y2_ax, train_y2, train_x2, observed_pred_y2, 'Observed Values (Likelihood)')
Explanation: Make predictions with the model
End of explanation |
12,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step7: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
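Before the graded exercise below, here is a sketch of how the second-order equation is recast as a first-order system for odeint; the function name derivs_sketch is used so it does not collide with the derivs you are asked to write yourself.
def derivs_sketch(y, t, a, b, omega0):
    # y = [theta, omega]; returns [dtheta/dt, domega/dt] for the damped, driven pendulum
    theta, omega = y
    dtheta = omega
    domega = -g / l * np.sin(theta) - a * omega - b * np.sin(omega0 * t)
    return [dtheta, domega]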
def derivs(y, t, a, b, omega0):
Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derviatives at t[i]: [dtheta[i],domega[i]].
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
End of explanation
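For reference, the tolerances discussed above are passed straight through to odeint as keyword arguments; a sketch using the derivs_sketch defined earlier and the suggested starting tolerances looks like this:
y0 = [np.pi, 0.0]
y_sketch = odeint(derivs_sketch, y0, t, args=(0.0, 0.0, 0.0), atol=1e-3, rtol=1e-2)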
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
Integrate the damped, driven pendulum and make a phase plot of the solution.
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
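The interact exploration asked for in the next exercise can be wired up along these lines (a sketch, assuming plot_pendulum has been filled in; each tuple gives the min, max, and step of a slider):
interact(plot_pendulum,
         a=(0.0, 1.0, 0.1),
         b=(0.0, 10.0, 0.1),
         omega0=(0.0, 10.0, 0.1));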
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |
12,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Astropy quantities
Astropy quantities are a great way to handle all sorts of messy unit conversions. Careful unit conversions save lives! https
Step1: A brief word of warning, the pretty printing shown above will take longer than just printing out the values of an array. This is unnoticable in this case, but is evident if you try to print a large array of quantities.
Step2: Constants
Physical constants are found in the astropy.constants module, and work just like units.
Step3: Calculations with quantity arrays
Step4: Quantities as sanity checks
Step5: Unit equivalencies
Equivalencies allow you to do unit conversions under certain physical assumptions. For instance, it makes sense to talk about converting wavelength to frequency when you are discussing the properties of light waves in a vacuum. See http
Step6: Spectral energy density equivalencies
Step7: Other cool equivalencies
Doppler shifts (for both radio velocities and optical velocities), dimensionless angles, parallax.
Words of warning
Quantity arrays will often break functions that aren't prepared for them. Simple numpy operations still work, but for more complicated routines you'll have to convert to the units you want and then take the underlying array with the quantity.value attribute.
Step8: Even when you think you have a dimensionless array, it can still be a dimensionless quantity.
Step9: Using units in your own code
You can use the decorator quantity_input as a clean way of ensuring your functions get the proper input.
Step10: SkyCoord
The SkyCoord class, from astropy.coordinates, is a convenient way of dealing with astronomical coordinate systems.
http
Step11: The attributes ra and dec are Angles. They are subclasses of Quantity, and so they behave similarly, but have more specific functionality. See http
Step12: Matching coordinates
There are lots of specific use cases outlined here, but let's go over a simple catalog matching exercise. | Python Code:
print(type(u.Msun))
u.Msun
Explanation: Astropy quantities
Astropy quantities are a great way to handle all sorts of messy unit conversions. Careful unit conversions save lives! https://en.wikipedia.org/wiki/Gimli_Glider
The simplest way to create a new quantity object is to multiply or divide a number by a Unit instance.
End of explanation
mass = 1 * u.Msun
print(type(mass))
mass
# quantities subclass numpy ndarray, so you can handle arrays of quantities like you would
# any other array object
mass.__class__.__bases__
# we can convert units to other equivalent units
mass.to(u.kg)
# there are shortcuts for converting to the relevant system, regardless of what type of quantity it is
mass.cgs
mass.si
# we can inspect their unit and their numeric value with that unit
print(mass.value, mass.unit)
# calculations with quantities can produce quantities with new units
average_density = mass / (4 / 3 * np.pi * u.Rearth ** 3)
average_density.cgs
Explanation: A brief word of warning, the pretty printing shown above will take longer than just printing out the values of an array. This is unnoticable in this case, but is evident if you try to print a large array of quantities.
End of explanation
# Newton's constant
c.G
# Planck's constant
c.h
# speed of light
c.c
Explanation: Constants
Physical constants are found in the astropy.constants module, and work just like units.
End of explanation
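As a quick worked example combining constants and units, the same machinery reproduces the familiar surface gravity of the Earth:
# ~9.8 m / s2, using the G, M_earth and R_earth constants
g_earth = (c.G * c.M_earth / c.R_earth ** 2).to(u.m / u.s ** 2)
print(g_earth)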
# made into a quantity array by multiplying numeric array by unit
R = np.linspace(1, 5) * u.Rearth
v = np.sqrt(2 * c.G * u.Msun / R)
print(v)
v = v.to(u.km / u.s)
print(v)
plt.plot(R, v)
plt.xlabel(r"Radius [R$_\oplus$]")
plt.ylabel(r"Escape velocity [km s$^{-1}$]")
Explanation: Calculations with quantity arrays
End of explanation
obscure_quantity = 42 * c.G * c.m_e ** 2 / c.k_B ** 2 * c.c ** 3 * (5700 * u.K) ** -2 * u.Msun / u.Mpc
obscure_quantity
# what the heck is a m^6 kg Msun / (Mpc J^2 s^5)??
obscure_quantity.decompose()
# will fail!
obscure_quantity.to(u.m)
# addition works for like units
(1 * u.m) + (1 * u.cm)
# and fails for the wrong dimensions
(1 * u.m) + (1 * u.s)
Explanation: Quantities as sanity checks
End of explanation
wavelengths = np.linspace(0.1, 1, 100) * u.micron
# will fail without the correct equivalency passed in!
frequencies = wavelengths.to(u.THz, equivalencies=u.spectral())
plt.plot(wavelengths, frequencies)
plt.xlabel(r"$\lambda$ [$\mu$m]")
plt.ylabel(r"$\nu$ [THz]")
Explanation: Unit equivalencies
Equivalencies allow you to do unit conversions under certain physical assumptions. For instance, it makes sense to talk about converting wavelength to frequency when you are discussing the properties of light waves in a vacuum. See http://docs.astropy.org/en/stable/units/equivalencies.html#unit-equivalencies.
Spectral equivalence
End of explanation
intensity_unit = blackbody_nu(wavelengths[0], temperature=1e3 * u.K).unit
wavelengths = np.logspace(-1, 1, 100) * u.micron
temperatures = np.linspace(5e3, 1e4, 5) * u.K
for T in temperatures:
plt.plot(wavelengths, blackbody_nu(wavelengths, temperature=T),
label='{:.2e}'.format(T))
plt.xscale('log')
plt.legend(loc='best')
plt.xlabel(r'$\lambda$ [$\mu$m]')
plt.ylabel('$I_\\nu$ [{}]'.format(texify(intensity_unit)))
intensity_unit = blackbody_lambda(wavelengths[0], temperature=1e3 * u.K).unit
for T in temperatures:
plt.plot(wavelengths, blackbody_lambda(wavelengths, temperature=T),
label='{:.2e}'.format(T))
plt.xscale('log')
plt.legend(loc='best')
plt.xlabel(r'$\lambda$ [$\mu$m]')
plt.ylabel('$I_\\lambda$ [{}]'.format(texify(intensity_unit)))
T = 1e4 * u.K
solid_angle = ((1 * u.arcsec) ** 2).to(u.sr)
f_nu = blackbody_nu(wavelengths, temperature=T) * solid_angle
f_lambda = blackbody_lambda(wavelengths, temperature=T) * solid_angle
print(f_nu.unit)
print(f_lambda.unit)
# I_nu.to(I_lambda.unit) # would fail
# for conversion of spectral energy density, we need to specify what part of the spectra we're looking at
f_lambda_converted = f_nu.to(f_lambda.unit, equivalencies=u.spectral_density(wavelengths))
print(f_lambda_converted.unit)
# shouldn't raise any exceptions!
assert np.all(np.isclose(f_lambda.value, f_lambda_converted.value))
Explanation: Spectral energy density equivalencies
End of explanation
# e.g., from the example above
np.isclose(f_lambda, f_lambda_converted)
Explanation: Other cool equivalencies
Doppler shifts (for both radio velocities and optical velocities), dimensionless angles, parallax.
Words of warning
Quantity arrays will often break functions that aren't prepared for them. Simple numpy operations still work, but for more complicated routines you'll have to convert to the units you want and then take the underlying array with the quantity.value attribute.
End of explanation
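As a sketch of the Doppler equivalencies mentioned above, a frequency can be converted to a radio velocity once a rest frequency is supplied (the CO(1-0) rest frequency used here is just an assumed example value):
rest_freq = 115.27120 * u.GHz
observed = 115.0 * u.GHz
velocity = observed.to(u.km / u.s, equivalencies=u.doppler_radio(rest_freq))
print(velocity)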
print((1 * u.m) / (2 * u.m), type((1 * u.m) / (2 * u.m)))
Explanation: Even when you think you have a dimensionless array, it can still be a dimensionless quantity.
End of explanation
@u.quantity_input(angle=u.arcsec, distance=u.Mpc)
def angle_to_size(angle, distance):
return angle.to(u.radian).value * distance
# this should work
angle_to_size(1 * u.arcsec, 25 * u.Mpc).to(u.kpc)
# quantity_input only checks for convertibility, not that it's the same unit
angle_to_size(1 * u.arcmin, 25 * u.Mpc).to(u.kpc)
# this should raise an error
angle_to_size(1 * u.m, 25 * u.Mpc)
Explanation: Using units in your own code
You can use the decorator quantity_input as a clean way of ensuring your functions get the proper input.
End of explanation
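The dimensionless_angles equivalency mentioned earlier offers an alternative to the manual .to(u.radian).value conversion inside angle_to_size; a sketch of the same small-angle size calculation:
size = (1 * u.arcsec * 25 * u.Mpc).to(u.kpc, equivalencies=u.dimensionless_angles())
print(size)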
coord = SkyCoord(45, 30, unit=u.deg)
# ICRS is the reference frame
coord
# we can transform between coordinate frames
coord.fk4
coord.fk5
coord.galactic
# latitude and longitude are accessed with ra and dec (when in icrs or fk frames)
coord.ra
coord.dec
Explanation: SkyCoord
The SkyCoord class, from astropy.coordinates, is a convenient way of dealing with astronomical coordinate systems.
http://docs.astropy.org/en/stable/coordinates/index.html
End of explanation
print(coord.to_string())
print(coord.to_string('dms'))
print(coord.to_string('hmsdms'))
print(coord.to_string('hmsdms', sep=':'))
print(coord.to_string('hmsdms', sep=' '))
Explanation: The attributes ra and dec are Angles. They are subclasses of Quantity, and so they behave similarly, but have more specific functionality. See http://docs.astropy.org/en/stable/coordinates/angles.html#working-with-angles for more details.
You can get nice string representations of angles for all your inane legacy software requirements.
End of explanation
# need network connection
center_coord = SkyCoord.from_name('M31')
center_coord
# some mock coordinates
n = 500
ra_values = np.random.randn(n) + center_coord.ra.deg
dec_values = np.random.randn(n) + center_coord.dec.deg
coords = SkyCoord(ra_values, dec_values, unit=u.deg)
plt.scatter(coords.ra.deg, coords.dec.deg, s=100,
edgecolor='k', label='Parent sample')
plt.xlim(plt.xlim()[::-1]) # ra increases right to left
plt.xlabel("Right ascension [deg]")
plt.ylabel("Declination [deg]")
# mock measurements
n_sample = 100
astrometric_noise = 1 * u.arcsec
sample_indices = np.random.choice(np.arange(len(coords)), n_sample)
sample_ra = coords[sample_indices].ra.deg
sample_dec = coords[sample_indices].dec.deg
angles = 2 * np.pi * np.random.rand(n_sample)
dr = astrometric_noise.to(u.deg).value * np.random.randn(n_sample)
dx = np.cos(angles) * dr - np.sin(angles) * dr
dy = np.sin(angles) * dr + np.cos(angles) * dr
sample_coords = SkyCoord(sample_ra + dx, sample_dec + dy, unit=u.deg)
plt.scatter(coords.ra.deg, coords.dec.deg, s=100,
edgecolor='k', marker='o', alpha=0.8, label='Parent sample')
plt.scatter(sample_coords.ra.deg, sample_coords.dec.deg, s=100,
edgecolor='k', marker='v', alpha=0.8, label='Child sample')
plt.xlim(plt.xlim()[::-1]) # ra increases right to left
plt.xlabel("Right ascension [deg]")
plt.ylabel("Declination [deg]")
plt.legend(bbox_to_anchor=(1, 1))
# match_to_catalog_sky will return indices into coords of the closest matching objects,
# the angular separation, and the physical distance (ignored here)
idx, sep, dist = sample_coords.match_to_catalog_sky(coords)
ideal_sep = astrometric_noise.to(u.deg) * np.random.randn(int(1e6))
plt.hist(np.abs(ideal_sep.to(u.arcsec)), histtype='step', lw=4, bins='auto', normed=True, label='Ideal')
plt.hist(sep.arcsec, histtype='step', lw=4, bins='auto', normed=True, label='Data')
plt.xlim(0, 3)
plt.xlabel("Separation [arcsec]")
plt.legend(loc='best')
plt.scatter(coords[idx].ra.deg, coords[idx].dec.deg, s=100,
edgecolor='k', marker='o', alpha=0.8, label='Matched parent sample')
plt.scatter(sample_coords.ra.deg, sample_coords.dec.deg, s=100,
edgecolor='k', marker='v', alpha=0.8, label='Child sample')
plt.xlim(plt.xlim()[::-1]) # ra increases right to left
plt.xlabel("Right ascension [deg]")
plt.ylabel("Declination [deg]")
plt.legend(bbox_to_anchor=(1, 1))
Explanation: Matching coordinates
There are lots of specific use cases outlined here, but let's go over a simple catalog matching exercise.
End of explanation |
12,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
12,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Density Estimation
Step1: Introducing Gaussian Mixture Models
We previously saw an example of K-Means, which is a clustering algorithm which is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution
Step2: Gaussian mixture models will allow us to approximate this density
Step3: Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covariances_, and weights_ attributes
Step4: These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
Step5: Let's take a look at these as a function of the number of gaussians
Step6: It appears that for both the AIC and BIC, 4 components is preferred.
Example
Step7: Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y
Step8: The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed
Step9: And here are the non-outliers which were spuriously labeled outliers
Step10: Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point! | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Density Estimation: Gaussian Mixture Models
Here we'll explore Gaussian Mixture Models, which is an unsupervised clustering & density estimation technique.
We'll start with our standard set of initial imports
End of explanation
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, density=True)
plt.xlim(-10, 20);
Explanation: Introducing Gaussian Mixture Models
We previously saw an example of K-Means, which is a clustering algorithm which is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both clustering and density estimation.
For example, imagine we have some one-dimensional data in a particular distribution:
End of explanation
from sklearn.mixture import GaussianMixture as GMM
X = x[:, np.newaxis]
clf = GMM(4, max_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
Explanation: Gaussian mixture models will allow us to approximate this density:
End of explanation
clf.means_
clf.covariances_
clf.weights_
plt.hist(x, 80, density=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covariances_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
Explanation: Note that this density is fit using a mixture of Gaussians, which we can examine by looking at the means_, covariances_, and weights_ attributes:
End of explanation
print(clf.bic(X))
print(clf.aic(X))
Explanation: These individual Gaussian distributions are fit using an expectation-maximization method, much as in K means, except that rather than explicit cluster assignment, the posterior probability is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm provably converges to the optimum (though the optimum is not necessarily global).
How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
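For reference, both criteria follow directly from the total log-likelihood and the number of free parameters; a hand-rolled sketch for a full-covariance mixture (the built-in clf.aic and clf.bic used above do the same bookkeeping):
def gmm_aic_bic(clf, X):
    n, d = X.shape
    k = clf.n_components
    # free parameters of a full-covariance mixture: (k-1) weights + k*d means + k*d*(d+1)/2 covariances
    n_params = (k - 1) + k * d + k * d * (d + 1) // 2
    total_log_likelihood = clf.score(X) * n   # score() returns the mean per-sample log-likelihood
    aic = 2 * n_params - 2 * total_log_likelihood
    bic = n_params * np.log(n) - 2 * total_log_likelihood
    return aic, bic
print(gmm_aic_bic(clf, X))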
End of explanation
n_estimators = np.arange(1, 10)
clfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
Explanation: Let's take a look at these as a function of the number of gaussians:
End of explanation
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(y, 80, density=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
Explanation: It appears that for both the AIC and BIC, 4 components is preferred.
Example: GMM For Outlier Detection
GMM is what's known as a Generative Model: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is outlier detection: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
End of explanation
log_likelihood = np.array([clf.score_samples([[yy]]) for yy in y])
# log_likelihood = clf.score_samples(y[:, np.newaxis])[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
Explanation: Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of y:
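The cutoff of -9 used here is picked by hand; one simple data-driven alternative is to flag the lowest few percent of likelihoods (the 2% level below is arbitrary):
threshold = np.percentile(log_likelihood, 2)
detected_outliers_pct = np.where(log_likelihood < threshold)[0]
print(len(detected_outliers_pct), "points flagged with threshold", threshold)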
End of explanation
set(true_outliers) - set(detected_outliers)
Explanation: The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed:
End of explanation
set(detected_outliers) - set(true_outliers)
Explanation: And here are the non-outliers which were spuriously labeled outliers:
End of explanation
from sklearn.neighbors import KernelDensity
kde = KernelDensity(bandwidth=0.15).fit(x[:, None])
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
Explanation: Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
Other Density Estimators
The other main density estimator that you might find useful is Kernel Density Estimation, which is available via sklearn.neighbors.KernelDensity. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of every training point!
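The bandwidth of 0.15 above was chosen by eye; a common recipe is to pick it by cross-validated grid search over the likelihood, along these lines:
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity
params = {'bandwidth': np.logspace(-1, 1, 20)}
grid = GridSearchCV(KernelDensity(kernel='gaussian'), params, cv=5)
grid.fit(x[:, None])
print("best bandwidth: {0}".format(grid.best_params_['bandwidth']))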
End of explanation |
12,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Complexity, Overfitting and Underfitting
Step1: Validation Curves
Step2: Exercise
Plot the validation curve on the digit dataset for | Python Code:
from plots import plot_kneighbors_regularization
plot_kneighbors_regularization()
Explanation: Model Complexity, Overfitting and Underfitting
End of explanation
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve
digits = load_digits()
X, y = digits.data, digits.target
model = RandomForestClassifier(n_estimators=20)
param_range = range(1, 13)
training_scores, validation_scores = validation_curve(model, X, y,
param_name="max_depth",
param_range=param_range, cv=5)
training_scores.shape
training_scores
def plot_validation_curve(parameter_values, train_scores, validation_scores):
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
validation_scores_mean = np.mean(validation_scores, axis=1)
validation_scores_std = np.std(validation_scores, axis=1)
plt.fill_between(parameter_values, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(parameter_values, validation_scores_mean - validation_scores_std,
validation_scores_mean + validation_scores_std, alpha=0.1, color="g")
plt.plot(parameter_values, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(parameter_values, validation_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.ylim(validation_scores_mean.min() - .1, train_scores_mean.max() + .1)
plt.legend(loc="best")
plt.figure()
plot_validation_curve(param_range, training_scores, validation_scores)
Explanation: Validation Curves
End of explanation
# %load solutions/validation_curve.py
Explanation: Exercise
Plot the validation curve on the digit dataset for:
* a LinearSVC with a logarithmic range of regularization parameters C.
* KNeighborsClassifier with a linear range of neighbors n_neighbors.
What do you expect them to look like? How do they actually look?
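One possible sketch of the KNeighborsClassifier part, reusing validation_curve and plot_validation_curve from above (the LinearSVC variant is analogous, with param_name="C" and a logarithmic range such as np.logspace(-3, 2, 6)):
from sklearn.neighbors import KNeighborsClassifier
n_neighbors = range(1, 21)
train_scores, valid_scores = validation_curve(
    KNeighborsClassifier(), X, y,
    param_name="n_neighbors", param_range=n_neighbors, cv=5)
plt.figure()
plot_validation_curve(n_neighbors, train_scores, valid_scores)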
End of explanation |
12,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A class to represent Phase Noise
imports
Step1: Introduction
This document presents a class in the plldesigner Python module used to represent phase noise. The class has methods to compute the basic operations on phase noise, and they are
Step2: Noise integration
In order to find the integral of the phase noise when few points are available, it is necessary to be careful not to introduce an error due to the logarithmic characteristic of the curve. Simple numerical integration of the curve in the figure gives a large error, especially if few points are available. There are two ways to avoid this problem: one is to calculate the area under the curve between two points considering the log behavior of the curve [1]; the other way to deal with the asymptotic behavior of the noise is to do the integration under the assumption of a linear behavior after log-log conversion.
The following examples introduce the two methods and compare them with the result of the symbolic integration performed with Sympy [2].
Using Gardner equation
By default, the phase noise class integrates the noise using the equation found in the book of Gardner [1]. This can be done by using the integrate method as
Step3: Noise integration with the trapz method
Sometimes, when used with noisy power spectral densities such as the ones that result from transient simulations, the Gardner method produces large truncation errors. In this case it is better to interpolate points assuming the logarithmic tendency of the curve and then do a simple integration. This is implemented as an option in the integrate method and is illustrated in the following
Step4: Integration using Sympy
The same calculation is performed symbolically with Sympy, as illustrated in the next code
Step5: integrating over the same limits as before
Step6: This is exactly what we found numerically in previous examples
Generating samples of the phase noise
It is also possible to generate samples of the noise. For that, we generate AWGN noise in the frequency domain, weight it with the power spectral density, and calculate the inverse FFT to get samples back, in a similar fashion to the algorithm described in [3]. This notebook shows how to generate phase noise with a given power spectral density. The signal $\phi[n]$ is generated by first creating AWGN noise for every one of the frequency points where the spectrum is sampled. The signal constructed like that is then transformed to a time sequence by means of the inverse FFT.
Create noise with points and the slope
Step7: Generate AWGN noise and calculate the phase in the time domain
Step8: Notice that at low frequency offsets it is difficult to get the noise right, the reason being that the number of samples is small.
Create noise samples using the pnoise class
This algorithm is implemented as a method in the pnoise class and can be used conveniently, as described in the following code. | Python Code:
from __future__ import division
# Matrix computation
import numpy as np
from numpy import sqrt, diff, conj
from numpy.random import randn
# Signal processing routines
import scipy.signal as sig
# Symbolic math
import sympy as sym
sym.init_printing(use_latex='mathjax')
# Plotting
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['font.size'] = 12
matplotlib.rcParams['figure.figsize'] = (9,6)
matplotlib.rcParams['axes.labelsize'] = 'large'
matplotlib.rcParams['axes.grid'] = True
# Pll designer class to manipulate phase noise data
import plldesigner.pnoise as pn
Explanation: A class to represent Phase Noise
imports
End of explanation
fm = np.logspace(4,8,100)
lorentzian = pn.Pnoise(fm,10*np.log10(1/(fm*fm)),label='Lorentzian')
fig = lorentzian.plot()
white = pn.Pnoise(fm,-120*np.ones(fm.shape),label='white')
fig = white.plot()
added = lorentzian+white
added.label = 'addition'
# Ploting in semilogx
added.plot()
leg = plt.legend()
ix, = np.where(fm>1e6)
added.ldbc[ix[0]]
Explanation: Introduction
This document presents a class in the plldesigner Python module used to represent phase noise. The class has methods to compute the basic operations on phase noise, and they are:
* Integration
* Interpolation
* Addition assuming statistical independence
* Multiplication by a transfer function
* Plotting
* Model fit
The phase noise is represented by its single-sideband (SSB) spectrum:
$$
\mathcal{L}(f_m) = 10 \log{\frac{\phi(f_m)^{2}}{2}} (dBc/Hz),
$$
where $\phi(f_m)^2$ is the phase power spectral density (rad^2/Hz).
Noise addition
The addition is done assuming both signals are statistically independent. The following code shows an example of the definition of noise with two vectors and the resulting power spectral density.
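Under that independence assumption, the addition amounts to summing the two spectra in linear power units before converting back to dBc/Hz; a minimal standalone sketch of that operation (an illustration, not the library's actual implementation):
import numpy as np
def add_independent_ldbc(ldbc_a, ldbc_b):
    """Sum two SSB phase-noise curves given in dBc/Hz, assuming the sources are independent."""
    return 10 * np.log10(10 ** (np.asarray(ldbc_a) / 10.0) + 10 ** (np.asarray(ldbc_b) / 10.0))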
End of explanation
x = added.integrate()
x
Explanation: Noise integration
In order to find the integral of the phase noise when few points are available, it is necessary to be careful not to introduce an error due to the logarithmic characteristic of the curve. Simple numerical integration of the curve in the figure gives a large error, especially if few points are available. There are two ways to avoid this problem: one is to calculate the area under the curve between two points considering the log behavior of the curve [1]; the other way to deal with the asymptotic behavior of the noise is to do the integration under the assumption of a linear behavior after log-log conversion.
The following examples introduce the two methods and compare them with the result of the symbolic integration performed with Sympy [2].
Using Gardner equation
By default, the phase noise class integrates the noise using the equation found in the book of Gardner [1]. This can be done by using the integrate method as:
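For reference, here is a standalone sketch of this style of integration, assuming each segment of $\mathcal{L}(f_m)$ is a straight line on log-log axes (which is the idea behind the Gardner formula; this is not the library's code):
import numpy as np
def integrate_phase_noise(fm, ldbc):
    """Integrate an L(fm) curve (dBc/Hz) piecewise, assuming log-log straight segments.
    Returns the RMS phase in radians."""
    phi2 = 2.0 * 10 ** (np.asarray(ldbc) / 10.0)   # SSB dBc/Hz -> rad^2/Hz
    total = 0.0
    for i in range(len(fm) - 1):
        b = np.log10(phi2[i + 1] / phi2[i]) / np.log10(fm[i + 1] / fm[i])  # local slope
        a = phi2[i] / fm[i] ** b
        if np.isclose(b, -1.0):
            total += a * np.log(fm[i + 1] / fm[i])
        else:
            total += a / (b + 1.0) * (fm[i + 1] ** (b + 1.0) - fm[i] ** (b + 1.0))
    return np.sqrt(total)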
End of explanation
added.integrate(method='trapz')
Explanation: Noise integration with the trapz method
Sometimes, when used with noisy power spectral densities such as the ones that result from transient simulations, the Gardner method produces large truncation errors. In this case it is better to interpolate points assuming the logarithmic tendency of the curve and then do a simple integration. This is implemented as an option in the integrate method and is illustrated in the following:
End of explanation
fm,LdBc,fl,fh = sym.symbols('fm,LdBc,fl,fh', positive=True)
phi2_white = 2*10**(-120/10)
phi2_lorentzian = 2/(fm*fm)
phi2_added = phi2_white+phi2_lorentzian
fig = plt.figure()
ax = sym.plot(10*sym.log(phi2_added/2,10),(fm,1e4,1e8), xscale = 'log')
x = 10*sym.log(phi2_added.subs({fm:1e6})/2,10)
x.evalf()
phi2_int = sym.integrate(phi2_added,fm)
phi2_int
Explanation: Integration using Sympy
The same calculation is performed symbolically with Sympy, as illustrated in the next code:
End of explanation
sym.sqrt(phi2_int.subs({fm:1e8})-phi2_int.subs({fm:1e4}))
Explanation: integrating over the same limits as before
End of explanation
# Create a noise representation with the noise points and slopes
pnobj = pn.Pnoise.with_points_slopes([1e5, 1e6, 1e9],[-80,-100,-120],[-30,-20,0])
pnobj.plot('s')
pnobj_ext = pnobj
# Interpolate the noise in the logarithmic scale
npoints = 2**16
fs = 500e6
fm = np.linspace( fs/npoints, fs/2, npoints)
dfm = fm[1]-fm[0]
pnobj_ext.fm=fm
pnobj.plot('-', marker='x')
Explanation: This is exactly what we found numerically in previous examples
Generating samples of the phase noise
It is also possible to generate samples of the noise. For that, we generate AWGN noise in the frequency domain, weight it with the power spectral density, and calculate the inverse FFT to get samples back, in a similar fashion to the algorithm described in [3]. This notebook shows how to generate phase noise with a given power spectral density. The signal $\phi[n]$ is generated by first creating AWGN noise for every one of the frequency points where the spectrum is sampled. The signal constructed like that is then transformed to a time sequence by means of the inverse FFT.
Create noise with points and the slope
End of explanation
# create phase noise samples
def gen_phase_noise_samples(fm, ldbc_fm, npoints):
awgn_P1 = ( sqrt(0.5)*(randn(npoints) +1j*randn(npoints)) )
P = 2*10**(ldbc_fm/10)
dfm = fm[1]-fm[0]
X = 2 * (npoints-1) * sqrt( dfm * P ) * awgn_P1
X = np.r_[0,X, X.conj()[::-1]]
phi = np.fft.ifft(X)
return phi
phi = gen_phase_noise_samples(fm, pnobj.ldbc, npoints)
f, pxx = sig.welch(phi,fs, window='blackman', nperseg=2**12)
pnobj.fm = f[1:]
plt.semilogx(f[1:],10*np.log10(pxx[1:]/2))
ax = pnobj.plot('-')
Explanation: Generate AWGN noise and calculate the phase in the time domain
End of explanation
## create a phase noise model with the points and slopes
pnobj = pn.Pnoise.with_points_slopes([1e5, 1e6, 1e9],[-80,-100,-120],[-30,-20,0])
fs = 500e6
npoints = 2**16
phi = pnobj.generate_samples(npoints, fs)
##Calculate the power spectral density
f, pxx = sig.welch(phi,fs, window='blackman', nperseg=2**12)
pnobj.fm = f[1:]
plt.semilogx(f[1:],10*np.log10(pxx[1:]/2), label = 'Sampled noise')
pnobj.plot('-', label='Asymptotic Model')
leg = plt.legend()
Explanation: Notice that at low frequency offsets it is difficult to get the noise right, the reason being that the number of samples is small.
Create noise samples using the pnoise class
This algorithm is implemented as a method in the pnoise class and can be used conveniently, as described in the following code.
End of explanation |
12,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rabi model fitting
Step1: The model
According to the Rabi model, the probability of being in the excited state, $p_e$ is given by,
\begin{equation}
P_e = a_0 + a_1 \frac{\Omega^2}{W^2} \left[ 1 - \left( e^{-\frac{t}{T_1}} \cos \left( 2\pi Wt + \phi \right) \right) \right] + a_2 \left( 1 - e^{-\frac{t}{T_{decay}}} \right)
\end{equation}
where $W = \sqrt{\Omega^2 + \delta^2}$, $\Omega$ is the Rabi frequency, $\delta$ is the detuning, $T_1$ is the relaxational coherence time, and $t$ is time.
This gives a form which looks like the plot below.
Step2: Plot example model
Step3: Plot data and initial guess
Step4: Curve fitting | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.optimize import curve_fit
%matplotlib inline
Explanation: Rabi model fitting
End of explanation
def rabiModel(time, rabiFreq, T1, Tdec, phi, a0, a1, a2, detuning=0.0):
    phi_rad = phi*(np.pi/180)   # convert the phase offset from degrees to radians
W = np.sqrt(rabiFreq**2 + detuning**2)
ampl = rabiFreq**2 / W**2
    osci = np.cos(2*np.pi*W*time + phi_rad)
expo = np.exp(-time/T1)
decay = np.exp(-time/Tdec)
return a0 + a1*ampl*(1 - (expo * osci)) + a2 * (1 - decay)
Explanation: The model
According to the Rabi model, the probability of being in the excited state, $p_e$ is given by,
\begin{equation}
P_e = a_0 + a_1 \frac{\Omega^2}{W^2} \left[ 1 - \left( e^{-\frac{t}{T_1}} \cos \left( 2\pi Wt + \phi \right) \right) \right] + a_2 \left( 1 - e^{-\frac{t}{T_{decay}}} \right)
\end{equation}
where $W = \sqrt{\Omega^2 + \delta^2}$, $\Omega$ is the Rabi frequency, $\delta$ is the detuning, $T_1$ is the relaxational coherence time, and $t$ is time.
This gives a form which looks like the plot below.
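To see the role of the $\Omega^2/W^2$ prefactor before looking at that plot, one can overlay a few detuned curves; the parameter values below are purely illustrative and reuse rabiModel defined above:
time_demo = np.linspace(0.0, 7.0, 1000)
for delta in [0.0, 2.0, 4.0]:
    pe = rabiModel(time_demo, rabiFreq=3, T1=2, Tdec=20, phi=-65,
                   a0=0, a1=0.5, a2=-1.0, detuning=delta)
    plt.plot(time_demo, pe, label="detuning = {}".format(delta))
plt.xlabel("Time ($\mu s$)")
plt.ylabel("$P_e$")
plt.legend()
plt.grid()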
End of explanation
time_start = 0.0
time_end = 7.0
time_steps = 1000
time_fit = np.linspace(time_start,time_end,time_steps)
rabiFreq = 3
T1 = 2
Tdec = 20
phi = -65
a0 = 0
a1 = 0.5
a2 = -1.0
p_e_fit = rabiModel(time_fit, rabiFreq, T1, Tdec, phi, a0, a1, a2, detuning=0)
plt.plot(time_fit, p_e_fit, 'g-', label='$P_e (\Delta = 0)$')
plt.plot(time_fit, np.exp(-time_fit/T1), 'r--', label='$e^{-t/T_1}$')
plt.xlabel("Time, ($\mu s$)")
plt.ylabel("Prob. excited state, $P_e$")
plt.title("Rabi model")
plt.legend()
plt.ylim([0.0, 1.0])
plt.grid()
Explanation: Plot example model
End of explanation
d0,d1,d2,d3,d4,d5,d6,d7,d8,d9 = np.loadtxt('SR080317_026.dat',delimiter="\t",unpack=True)
time_exp = d1*1e6
p_e_exp = d4-min(d4)
p_e_exp = p_e_exp/max(p_e_exp)
cropNum = 170
time_exp = time_exp[1:len(time_exp)-cropNum]
p_e_exp = p_e_exp[1:len(p_e_exp)-cropNum]
time_start = 0.0
time_end = 6.0
time_steps = 1000
time_fit = np.linspace(time_start,time_end,time_steps)
# Initial guess
rabiFreq = 3
T1 = 2
Tdec = 20
phi = -65
a0 = 0
a1 = 0.5
a2 = -1.0
p_e_fit = rabiModel(time_fit, rabiFreq, T1, Tdec, phi, a0, a1, a2, detuning=0)
plt.plot(time_fit, p_e_fit, 'g-', label='$P_e (\Delta = 0)$')
plt.plot(time_exp,p_e_exp, 'b-', label='data')
plt.xlabel("Time, ($\mu s$)")
plt.ylabel("Prob. excited state, $P_e$")
plt.title("Initial guess")
plt.legend()
plt.ylim([0.0, 1.0])
plt.grid()
Explanation: Plot data and initial guess
End of explanation
guess = [rabiFreq, T1, Tdec, phi, a0, a1, a2]
popt,pcov = curve_fit(rabiModel, time_exp, p_e_exp, p0=guess)
perr = np.sqrt(np.diag(pcov))
params = ['rabiFreq', 'T1', 'Tdec', 'phi', 'a0', 'a1', 'a2']
for idx in range(len(params)):
print( "The fitted value of ", params[idx], " is ", popt[idx], " with error ", perr[idx] )
p_e_fit = rabiModel(time_fit,*popt)
plt.plot(time_fit, p_e_fit, 'g-', label='$P_e (\Delta = 0)$')
plt.plot(time_exp,p_e_exp, 'b-', label='data')
plt.xlabel("Time, ($\mu s$)")
plt.ylabel("Prob. excited state, $P_e$")
plt.title("Rabi model fit")
plt.legend()
plt.ylim([0.0, 1.0])
plt.grid()
Explanation: Curve fitting
End of explanation |
12,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encontro 03
Step1: Loading and visualizing the graph | Python Code:
import sys
sys.path.append('..')
import socnet as sn
Explanation: Encontro 03: Real Graphs
Importing the library:
End of explanation
sn.node_size = 3
sn.node_color = (0, 0, 0)
sn.edge_width = 1
sn.edge_color = (192, 192, 192)
sn.node_label_position = 'top center'
g = sn.load_graph('twitter.gml')
sn.show_graph(g, nlab=True)
Explanation: Loading and visualizing the graph:
End of explanation |
12,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network Analysis
This group exercise is designed to develop an understanding of basic network measures and to start participants thinking about interesting research questions that can be enabled by network science.
<ol>
<li>Divide yourselves into groups of four by counting off in order around the room.</li>
<li>For 10 minutes, explore the <a href="https
Step1: Introduction to networkx
For very small networks, it can be helpful to visualize the nodes and edges. Below we have colored the nodes with respect to their group within the karate club.
Step2: A natural question you might like to ask about a network is: what are the most "important" nodes? There are many definitions of network importance or centrality. Here let's just consider one of the most straightforward measures
Step3: NetworkX can be used to return a normalized (divided by the maximum possible degree of the network) degree centrality for all nodes in the network.
Step4: From both measures, we can see that nodes 1 and 34 have the highest degree. (These happen to be the two leaders from the two groups within the club.)
On large networks, you might want to look at the degree distribution of your network ...
Step5: Another network feature that you might like to know about your network is how assortative or modular it is. Another way of asking this is: how likely is it for similar nodes to be connected to each other? This similarity can be measured along any number of network attributes. Here we ask, how much more likely are nodes from the same group within the karate club connected to each other than we would expect at random?
Step6: You can also add edge attributes, either all at once using set_edge_attributes (like we did above for set_node_attributes), or on an edge by edge basis as shown below. The shortest path between two nodes using that weight can then be calculated.
Step7: Lastly, one might want to create a function on top of these networks. For example, to measure the average degree of a node's neighbors | Python Code:
with open('karate_edges_77.txt', 'rb') as file:
karate_club = nx.read_edgelist(file) # Read in the edges
groups = {}
with open('karate_groups.txt', 'r') as file:
for line in file:
[node, group] = re.split(r'\t+', line.strip())
groups[node] = int(group)
nx.set_node_attributes(karate_club, name = 'group', values = groups) # Add attributes to the nodes (e.g. group membership)
Explanation: Network Analysis
This group exercise is designed to develop an understanding of basic network measures and to start participants thinking about interesting research questions that can be enabled by network science.
<ol>
<li>Divide yourselves into groups of four by counting off in order around the room.</li>
<li>For 10 minutes, explore the <a href="https://icon.colorado.edu/#!/networks">Index of Complex Networks (ICON)</a> database and identify a network your group might like to investigate further. (If someone in your group has a network ready, that you'd all like to analyze feel free to work on this network instead.)</li>
<li>Write code to import this network into Python. Play with the <a href="https://networkx.github.io/documentation/stable/reference/algorithms/index.html">built-in functionality</a> of `networkx`. (See the code below for help with this step.)</li>
<li>For 15 minutes, identify a possible research question using this data. Evaluate the strengths and weaknesses of this data.</li>
<li>Outline a research design that could be used to address the weaknesses of the data you collected (e.g. think about possible data sets you could combine with this network), or otherwise improve your ability to answer the research question.</li>
</ol>
There is only one requirement: the group member with the least amount of experience coding should be responsible for typing the code into a computer. After 40 minutes you should be prepared to give a 3 minute presentation of your work. Remember that these daily exercises are for you to get to know each other better, are not expected to be fully-fleshed-out research projects, and are a way for you to explore research areas that may be new to you.
Importing ICON data
Visit the ICON website (<a href="https://icon.colorado.edu/#!/networks">link</a>). You can search the index using the checkboxes under the tabs "network domain," "subdomain," "graph properties," and "size". You can also type in keywords related to the network you would like to find. Here is a screenshot:
<img src="https://user-images.githubusercontent.com/6633242/45270410-79e66a00-b45a-11e8-83df-852d919cdcec.png"></img>
To download a network, click the small yellow downward arrow and follow the link listed under "source". Importing this data into Python using networkx will depend on the file type of the network you download. (Check out the <a href="https://networkx.github.io/documentation/stable/reference/readwrite/index.html">package's documentation</a> for how to import networks from different file types.)
Here's what it looks like to import the Zachary Karate Club from the edge list provided:
End of explanation
position = nx.spring_layout(karate_club)
nx.draw_networkx_labels(karate_club, pos = position)
colors = [] # Color the nodes according to their group
for attr in nx.get_node_attributes(karate_club, 'group').values():
if attr == 1: colors.append('blue')
else: colors.append('green')
nx.draw(karate_club, position, node_color = colors) # Visualize the graph
Explanation: Introduction to networkx
For very small networks, it can be helpful to visualize the nodes and edges. Below we have colored the nodes with respect to their group within the karate club.
End of explanation
print([(n, karate_club.degree(n)) for n in karate_club.nodes()])
Explanation: A natural question you might like to ask about a network is: what are the most "important" nodes? There are many definitions of network importance or centrality. Here let's just consider one of the most straightforward measures: degree centrality -- the number of edges that start or end at a given node.
End of explanation
degrees = nx.degree_centrality(karate_club)
print(degrees)
Explanation: NetworkX can be used to return a normalized (divided by the maximum possible degree of the network) degree centrality for all nodes in the network.
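As a quick sanity check of what "normalized" means here, the value for each node is just its degree divided by n - 1:
n_nodes = karate_club.number_of_nodes()
manual = {node: karate_club.degree(node) / (n_nodes - 1) for node in karate_club.nodes()}
print(all(abs(manual[node] - degrees[node]) < 1e-12 for node in karate_club.nodes()))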
End of explanation
# Enron email data set: http://snap.stanford.edu/data/email-Enron.html.
# (You can search "Email network (Enron corpus)" in ICON.)
with open('email_enron.txt', 'rb') as file:
enron = nx.read_edgelist(file, comments='#') # Read in the edges
print("Enron network contains {0} nodes, and {1} edges.".format(len(enron.nodes()), len(enron.edges())))
degree_sequence = list(dict(enron.degree()).values())
print("Average degree: {0}, Maximum degree: {1}".format(np.mean(degree_sequence), max(degree_sequence)))
plt.hist(degree_sequence, bins=30) # Plots histogram of degree sequence
plt.show()
Explanation: From both measures, we can see that nodes 1 and 34 have the highest degree. (These happen to be the two leaders from the two groups within the club.)
On large networks, you might want to look at the degree distribution of your network ...
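Because degree distributions of large networks are often heavy-tailed, a log-log plot of the degree counts is usually more informative than the linear histogram above; for example:
values, counts = np.unique(degree_sequence, return_counts=True)
plt.loglog(values, counts, 'o', markersize=3)  # number of nodes at each degree, on log-log axes
plt.xlabel("Degree")
plt.ylabel("Number of nodes")
plt.show()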
End of explanation
assort = nx.attribute_assortativity_coefficient(karate_club, 'group')
print("Assortativity coefficient: {0}".format(assort))
Explanation: Another network feature that you might like to know about your network is how assortative or modular it is. Another way of asking this is: how likely is it for similar nodes to be connected to each other? This similarity can be measured along any number of network attributes. Here we ask, how much more likely are nodes from the same group within the karate club connected to each other than we would expect at random?
End of explanation
# Example borrowed from: https://www.cl.cam.ac.uk/teaching/1314/L109/tutorial.pdf
g = nx.Graph()
g.add_edge('a', 'b', weight=0.1)
g.add_edge('b', 'c', weight=1.5)
g.add_edge('a', 'c', weight=1.0)
g.add_edge('c', 'd', weight=2.2)
print(nx.shortest_path(g, 'b', 'd'))
print(nx.shortest_path(g, 'b', 'd', weight='weight'))
Explanation: You can also add edge attributes, either all at once using set_edge_attributes (like we did above for set_node_attributes), or on an edge by edge basis as shown below. The shortest path between two nodes using that weight can then be calculated.
End of explanation
# Example borrowed from: https://www.cl.cam.ac.uk/teaching/1314/L109/tutorial.pdf
def avg_neigh_degree(g):
data = {}
for n in g.nodes():
if g.degree(n):
data[n] = float(sum(g.degree(i) for i in g[n]))/g.degree(n)
return data
avg_neigh_degree(g) # Can you confirm that this is returning the correct results?
Explanation: Lastly, one might want to create a function on top of these networks. For example, to measure the average degree of a node's neighbors:
End of explanation |
12,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary
The iterative reweighted TF-MxNE solver is a distributed inverse method
based on the TF-MxNE solver, which promotes focal (sparse) sources
Step1: Load somatosensory MEG data
Step2: Run iterative reweighted multidict TF-MxNE solver
Step3: Generate stc from dipoles
Step4: Show the evoked response and the residual for gradiometers | Python Code:
# Author: Mathurin Massias <[email protected]>
# Yousra Bekhti <[email protected]>
# Daniel Strohmeier <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import mne
from mne.datasets import somato
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import plot_sparse_source_estimates
print(__doc__)
Explanation: Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary
The iterative reweighted TF-MxNE solver is a distributed inverse method
based on the TF-MxNE solver, which promotes focal (sparse) sources
:footcite:StrohmeierEtAl2015. The benefits of this approach are that:
* it is spatio-temporal without assuming stationarity (source properties can vary over time),
* activations are localized in space, time, and frequency in one step,
* the solver uses non-convex penalties in the TF domain, which results in a solution less biased towards zero than when simple TF-MxNE is used,
* using a multiscale dictionary allows capturing short transient activations along with slower brain waves :footcite:BekhtiEtAl2016.
End of explanation
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
fwd_fname = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
# Read evoked
raw = mne.io.read_raw_fif(raw_fname)
raw.pick_types(meg=True, eog=True, stim=True)
events = mne.find_events(raw, stim_channel='STI 014')
reject = dict(grad=4000e-13, eog=350e-6)
event_id, tmin, tmax = dict(unknown=1), -0.5, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, reject=reject,
baseline=(None, 0))
evoked = epochs.average()
evoked.crop(tmin=0.0, tmax=0.2)
# Compute noise covariance matrix
cov = mne.compute_covariance(epochs, rank='info', tmax=0.)
del epochs, raw
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: Load somatosensory MEG data
End of explanation
alpha, l1_ratio = 20, 0.05
loose, depth = 0.9, 1.
# Use a multiscale time-frequency dictionary
wsize, tstep = [4, 16], [2, 4]
n_tfmxne_iter = 10
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio,
n_tfmxne_iter=n_tfmxne_iter, loose=loose,
depth=depth, tol=1e-3,
wsize=wsize, tstep=tstep, return_as_dipoles=True,
return_residual=True)
Explanation: Run iterative reweighted multidict TF-MxNE solver
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
plot_sparse_source_estimates(
forward['src'], stc, bgcolor=(1, 1, 1), opacity=0.1,
fig_name=f"irTF-MxNE (cond {evoked.comment})")
Explanation: Generate stc from dipoles
End of explanation
ylim = dict(grad=[-300, 300])
evoked.copy().pick_types(meg='grad').plot(
titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim)
residual.copy().pick_types(meg='grad').plot(
titles=dict(grad='Residuals: Gradiometers'), ylim=ylim)
Explanation: Show the evoked response and the residual for gradiometers
End of explanation |
12,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reproducing "Observations on the statistical iterations of matrices" by J.H. Hetherington*
I. Introduction
We can use stochastic iteration to effect the power method for sampling the extremal eigenvalues/eigenvectors of a matrix with sampling access to a matrix-vector product. In the context of many-body physics, this matrix might be a Hamiltonian and we might be interested in, e.g., an approximation of the affiliated Green's function as in diffusion Monte Carlo.
Naive stochastic iteration can result in two effects that we would like to mitigate if we are to use it
Step1: V. The Dilemma of the Symmetric Matrix
Of course, none of this is particularly useful as it assumes that we know the spectrum and eigenvectors a priori. Next we move onto the central example of the paper in which we estimate the largest eigenvalue of a symmetric non-negative matrix (something that is starting to look like a Hamiltonian).
VI. Using Non-Stochastic Matrices
Assume for a moment that we have access to a factorized form of a symmetric, non-negative (but non-stochastic) matrix. We will demonstrate via stochastic iteration that we can estimate the largest eigenvalue of this matrix.
Step2: To estimate the largest eigenvector of A, we will now generate a Markov Chain from the probabilities in the stochastic factor, and weight the configurations with the respective elements of the diagonal factor. We will then take the estimate of the eigenvector, and extract its associated eigenvalue.
Step3: This is a decent estimate, but not perfect. Looks good, right?
Step4: It turns out that the weights used in our Markov Chain explode, which means that we cannot carry this out indefinitely to reduce variance.
Step5: Running this calculation for many different chain lengths, we see that increasing the chain length does not appear to generally make things better. Of course, this calculation is only for a single walker.
VII. Difficulties Arise
Hetherington quantifies the difficulties that we are observing by plotting the distribution of weights in Figure 2. Let us reproduce this plot
Step6: An important thing to note in comparing this with the paper is that depending on the length of the Markov Chain (in this case 50) there are certain values of n that will never be reached. That is why every other entry (all of the even values of n) have probability 0.
For the interested reader, it might be fun to watch the separation between these two sets of peaks grow as the length of the Markov Chain increases.
What is really important here?
* The most probable answer is not the same as the average answer.
* The variance associated with estimators from this Markov chain will grow with the length of the chain.
* The formal variance is non-monotonic in the length of the chain but it quickly turns over to exponential growth. This means that there is an optimal sequence length, beyond which continued iteration will help less than reducing statistical error by increasing the number of walkers.
Next we reproduce Figure 3. Here we demonstrate that the eigenvector estimate not only looks very different from the actual eigenvector, but also that it does not resemble the eigenvector estimate predicted via matrix multiplication (i.e., deterministic power method).
Step7: VIII. The Weighted Average
Rather than using the eigenvector itself to compute the eigenvalue estimate, this section introduces a functional form for an estimator. We use it to reproduce Figure 4.
Step8: Here we see the fundamental tradeoff that we have to deal with. Even for a Markov Chain with 20,000 iterations we have a choice between problems | Python Code:
# construct an exemplary non-negative matrix
A = zeros([2,2])
A[0,0] = 2.0
A[1,1] = 1.0
A[1,0] = 3.0
A[0,1] = 4.0
print 'Matrix A: \n', A
# verify that columns do not sum to 1
print 'Column 0 sum: ', sum(A[:,0])
print 'Column 1 sum: ', sum(A[:,1])
print 'Matrix A is non-negative but the columns do not sum to 1'
# compute the eigenvalues and left/right eigenvectors
w, vl, vr = eig(A, left=True)
# fill out the entries of the stochastic part of A in the silliest way possible
# direct application of Equation 2
M = zeros([2,2])
scl = 1.0/abs(w[0])
M[0,0] = scl*vl[0,0]*A[0,0]/vl[0,0]
M[0,1] = scl*vl[0,0]*A[0,1]/vl[1,0]
M[1,0] = scl*vl[1,0]*A[1,0]/vl[0,0]
M[1,1] = scl*vl[1,0]*A[1,1]/vl[1,0]
print 'Stochastic factor of A (M): \n', M
# compute the eigenvalues and left/right eigenvectors
wStoc, vlStoc, vrStoc = eig(M, left=True)
print 'Eigenvalues of M (largest is 1): ', wStoc
print 'Left eigenvectors of M (eigenvector corresponding to 1 is ~ to all 1s): \n', vlStoc
# compute the diagonal factor of A (below Equation 4)
wDiag = eye(2)
wDiag[0,0] *= sum(A[:,0])
wDiag[1,1] *= sum(A[:,1])
print 'Diagonal factor of A (w): \n', wDiag
print 'Matrix A: \n', A
print 'Product M*w: \n', dot(M,wDiag)
Explanation: Reproducing "Observations on the statistical iterations of matrices" by J.H. Hetherington*
I. Introduction
We can use stochastic iteration to effect the power method for sampling the extremal eigenvalues/eigenvectors of a matrix with sampling access to a matrix-vector product. In the context of many-body physics, this matrix might be a Hamiltonian and we might be interested in, e.g., an approximation of the affiliated Green's function as in diffusion Monte Carlo.
Naive stochastic iteration can result in two effects that we would like to mitigate if we are to use it:
Seemingly anomalous growth in variance as the number of iterations increases.
Stable or reducing variance, but the introduction of bias.
In this notebook, we will walk through the results in Hetherington's paper and reproduce them one by one. This culminates in a simple stochastic reconfiguration implementation for controlling variance and bias with a fixed population of walkers.
*Fun fact: Hetherington is one of a select few physicists to have co-authored a paper with a domestic cat (https://en.wikipedia.org/wiki/F.D.C._Willard)
II and III. Stochastic Matrices and Markov Chains
Most of us probably know what a stochastic matrix is. It is a matrix that is:
* Comprised of non-negative entries...
* ...and columns that sum to 1
We might interpret the entries of such a matrix as probabilities. The rows and columns of our matrix comprise a discrete state space through which we might imagine a system evolving. The entry in the ith row and jth column is then to be interpreted as the probability of the system transitioning into basis state i, given that it is found in basis state j.
What do we know about these matrices?
* They have a left eigenvector proportional to all 1s, with eigenvalue 1.
* 1 is the largest eigenvalue.
* If the matrix cannot be put in block diagonal form via permutation, this maximum eigenvalue is non-degenerate.
If we conceive of repeated application of this matrix as applying the power method to project out the largest eigenvector/eigenvalue of our matrix (which we assume to be non-degenerate), then the relationship between the properties of this type of matrix and the properties of a Markov Chain are evident. This largest eigenvector is, indeed, the stationary distribution of some Markov Chain. Thus, we see that there is a relationship between a Markov Chain generated by iterating some matrix and its extremal eigenvalues.
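A quick numerical check of these properties (print statements follow the Python 2 style used in the rest of this notebook):
from numpy import array, ones, dot
P = array([[0.9, 0.3],
           [0.1, 0.7]])                 # non-negative entries, columns sum to 1
print 'all-ones left vector maps to itself: ', dot(ones(2), P)
v = array([1.0, 0.0])
for _ in range(50):                      # repeated application of P (power method)
    v = dot(P, v)
print 'stationary distribution: ', v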
IV. Non-negative Matrices
What happens if, rather than considering matrices that are both non-negative and have columns that sum to unity, we consider more general non-negative matrices? We can factorize these matrices into a product of a stochastic matrix and a diagonal matrix. In Equation 2* of Hetherington's paper, he demonstrates that a non-negative matrix can be "made stochastic" given access to its extremal left eigenpair. Below we demonstrate how to do this for a contrived 2x2 example:
*Note: Equation 2 has a typo, it should be $M_{ij} = \frac{1}{\lambda} Z_i A_{ij} Z_{j}^{-1}$
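The element-by-element construction in the cell above can equivalently be written in one vectorized step with numpy broadcasting, reusing A, vl, and w from that cell:
Z = vl[:,0]
Mvec = (Z[:,None] * A / Z[None,:]) / abs(w[0])   # M_ij = Z_i A_ij / (lambda Z_j)
print 'Vectorized stochastic factor: \n', Mvec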
End of explanation
# create the SYMMETRIC A matrix of interest in the paper
A = zeros([2,2])
scl = 1.0/sqrt(6.0*4.0)
A[0,0] = scl*5
A[0,1] = scl*1
A[1,0] = scl*1
A[1,1] = scl*3
w, vl, vr = eig(A, left=True)
# we know that the eigenvalues are real, so just print them out as such
print 'Eigenvalues of A: ', real(w)
wExact = real(w[0])
# create the diagonal factor of A
w = zeros([2,2])
w[0,0] = sum(A[:,0])
w[1,1] = sum(A[:,1])
print 'Diagonal factor of A: \n', w
# create the stochastic factor of A
M = zeros([2,2])
M[0,0] = A[0,0]/w[0,0]
M[0,1] = A[0,1]/w[1,1]
M[1,0] = A[1,0]/w[0,0]
M[1,1] = A[1,1]/w[1,1]
print 'Stochastic factor of A: \n', M
Explanation: V. The Dilemma of the Symmetric Matrix
Of course, none of this is particularly useful as it assumes that we know the spectrum and eigenvectors a priori. Next we move onto the central example of the paper in which we estimate the largest eigenvalue of a symmetric non-negative matrix (something that is starting to look like a Hamiltonian).
VI. Using Non-Stochastic Matrices
Assume for a moment that we have access to a factorized form of a symmetric, non-negative (but non-stochastic) matrix. We will demonstrate via stochastic iteration that we can estimate the largest eigenvalue of this matrix.
End of explanation
# the paper makes a point of using weights that are reciprocals,
# such that the weight of a given configuration is (6/4)**(n/2)...
# given such a weight, this routine computes the integer n
def getN(weight):
return int(log(weight**2)/log(6.0/4.0))
def estimateEig(nIterations, M, w):
# starting from state 0
currentState = 0
weight = zeros(nIterations)
weight[0] = 1.0 # start with a weight of 1
nvec = zeros(nIterations) # for keeping track of the weight (in terms of its integer power) at each iteration
nvec[0] = 0
rvec = rand(nIterations-1)
# for storing the running estimate of the largest eigenvector of A
eigEst = zeros([2,nIterations])
eigEst[0,0] = M[0,currentState]*weight[0]
eigEst[1,0] = M[1,currentState]*weight[0]
for i in range(1,nIterations):
# the probability of staying in the current state is given by a diagonal element of the stochastic factor
# ...and because this is a 2-state example, this is enough to constrain the probability of switching into the other state
probStay = M[currentState,currentState]
# if random number is greater than probability of staying,
# then for high probability of staying, probStay is close to 1
# and it is less likely that a number will be greater than probStay
# so you should switch if this does happen
if(rvec[i-1]>probStay):
currentState = mod(currentState+1,2)
weight[i] = w[currentState,currentState]*weight[i-1]
nvec[i] = getN(weight[i])
eigEst[0,i] = eigEst[0,i-1] + M[0,currentState]*weight[i]
eigEst[1,i] = eigEst[1,i-1] + M[1,currentState]*weight[i]
return weight, eigEst
nIterations = 1000
weight, eigEst = estimateEig( nIterations, M, w )
# given the unnormalized eigenvector estimate, normalize it and compute the associated eigenvalue
# looks kind of like the local energy, yes?
nrmEst = sqrt(dot(eigEst[:,nIterations-1],eigEst[:,nIterations-1]))
eigNrm = eigEst[:,nIterations-1]/nrmEst
print 'Estimate of largest eigenvalue of A: ', dot(eigNrm,dot(A,eigNrm))
print 'Actual eigenvalue: ', wExact
Explanation: To estimate the largest eigenvector of A, we will now generate a Markov Chain from the probabilities in the stochastic factor, and weight the configurations with the respective elements of the diagonal factor. We will then take the estimate of the eigenvector, and extract its associated eigenvalue.
End of explanation
# weight vs iteration for the 1000 iteration Markov Chain
figure(figsize=[12,10])
plot(log10(weight))
xlabel('Markov Chain Iteration')
ylabel('Log10 Weight')
title('Demonstration that the weights explode with the number of iterations')
Explanation: This is a decent estimate, but not perfect. Looks good, right?
End of explanation
# vary the length of the Markov chain
iterationList = linspace(1,1000,1000)
wEstList = zeros(len(iterationList))
for idx,nIterations in enumerate(iterationList):
weight, eigEst = estimateEig( int(nIterations), M, w )
nrmEst = sqrt(dot(eigEst[:,nIterations-1],eigEst[:,nIterations-1]))
eigNrm = eigEst[:,nIterations-1]/nrmEst
wEstList[idx] = dot(eigNrm,dot(A,eigNrm))
# estimates vs length of Markov chain
figure(figsize=[12,10])
plot(iterationList, wEstList, 'o', label='Estimated eigenvalue')
axhline(wExact, color='black', label='Exact eigenvalue')
xlabel('Length of Markov Chain')
ylabel('Eigenvalue estimate')
title('Stochastic iteration to compute eigenvalue estimate from estimated eigenvector')
legend()
Explanation: It turns out that the weights used in our Markov Chain explode, which means that we cannot carry this out indefinitely to reduce variance.
End of explanation
nWalkers = 500000
nIterations = 50
ensembleP = zeros([nIterations*2+1,2])
def getN(weight):
return int(round(log(weight**2)/log(6.0/4.0)))
for walker in arange(nWalkers):
# decide randomly on the initial state
if(rand(1)<0.5):
currentState = 0
increment = True
else:
currentState = 1
increment = False
nvec = zeros(nIterations,dtype=int)
nvec[0] = 0
rvec = rand(nIterations-1)
for i in range(1,nIterations):
probStay = M[currentState,currentState]
if(increment):
nvec[i] = nvec[i-1] + 1
else:
nvec[i] = nvec[i-1] - 1
if(rvec[i-1]>probStay):
currentState = mod(currentState+1,2)
increment = not(increment)
ensembleP[nIterations+1+nvec[-1],currentState] += 1.0
n = linspace(-nIterations,nIterations,2*nIterations+1)
figure(figsize=[12,10])
plot(n,ensembleP[:,0], 'o', label='i=1')
plot(n,ensembleP[:,1], 'o', label='i=2')
xlabel('n')
ylabel('p(n,i)')
title('Reproduction of Fig. 2a')
legend()
xlim([-30,50])
figure(figsize=[12,10])
plot(n,ensembleP[:,0]*(1.5)**(n/2.), 'o', label='i=1')
plot(n,ensembleP[:,1]*(1.5)**(n/2.), 'o', label='i=2')
xlabel('n')
ylabel('wn p(n,i)')
title('Reproduction of Fig. 2b')
xlim([-30,50])
legend()
Explanation: Running this calculation for many different chain lengths, we see that increasing the chain length does not appear to generally make things better. Of course, this calculation is only for a single walker.
VII. Difficulties Arise
Hetherington quantifies the difficulties that we are observing by plotting the distribution of weights in Figure 2. Let us reproduce this plot:
End of explanation
nWalkers = 512
nIterations = 500
ensembleX = zeros([nIterations,2])
ensembleX[0,0] = 1.0
ensembleX[0,1] = 0.0
def getN(weight):
return int(round(log(weight**2)/log(6.0/4.0)))
for walker in arange(nWalkers):
# do not decide randomly on the initial state, you know it is 0
currentState = 0
nvec = zeros(nIterations,dtype=int)
nvec[0] = 0
rvec = rand(nIterations-1)
weight = zeros(nIterations)
weight[0] = 1.0
for i in range(1,nIterations):
probStay = M[currentState,currentState]
if(rvec[i-1]>probStay):
currentState = mod(currentState+1,2)
weight[i] = w[currentState,currentState]*weight[i-1]
ensembleX[i,:] += M[:,currentState]*weight[i]
analX = zeros([nIterations,2])
analX[0,0] = 1.0
analX[0,1] = 0.0
for i in range(1,nIterations):
analX[i,:] = dot(A,analX[i-1,:])
figure(figsize=[12,10])
plot(arctan2(ensembleX[:,1],ensembleX[:,0]),'.', label='Stochastic Iteration')
plot(arctan2(analX[:,1],analX[:,0]), '-', label='Matrix Multiplication')
xlabel('N')
ylabel('arctan($X_2/X_1$)')
title('Exponential Growth in Error for Stochastic Eigenvector Estimate')
legend()
figure(figsize=[12,10])
xlim([0,50])
ylim([0,0.7])
plot(arctan2(ensembleX[:,1],ensembleX[:,0]),'.', label='Stochastic Iteration')
plot(arctan2(analX[:,1],analX[:,0]), '-', label='Matrix Multiplication')
xlabel('N')
ylabel('arctan($X_2/X_1$)')
title('Exponential Growth in Error for Stochastic Eigenvector Estimate')
legend()
Explanation: An important thing to note in comparing this with the paper is that depending on the length of the Markov Chain (in this case 50) there are certain values of n that will never be reached. That is why every other entry (all of the even values of n) have probability 0.
For the interested reader, it might be fun to watch the separation between these two sets of peaks grow as the length of the Markov Chain increases.
What is really important here?
* The most probable answer is not the same as the average answer.
* The variance associated with estimators from this Markov chain will grow with the length of the chain.
* The formal variance is non-monotonic in the length of the chain but it quickly turns over to exponential growth. This means that there is an optimal sequence length, beyond which continued iteration will help less than reducing statistical error by increasing the number of walkers.
Next we reproduce Figure 3. Here we demonstrate that the eigenvector estimate not only looks very different from the actual eigenvector, but also that it does not resemble the eigenvector estimate predicted via matrix multiplication (i.e., deterministic power method).
End of explanation
def weightedAverage( L, nTrials ):
nIterations = 20000
M = zeros([2,2])
M[0,0] = 5./6.
M[0,1] = 1./4.
M[1,0] = 1./6.
M[1,1] = 3./4.
w = zeros([2,2])
w[0,0] = sqrt(6./4.)
w[1,1] = sqrt(4./6.)
lambdaEsts = zeros(nTrials)
for trial in arange(nTrials):
# do not decide randomly on the initial state, you know it is 0
currentState = 0
Gnvec = zeros(nIterations)
wvec = zeros(nIterations)
rvec = rand(nIterations)
weightList = []
cweight = w[currentState,currentState]
wvec[0] = cweight
Gnvec[0] = 1.0
weightList.append( cweight )
for i in range(1,nIterations):
probStay = M[currentState,currentState]
if(rvec[i]>probStay):
currentState = mod(currentState+1,2)
cweight = w[currentState,currentState]
wvec[i] = cweight
if(L==0):
Gnvec[i] = 1.0
else:
Gnvec[i] = reduce( lambda x, y: x*y, weightList )
weightList.append(cweight)
if(len(weightList)>L):
weightList.pop(0)
lambdaEsts[trial] = dot(wvec,Gnvec)/sum(Gnvec)
return mean(lambdaEsts), var(lambdaEsts)
# this guy will take like 5 minutes to run
Lvals = [0,1,2,3,4,5,6,7,12,18,20,24,30,35,41,48]
NL = len(Lvals)
lammeans = zeros(NL)
lamvars = zeros(NL)
for idx,L in enumerate(Lvals):
lammeans[idx], lamvars[idx] = weightedAverage( L, 50 )
figure(figsize=[12,10])
errorbar(Lvals, lammeans, yerr=100*lamvars,fmt='o',label='Stochastic Estimate')
axhline(1.10517,color='black', label='Exact Eigenvalue')
ylim([1.04,1.14])
mpl.rcParams['font.size']=18
xlabel('L')
ylabel('$\lambda$(L)')
title('Reproduction of Fig. 4')
Explanation: VIII. The Weighted Average
Rather than using the eigenvector itself to compute the eigenvalue estimate, this section introduces a functional form for an estimator. We use it to reproduce Figure 4.
End of explanation
# the input variables are:
# -L : number of iterations to average over
# -M : population
# -N : length of Markov chain
# -S : S+1 = number of configurations to average over at the end of the chain
def stochasticReconfiguration( L, M, N, S ):
# p = the probabilities of staying in state 0 or 1
p = zeros(2)
w = zeros(2)
p[0] = 5./6.
p[1] = 3./4.
# the weights affiliated with states 0 and 1
w[0] = sqrt(6./4.)
w[1] = sqrt(4./6.)
# all walkers start in state 0
popStates = zeros(M)
popWeights = w[0]*ones(M)
# allocate the global weight
globalWeight = zeros(N+1)
# initialize for the first iteration
globalWeight[0] = w[0]
# iterate over the length of the Markov chain
for i in range(1,N+1):
# compute a set of random numbers for testing each moving in the population, at this iteration
rvec = rand(M)
# accumulate population weight
totalWeight = 0.0
# accumulate the global weight
for walker in range(M):
cState = popStates[walker]
if(rvec[walker]>p[cState]):
popStates[walker] = mod(cState+1,2)
popWeights[walker] = w[cState]
totalWeight += w[cState]
globalWeight[i] = totalWeight/float(M)
# reconfigure
cumWeights = cumsum(popWeights)
rvec = rand(M)
tmpStates = copy.deepcopy(popStates)
tmpWeights = copy.deepcopy(popWeights)
for walker in range(M):
choice = bisect.bisect( cumWeights, rvec[walker]*totalWeight )
popStates[walker] = tmpStates[choice]
popWeights[walker] = tmpWeights[choice]
Gvec = zeros(S+1)
for s in range(0,S+1):
Gvec[s] = prod( globalWeight[N-s-L:N-s] )
lambdaEst = dot(globalWeight[N-S:],Gvec)/sum(Gvec)
return lambdaEst, globalWeight
# nTrials for statistics
nTrials = 10
# 2,000 samples per test
LList = [0,1,2,3,4,5,6,7,12,18,20,24,30,35,41,48]
LEst = zeros(len(LList))
LVar = zeros(len(LList))
for idx,L in enumerate(LList):
trialResults = zeros(nTrials)
M = 30
N = 2000
for trial in range(nTrials):
lE, gW = stochasticReconfiguration(L, M, N, N-1)
trialResults[trial] = lE
LEst[idx] = mean(trialResults)
LVar[idx] = var(trialResults)
figure(figsize=[12,10])
mpl.rcParams['font.size']=12
errorbar( LList, LEst, yerr=LVar, fmt='o' )
axhline(1.10517)
xlabel('L')
ylabel('Eigenvalue')
title('For M=30 walkers, the small L bias is removed')
ylim([1.04,1.14])
# nTrials for statistics
nTrials = 10
# 2,000 samples per test
LList = [0,1,2,3,4,5,6,7,12,18,20,24,30,35,41,48]
LEst = zeros(len(LList))
LVar = zeros(len(LList))
for idx,L in enumerate(LList):
trialResults = zeros(nTrials)
M = 2
N = 2000
for trial in range(nTrials):
lE, gW = stochasticReconfiguration(L, M, N, N-1)
trialResults[trial] = lE
LEst[idx] = mean(trialResults)
LVar[idx] = var(trialResults)
figure(figsize=[12,10])
mpl.rcParams['font.size']=12
errorbar( LList, LEst, yerr=LVar, fmt='o' )
axhline(1.10517)
xlabel('L')
ylabel('Eigenvalue')
title('For M=2 walkers, the small L bias is removed (and made worse)')
ylim([1.04,1.14])
Explanation: Here we see the fundamental tradeoff that we have to deal with. Even for a Markov Chain with 20,000 iterations we have a choice between problems:
If we truncate the product of weights to L<10, we have small variance but statistical bias.
If we let the product of weights increase to encompass the whole chain, the variance will grow exponentially with L.
Reconfiguration is the solution to this problem.
IX, X, and XI. Carrying Many Configurations Simultaneously (and More)
The solution to these problems is to carry many walkers together, and to assign a global weight to the group of walkers. The algorithm implemented goes through the following:
Create a population of walkers and sum up their total weight.
Randomly move each walker individually as before.
Compute the population averaged weight, which will be used in the estimator.
Choose to copy (or not copy) walkers among this fixed population based upon their relative weights.
Repeat.
The basic idea here is that a single walker is liable to have weights that explode. Rather than relying on these individual weights, we aggregate them every so often (in this case every iteration) and reconfigure the walker population relative to their cumulative weights. This has the net effect of damping out huge fluctuations in the weights, while introducing a statistical bias that scales inversely with the walker population.
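In symbols, with $\bar W_i$ the population-averaged weight at iteration $i$ (the globalWeight array in the code above) and $M$ walkers, the estimator takes roughly the form
$$\lambda(L) \approx \frac{\sum_{s=0}^{S} \bar W_{N-s}\, \bar G_s^{(L)}}{\sum_{s=0}^{S} \bar G_s^{(L)}}, \qquad \bar W_i = \frac{1}{M}\sum_{\alpha=1}^{M} w_\alpha^{(i)}, \qquad \bar G_s^{(L)} = \prod_{j=N-s-L}^{N-s-1} \bar W_j .$$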
End of explanation |
12,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 4
Step1: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Loading and Plotting the house sales data
Step2: Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as
Step5: To test your feature derivartive run the following
Step6: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
Step7: Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature
Step8: Load the training set and test set.
Step9: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
Step10: Let's set the parameters for our optimization
Step11: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step12: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step13: This code will plot the two learned models. (The green line is for the model with no regularization and the red line is for the one with high regularization.)
Step14: Compute the RSS on the TEST data for the following three sets of weights
Step15: QUIZ QUESTIONS
Q1
Step16: Q2
Step17: Q3
Step18: Initial weights learned with no regularization performed best on the Test Set (lowest RSS value)
Running a multiple regression with L2 penalty
Let us now consider a model with 2 features
Step19: We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
Step20: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step21: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step22: Compute the RSS on the TEST data for the following three sets of weights
Step23: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
Step24: Weights with high regularization perform best on 1st house in Test Set
QUIZ QUESTIONS
Q1
Step25: Q2 | Python Code:
import graphlab
import numpy as np
import pandas as pd
from sklearn import linear_model
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
Explanation: Regression Week 4: Ridge Regression (gradient descent)
In this notebook, we will implement ridge regression via gradient descent. You will:
* Convert an SFrame into a Numpy array
* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty
Importing Libraries
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
plt.figure(figsize=(8,6))
plt.plot(sales['sqft_living'], sales['price'],'.')
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('King County, Seattle House Price Data', fontsize=18)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Loading and Plotting the house sales data
End of explanation
def get_numpy_data(input_sframe, features, output):
input_sframe['constant'] = 1 # Adding column 'constant' to input SFrame with all values = 1.0
features = ['constant'] + features # Adding 'constant' to List of features
# Selecting the columns for the feature_matrux and output_array
features_sframe = input_sframe[features]
output_sarray = input_sframe[output]
# Converting sframes to numpy.ndarrays
feature_matrix = features_sframe.to_numpy()
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_num_data() from the second notebook of Week 2.
End of explanation
def predict_output(feature_matrix, weights):
predictions = np.dot(feature_matrix, weights)
return predictions
Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
# If feature_is_constant is True, derivative is twice the dot product of errors and feature
if feature_is_constant==True:
derivative = 2.0*np.dot(errors, feature)
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
else:
derivative = 2.0*np.dot(errors, feature) + 2.0*l2_penalty*weight
return derivative
Explanation: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as:
2*SUM[ error*[feature_i] ].
The derivative of the regularization term with respect to w[i] is:
2*l2_penalty*w[i].
Summing both, we get
2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus 2*l2_penalty*w[i].
We will not regularize the constant. Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the 2*l2_penalty*w[0] term).
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus 2*l2_penalty*w[i].
With this in mind, complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call feature_is_constant which you should set to True when computing the derivative of the constant and False otherwise.
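The same derivatives, written compactly (a restatement of the formulas above, with $\lambda$ denoting l2_penalty, $\hat y_j$ the prediction for data point $j$, and $x_{j,i}$ feature $i$ of that point; the constant $w_0$ is not regularized):
$$\frac{\partial\,\mathrm{Cost}(w)}{\partial w_i} = 2\sum_{j}(\hat y_j - y_j)\,x_{j,i} + 2\lambda w_i \;\;(i \ge 1), \qquad \frac{\partial\,\mathrm{Cost}(w)}{\partial w_0} = 2\sum_{j}(\hat y_j - y_j).$$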
End of explanation
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
Explanation: To test your feature derivative run the following:
End of explanation
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations):
weights = np.array(initial_weights) # make sure it's a numpy array
iteration_count = 0
#while not reached maximum number of iterations:
while iteration_count < max_iterations:
predictions = predict_output(feature_matrix, weights) # computing predictions w/ feature_matrix and weights
errors = predictions - output # compute the errors as predictions - output
# loop over each weight
for i in xrange(len(weights)):
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
if i == 0:
derivative = feature_derivative_ridge(errors, feature_matrix[:,0], weights[0], l2_penalty, True)
else:
derivative = feature_derivative_ridge(errors, feature_matrix[:,i], weights[i], l2_penalty, False)
weights[i] = weights[i] - step_size*derivative
# Incrementing the iteration count
iteration_count += 1
return weights
Explanation: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set to 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
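In symbols, each pass applies the usual update (with $\eta$ the step_size, for $t = 0, \dots,$ max_iterations $- 1$):
$$w_i^{(t+1)} = w_i^{(t)} - \eta\,\frac{\partial\,\mathrm{Cost}(w)}{\partial w_i}\bigg|_{w = w^{(t)}}.$$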
End of explanation
simple_features = ['sqft_living']
my_output = 'price'
Explanation: Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have smaller L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Load the training set and test set.
End of explanation
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
Explanation: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
End of explanation
initial_weights = np.array([0.0, 0.0])
step_size = 1e-12
max_iterations=1000
Explanation: Let's set the parameters for our optimization:
End of explanation
l2_penalty = 0.0
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_0_penalty
we'll use them later.
End of explanation
l2_penalty = 1.0e11
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_high_penalty
we'll use them later.
End of explanation
plt.figure(figsize=(8,6))
plt.plot(simple_feature_matrix[:,1],output,'.', label= 'House Price Data')
plt.hold(True)
plt.plot(simple_feature_matrix[:,1], predict_output(simple_feature_matrix, simple_weights_0_penalty),'-', label= 'No L2 Penalty')
plt.plot(simple_feature_matrix[:,1], predict_output(simple_feature_matrix, simple_weights_high_penalty),'-', label= 'Large L2 Penalty')
plt.hold(False)
plt.legend(loc='upper left', fontsize=16)
plt.xlabel('Living Area (ft^2)', fontsize=16)
plt.ylabel('House Price ($)', fontsize=16)
plt.title('King County, Seattle House Price Data', fontsize=18)
plt.axis([0.0, 14000.0, 0.0, 8000000.0])
plt.show()
Explanation: This code will plot the two learned models. (The green line is for the model with no regularization and the red line is for the one with high regularization.)
End of explanation
test_pred_weights_0 = predict_output(simple_test_feature_matrix, initial_weights)
RSS_test_weights_0 = sum( (test_output - test_pred_weights_0)**2.0 )
test_pred_no_reg = predict_output(simple_test_feature_matrix, simple_weights_0_penalty)
RSS_test_no_reg = sum( (test_output - test_pred_no_reg)**2.0 )
test_pred_high_reg = predict_output(simple_test_feature_matrix, simple_weights_high_penalty)
RSS_test_high_reg = sum( (test_output - test_pred_high_reg)**2.0 )
Explanation: Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
End of explanation
print 'No Regularization sqft_living weight: %.1f' %(simple_weights_0_penalty[1])
print 'High Regularization sqft_living weight: %.1f' %(simple_weights_high_penalty[1])
Explanation: QUIZ QUESTIONS
Q1: What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
End of explanation
print 'Line with No Regularization is steeper'
Explanation: Q2: Comparing the lines you fit with no regularization versus high regularization, which one is steeper?
End of explanation
print 'Test set RSS with initial weights all set to 0.0: %.1e' %(RSS_test_weights_0)
print 'Test set RSS with initial weights set to weights learned with no regularization: %.1e' %(RSS_test_no_reg)
print 'Test set RSS with initial weights set to weights learned with high regularization: %.1e' %(RSS_test_high_reg)
Explanation: Q3: What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
End of explanation
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
Explanation: Initial weights learned with no regularization performed best on the Test Set (lowest RSS value)
Running a multiple regression with L2 penalty
Let us now consider a model with 2 features: ['sqft_living', 'sqft_living15'].
First, create Numpy versions of your training and test data with these two features.
End of explanation
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
Explanation: We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
End of explanation
l2_penalty = 0.0
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_0_penalty
End of explanation
l2_penalty = 1.0e11
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations)
Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_high_penalty
End of explanation
test_pred_mul_feat_weights_0 = predict_output(test_feature_matrix, initial_weights)
RSS_test_mul_feat_weights_0 = sum( (test_output - test_pred_mul_feat_weights_0)**2.0 )
test_pred_mul_feat_no_reg = predict_output(test_feature_matrix, multiple_weights_0_penalty)
RSS_test_mul_feat_no_reg = sum( (test_output - test_pred_mul_feat_no_reg)**2.0 )
test_pred_mul_feat_high_reg = predict_output(test_feature_matrix, multiple_weights_high_penalty)
RSS_test_mul_feat_high_reg = sum( (test_output - test_pred_mul_feat_high_reg)**2.0 )
Explanation: Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
End of explanation
print 'Pred. price of 1st house in Test Set with weights learned with no reg.: %.2f' %(test_pred_mul_feat_no_reg[0])
print 'Pred. price of 1st house in Test Set with weights learned with high reg.: %.2f' %(test_pred_mul_feat_high_reg[0])
print 'Pred. price - actual price of 1st house in Test Set, using weights w/ no reg.: %.2f' %(abs(test_output[0] - test_pred_mul_feat_no_reg[0]))
print 'Pred. price - actual price of 1st house in Test Set, using weights w/ high reg.: %.2f' %(abs(test_output[0] - test_pred_mul_feat_high_reg[0]))
Explanation: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
End of explanation
print 'No Regularization sqft_living weight: %.1f' %(multiple_weights_0_penalty[1])
print 'High Regularization sqft_living weight: %.1f' %(multiple_weights_high_penalty[1])
Explanation: Weights with high regularization perform best on 1st house in Test Set
QUIZ QUESTIONS
Q1: What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
End of explanation
print 'Test set RSS with initial weights all set to 0.0: %.1e' %(RSS_test_mul_feat_weights_0)
print 'Test set RSS with initial weights set to weights learned with no regularization: %.1e' %(RSS_test_mul_feat_no_reg)
print 'Test set RSS with initial weights set to weights learned with high regularization: %.1e' %(RSS_test_mul_feat_high_reg)
Explanation: Q2: What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
End of explanation |
12,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Azimuthal Integral
Introduction
This tutorial demonstrates how to acquire an azimuthal integral profile from a multidimensional data set in pyXem.
The data set is a 10x10x256x256 data set of a polycrystalline gold film acquired using a Medipix3 256 by 256 pixel detector.
This functionality has been checked to run in pyxem-0.13.2 (May 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here
Step1: Assert that pyxem is properly loaded into hyperspy..
Step2: Load polycrystalline SED data
Step3: Check the data size and type. It is also a good idea to look at your data as well just to make sure it loaded properly. Then we will set the diffraction and scan calibration based on values found by calibrating the dataset to a known standard.
Step4: <a id='c1'></a>
Case 1
Step5: <a id='c2'></a>
Case 2
Step6: Calibration
Step7: <a id='c3'></a>
Additional Parameters
Step8: Let's take a different example using an obviously distorted ring pattern. The goal is to apply an affine transformation such that the rings become lines with minimal distortion. In general, just applying an azimuthal integration2d to your diffraction image is a good way to determine if there is any distortion in the image. Waves in the image are much more obvious than slight ellipticity in an image with Cartesian coordinates.
Step9: Masking
Below we show how a mask might be applied. For the time being it is much faster to have mask, center and affine be numpy arrays rather than BaseSignals as making a new integration object for each is quite a costly computation.
Step10: Methods
This final section gives a breif explination of the different methods avaible through pyfai. They all also have the option to correctSolidAngle. It is best just to show you the output for each method and from there you can determine which you might perfer. The correctSolid angle parameter is largely unimportant in electron microcopy becuase of the size of the Ewald's Sphere | Python Code:
%matplotlib inline
import hyperspy.api as hs
import numpy as np
import matplotlib.pyplot as plt
Explanation: Azimuthal Integral
Introduction
This tutorial demonstrates how to acquire an azimuthal integral profile from a multidimensional data set in pyXem.
The data set is a 10x10x256x256 data set of a polycrystalline gold film acquired using a Medipix3 256 by 256 pixel detector.
This functionality has been checked to run in pyxem-0.13.2 (May 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
Contents
At this point things are mostly set up. There are a couple of different workflows moving forward which allow the user a fair degree of control. In general the key parameter is the unit keyword in the integration. Let's show the different use cases and you can choose which works best in your workflow. Case 1 and Case 2 should be sufficient for most use cases, but case 3 gives you additional functionality by allowing the user to predefine their detector to their specifications.
<a href='#c0'> Loading and Inspection</a>
<a href='#c1'> PyXEM units based integration</a>
<a href='#c2'> PyFAI units based integration</a>
<a href='#c3'> Additional Parameters (center, affine, masks, methods)</a>
<a id='c0'></a>
0. Loading and Inspection
Import pyxem and other required libraries
End of explanation
print(hs.print_known_signal_types())
Explanation: Assert that pyxem is properly loaded into hyperspy..
End of explanation
dp = hs.load("./data/07/azimuthal_integration.hspy", signal_type='electron_diffraction')
Explanation: Load polycrystalline SED data
End of explanation
calib = 0.009197
dp.diffraction_calibration=calib
dp.scan_calibration= 5
dp.axes_manager
print("Signal type and dimensions: ", dp)
dp.inav[1,1].plot(vmax=1000)
plt.show()
Explanation: Check the data size and type. It is also a good idea to look at your data as well just to make sure it loaded properly. Then we will set the diffraction and scan calibration based on values found by calibrating the dataset to a known standard.
End of explanation
dp.unit = "k_A^-1"
dp.beam_energy = 200 # in 200 keV
dp.axes_manager # see how the units now are set for the signal axis.
#if we want to see more about the function
dp.set_ai?
dp.set_ai()
integration = dp.get_azimuthal_integral1d(npt=100, )
integration2d = dp.get_azimuthal_integral2d(npt=100)
# Excluding the zero beam...
integration.inav[0,0].isig[10:].plot()
integration2d.inav[0,0].isig[:,10:].plot()
plt.show()
Explanation: <a id='c1'></a>
Case 1: PyXEM units based integration.
The key difference between Case 1 and Case 2 is that for Case 1 the units are already set in PyXEM, so the detector distance and detector setup are just handled by creating a generic setup that aligns with how pyXEM deals with calibrations.
All of these integrations are done by pyFAI, so it is worth discussing how pyFAI does its integration.
PyFAI Integration
In pyFAI there are three geometries that are of interest. The best way to think about them is as two concentric spheres with a real detector at the apex of one sphere and an imaginary detector at the apex of the other sphere. This gives rise to two corrections that are applied to the data.
1- correctSolidAngle: This corrects for a sphere being projected onto a flat detector. It makes pixel values farther from the center more intense to account for their lower measured intensity.
2- Ewald Sphere Correction: This takes into account the change in intensity due to the Ewald sphere as well as the distortion in the spacing.
What connects these two spheres is that their solid angles are equal, which gives rise to the 2th_deg and 2th_rad formalism. By their nature, these two units ignore the Ewald sphere correction.
To simplify things, when pyXEM deals with this integration we assume a flat detector and a constant radius for one sphere, and then just change the pixel size of the detector to reproduce the desired calibration.
The key things that need to be set are the "unit" and the "beam_energy", which are both attributes that can be set with:
dp.unit = "k_A^-1"
dp.beam_energy = 200 # keV
The other acceptable units are "k_nm^-1", "q_nm^-1", "q_A^-1", "2th_deg", "2th_rad".
- "q_nm^-1" (q inverse spacing, mostly used with Xray data)
- "q_A^-1"
- "k_nm^-1" (k inverse spacing, mostly used for electron diffraction data, factor of 2 pi less than q)
- "k_A^-1"
- "2th_deg" (degree spacing, doesn't account for ewald sphere)
- "2th_rad" (radial spacing, doesn't account for ewald sphere)
Note:
For electron diffraction the Ewald sphere is largely not considered. Rather, some reflection in a standard material is used to calibrate the pixel size. Then that pixel size is used consistently, assuming it is constant. This is mostly correct, and largely what we do in the first case, except we use that calibration to define one point on the Ewald sphere and then calculate the scale from there.
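For reference, the quantities behind these units are related by the standard definitions (not specific to pyXEM), with $2\theta$ the scattering angle and $\lambda$ the electron wavelength:
$$k = \frac{1}{d} = \frac{2\sin\theta}{\lambda}, \qquad q = 2\pi k = \frac{4\pi\sin\theta}{\lambda}.$$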
End of explanation
from pyxem.detectors import Medipix256x256Detector
detector = Medipix256x256Detector()
print(detector)
Explanation: <a id='c2'></a>
Case 2: PyFAI based Integration
The second case involves setting up your own detector, detector distance, wavelength, etc. If you have a good understanding of pyFAI then this might actually be the best route. There are a significant number of parameters to play with when using pyFAI, however, so there is a little bit of extra setup involved and, in my opinion, a little more difficulty in getting more advanced cases working. There are a couple of detectors already set up in pyxem.detectors and more in pyFAI, so check there first to see what other people have done.
End of explanation
# Reading the camera length from microscope
camera_length = 0.24 #in metres
# Calculating camera length from real pixel size and reciprocal pixel size
wavelength = 2.5079e-12
pix_size = 55e-6 #change to 1 if using the GenericFlatDetector()
camera_length = pix_size / (wavelength * calib * 1e10)
print('Camera Length:', camera_length)
from pyFAI.azimuthalIntegrator import AzimuthalIntegrator
center=(128,128)
ai = AzimuthalIntegrator(dist=camera_length, detector=detector, wavelength=wavelength)
ai.setFit2D(directDist=camera_length*1000, centerX=center[1], centerY=center[0])
dp.metadata.set_item("Signal.ai", ai)
integration1d = dp.get_azimuthal_integral1d(npt =100)
integration2d = dp.get_azimuthal_integral2d(npt =100)
integration1d.inav[1,1].isig[0.2:].plot()
integration2d.inav[1,1].isig[:, 0.2:].plot()
Explanation: Calibration:
In addition to specifying the detector, to accurately calculate the curvature of the Ewald Sphere, it is important to specify a calibration. In addition, the wavelength is specified to do that calculation.
The calibration is calculated by knowing the camera length. Alternatively, by assuming no curvature in the detector, it is possible to calculate the camera length from an "inverse angstroms per pixel" calibration value. We suggest calibrating to a gold pattern for a calibration value and using the latter (for electron microscopy).
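Roughly, in the small-angle limit the relation used in the code below is
$$L_{\mathrm{camera}} \approx \frac{p}{\lambda \cdot \mathrm{calib}},$$
where $p$ is the physical pixel size, $\lambda$ the wavelength and calib the reciprocal-space calibration per pixel; the factor of 1e10 in the code just converts the $\mathrm{\AA}^{-1}$ calibration into $\mathrm{m}^{-1}$.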
End of explanation
affine = np.array([[0.99978285, 0.00341758, 0.],
[0.00341758, 0.94621262, 0.],
[0., 0., 1.]])
dp.set_ai(affine=affine)
integration = dp.get_azimuthal_integral1d(npt=100)
integration2d = dp.get_azimuthal_integral2d(npt=100)
# Excluding the zero beam...
integration.inav[0,0].isig[0.2:].plot()
integration2d.inav[0,0].isig[:,.2:].plot()
plt.show()
Explanation: <a id='c3'></a>
Additional Parameters:
There are a couple of different things you can play around with at this point. For one, there are three additional parameters that are useful for more advanced calibrations. These will work in all three of the cases, but with case 3 some of these parameters can be initialized as you instantiate the detector. For the most part, though, these should be passed in when the method is called.
The three parameters are:
center - The center of the diffraction pattern if it is not the center of the image
affine - A 3x3 matrix which represents an affine transformation to the signal.
mask - A mask with the same size as the signal.
These three parameters can also be passed as BaseSignal objects from hyperspy with the same size navigation axes as the original signal. In this case they will be iterated with the diffraction signal and a different calibration is applied to every diffraction pattern.
Affine
This applies an affine transformation to the dataset before the integration.
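Loosely speaking, the 3x3 matrix acts on the detector coordinates in homogeneous form, so the upper-left 2x2 block carries the stretch/shear that corrects the ellipticity (a sketch of the convention; see the pyFAI/pyxem documentation for the exact definition):
$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & 0 \\ a_{21} & a_{22} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}.$$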
End of explanation
from diffsims.utils.ring_pattern_utils import generate_ring_pattern
x0 = [95, 1200, 2.8, 450, 1.5, 10]
ring_data = generate_ring_pattern(
image_size=256,
mask=True,
mask_radius=10,
scale=x0[0],
amplitude=x0[1],
spread=x0[2],
direct_beam_amplitude=x0[3],
asymmetry=x0[4],
rotation=x0[5],
)
import pyxem
import numpy as np
d =pyxem.signals.ElectronDiffraction2D(data=ring_data)
a = np.asarray(
[[1.06651526, 0.10258988, 0.0], [0.10258988, 1.15822961, 0.0], [0.0, 0.0, 1.0]]
)
d.plot()
d.unit = "2th_deg"
d.set_ai(affine=a)
d.get_azimuthal_integral2d(npt=100, correctSolidAngle=False).plot()
Explanation: Let's take a different example using an obviously distorted ring pattern. The goal is to apply an affine transformation such that the rings become lines with minimal distortion. In general, just applying an azimuthal integration2d to your diffraction image is a good way to determine if there is any distortion in the image. Waves in the image are much more obvious than slight ellipticity in an image with Cartesian coordinates.
End of explanation
mask = dp.get_direct_beam_mask(radius=30)
integration = dp.get_azimuthal_integral1d(npt=100, mask=mask.data)
integration2d = dp.get_azimuthal_integral2d(npt=100, mask=mask.data)
integration.inav[1,1].isig[:].plot()
integration2d.inav[1,1].isig[:,:].plot()
Explanation: Masking
Below we show how a mask might be applied. For the time being it is much faster to have mask, center and affine be numpy arrays rather than BaseSignals as making a new integration object for each is quite a costly computation.
End of explanation
methods = ["numpy", "cython", "BBox","splitpixel", "lut", "csr", "nosplit_csr", "full_csr"]
littledp = dp.inav[1,1]
import time
integrations= []
times = []
for method in methods:
tic = time.time()
no_sa = littledp.get_azimuthal_integral2d(npt=100, method=method, correctSolidAngle=False)
toc = time.time()
sa = littledp.get_azimuthal_integral2d(npt=100, method=method, correctSolidAngle=True)
toc2 =time.time()
integrations.append(no_sa)
integrations.append(sa)
times.append([toc-tic, toc2-toc])
lab = ["numpy","numpy_SA", "cython","cython_SA", "BBox", "BBox_SA", "splitpixel","splitpixel_SA",
"lut","lut_SA", "csr","csr_SA", "nosplit_csr","nosplit_csr_SA", "full_csr","full_csr_SA"]
lab_time = [ l+" ("+str(round(t[0],2))+" sec)" for l,t in zip(lab,times)]
f = plt.figure(figsize=(20,30))
hs.plot.plot_images(integrations,vmax=(1000), per_row=2, fig=f, label=lab_time)
plt.show()
Explanation: Methods
This final section gives a brief explanation of the different methods available through pyFAI. They all also have the option to correctSolidAngle. It is best just to show you the output for each method, and from there you can determine which you might prefer. The correctSolidAngle parameter is largely unimportant in electron microscopy because of the size of the Ewald sphere
End of explanation |
12,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameter selection, Validation & Testing
Most models have parameters that influence how complex a model they can learn. Remember using KNeighborsRegressor.
If we change the number of neighbors we consider, we get a smoother and smoother prediction
Step1: In the above figure, we see fits for three different values of n_neighbors.
For n_neighbors=2, the data is overfit, the model is too flexible and can adjust too much to the noise in the training data. For n_neighbors=20, the model is not flexible enough, and can not model the variation in the data appropriately.
In the middle, for n_neighbors = 5, we have found a good mid-point. It fits
the data fairly well, and does not suffer from the overfit or underfit
problems seen in the figures on either side. What we would like is a
way to quantitatively identify overfit and underfit, and optimize the
hyperparameters (in this case, the polynomial degree d) in order to
determine the best algorithm.
We trade off remembering too much about the particularities and noise of the training data vs. not modeling enough of the variability. This is a trade-off that needs to be made in basically every machine learning application and is a central concept, called bias-variance-tradeoff or "overfitting vs underfitting".
Hyperparameters, Over-fitting, and Under-fitting
Unfortunately, there is no general rule how to find the sweet spot, and so machine learning practitioners have to find the best trade-off of model-complexity and generalization by trying several parameter settings.
Most commonly this is done using a brute force search, for example over multiple values of n_neighbors
Step2: There is a function in scikit-learn, called validation_plot to reproduce the cartoon figure above. It plots one parameter, such as the number of neighbors, against training and validation error (using cross-validation)
Step3: Note that many neighbors mean a "smooth" or "simple" model, so the plot is the mirror image of the diagram above.
If multiple parameters are important, like the parameters C and gamma in an SVM (more about that later), all possible combinations are tried
Step4: As this is such a very common pattern, there is a built-in class for this in scikit-learn, GridSearchCV. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train.
The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested.
Step5: One of the great things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVR above, and creates a new estimator, that behaves exactly the same - in this case, like a regressor.
So we can call fit on it, to train it
Step6: What fit does is a bit more involved then what we did above. First, it runs the same loop with cross-validation, to find the best parameter combination.
Once it has the best combination, it runs fit again on all data passed to fit (without cross-validation), to built a single new model using the best parameter setting.
Then, as with all models, we can use predict or score
Step7: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best score in the best_score_ attribute
Step8: There is a problem with using this score for evaluation, however. You might be making what is called a multiple hypothesis testing error. If you try very many parameter settings, some of them will work better just by chance, and the score that you obtained might not reflect how your model would perform on new unseen data.
Therefore, it is good to split off a separate test-set before performing grid-search. This pattern can be seen as a training-validation-test split, and is common in machine learning
Step9: Some practitioners go for an easier scheme, splitting the data simply into three parts, training, validation and testing. This is a possible alternative if your training set is very large, or it is infeasible to train many models using cross-validation because training a model takes very long.
You can do this with scikit-learn for example by splitting of a test-set and then applying GridSearchCV with ShuffleSplit cross-validation with a single iteration | Python Code:
from figures import plot_kneighbors_regularization
plot_kneighbors_regularization()
Explanation: Parameter selection, Validation & Testing
Most models have parameters that influence how complex a model they can learn. Remember using KNeighborsRegressor.
If we change the number of neighbors we consider, we get a smoother and smoother prediction:
End of explanation
from sklearn.cross_validation import cross_val_score, KFold
from sklearn.neighbors import KNeighborsRegressor
# generate toy dataset:
x = np.linspace(-3, 3, 100)
y = np.sin(4 * x) + x + np.random.normal(size=len(x))
X = x[:, np.newaxis]
cv = KFold(n=len(x), shuffle=True)
# for each parameter setting do cross_validation:
for n_neighbors in [1, 3, 5, 10, 20]:
scores = cross_val_score(KNeighborsRegressor(n_neighbors=n_neighbors), X, y, cv=cv)
print("n_neighbors: %d, average score: %f" % (n_neighbors, np.mean(scores)))
Explanation: In the above figure, we see fits for three different values of n_neighbors.
For n_neighbors=2, the data is overfit, the model is too flexible and can adjust too much to the noise in the training data. For n_neighbors=20, the model is not flexible enough, and can not model the variation in the data appropriately.
In the middle, for n_neighbors = 5, we have found a good mid-point. It fits
the data fairly well, and does not suffer from the overfit or underfit
problems seen in the figures on either side. What we would like is a
way to quantitatively identify overfit and underfit, and optimize the
hyperparameters (in this case, the number of neighbors) in order to
determine the best algorithm.
We trade off remembering too much about the particularities and noise of the training data vs. not modeling enough of the variability. This is a trade-off that needs to be made in basically every machine learning application and is a central concept, called bias-variance-tradeoff or "overfitting vs underfitting".
Hyperparameters, Over-fitting, and Under-fitting
Unfortunately, there is no general rule how to find the sweet spot, and so machine learning practitioners have to find the best trade-off of model-complexity and generalization by trying several parameter settings.
Most commonly this is done using a brute force search, for example over multiple values of n_neighbors:
End of explanation
from sklearn.learning_curve import validation_curve
n_neighbors = [1, 3, 5, 10, 20, 50]
train_errors, test_errors = validation_curve(KNeighborsRegressor(), X, y, param_name="n_neighbors", param_range=n_neighbors)
plt.plot(n_neighbors, train_errors.mean(axis=1), label="train error")
plt.plot(n_neighbors, test_errors.mean(axis=1), label="test error")
plt.legend(loc="best")
Explanation: There is a function in scikit-learn, called validation_curve, to reproduce the cartoon figure above. It plots one parameter, such as the number of neighbors, against training and validation error (using cross-validation):
End of explanation
from sklearn.cross_validation import cross_val_score, KFold
from sklearn.svm import SVR
# for each parameter setting do cross-validation:
for C in [0.001, 0.01, 0.1, 1, 10]:
for gamma in [0.001, 0.01, 0.1, 1]:
scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=cv)
print("C: %f, gamma: %f, average score: %f" % (C, gamma, np.mean(scores)))
Explanation: Note that many neighbors mean a "smooth" or "simple" model, so the plot is the mirror image of the diagram above.
If multiple parameters are important, like the parameters C and gamma in an SVM (more about that later), all possible combinations are tried:
End of explanation
from sklearn.grid_search import GridSearchCV
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv, verbose=3)
Explanation: As this is such a very common pattern, there is a built-in class for this in scikit-learn, GridSearchCV. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train.
The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested.
End of explanation
grid.fit(X, y)
Explanation: One of the great things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVR above, and creates a new estimator, that behaves exactly the same - in this case, like a regressor.
So we can call fit on it, to train it:
End of explanation
grid.predict(X)
Explanation: What fit does is a bit more involved than what we did above. First, it runs the same loop with cross-validation, to find the best parameter combination.
Once it has the best combination, it runs fit again on all data passed to fit (without cross-validation), to build a single new model using the best parameter setting.
Then, as with all models, we can use predict or score:
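As a rough, hand-rolled sketch of those two phases (illustrative only, not the actual GridSearchCV implementation; the helper name is made up):
import itertools
import numpy as np
from sklearn.cross_validation import cross_val_score
from sklearn.svm import SVR

def manual_grid_search(param_grid, X, y, cv):
    best_score, best_params = -np.inf, None
    keys = sorted(param_grid)
    # phase 1: cross-validate every parameter combination
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = np.mean(cross_val_score(SVR(**params), X, y, cv=cv))
        if score > best_score:
            best_score, best_params = score, params
    # phase 2: refit one model on all the data with the best setting
    best_estimator = SVR(**best_params).fit(X, y)
    return best_estimator, best_params, best_score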
End of explanation
print(grid.best_score_)
print(grid.best_params_)
Explanation: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best score in the best_score_ attribute:
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
cv = KFold(n=len(X_train), n_folds=10, shuffle=True)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: There is a problem with using this score for evaluation, however. You might be making what is called a multiple hypothesis testing error. If you try very many parameter settings, some of them will work better just by chance, and the score that you obtained might not reflect how your model would perform on new unseen data.
Therefore, it is good to split off a separate test-set before performing grid-search. This pattern can be seen as a training-validation-test split, and is common in machine learning:
We can do this very easily by splitting off some test data using train_test_split, training GridSearchCV on the training set, and applying the score method to the test set:
End of explanation
from sklearn.cross_validation import train_test_split, ShuffleSplit
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
single_split_cv = ShuffleSplit(len(X_train), 1)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=single_split_cv, verbose=3)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: Some practitioners go for an easier scheme, splitting the data simply into three parts, training, validation and testing. This is a possible alternative if your training set is very large, or it is infeasible to train many models using cross-validation because training a model takes very long.
You can do this with scikit-learn for example by splitting off a test-set and then applying GridSearchCV with ShuffleSplit cross-validation with a single iteration:
End of explanation |
12,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create The Data
The dataset used in this tutorial is the famous iris dataset. The Iris target data contains 50 samples from three species of Iris, y and four feature variables, X.
Step2: View The Data
Step3: Split The Data Into Training And Test Sets
Step4: Train A Random Forest Classifier
Step5: The scores above are the importance scores for each variable. There are two things to note. First, all the importance scores add up to 100%. Second, Petal Length and Petal Width are far more important than the other two features. Combined, Petal Length and Petal Width have an importance of ~0.86! Clearly these are the most importance features.
Identify And Select Most Important Features
Step6: Create A Data Subset With Only The Most Important Features
Step7: Train A New Random Forest Classifier Using Only Most Important Features
Step8: Compare The Accuracy Of Our Full Feature Classifier To Our Limited Feature Classifier | Python Code:
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
Explanation: Title: Feature Selection Using Random Forest
Slug: feature_selection_using_random_forest
Summary: Feature Selection Using Random Forest with scikit-learn.
Date: 2016-12-01 12:00
Category: Machine Learning
Tags: Feature Selection
Authors: Chris Albon
Often in data science we have hundreds or even millions of features and we want a way to create a model that only includes the most important features. This has three benefits. First, we make our model simpler to interpret. Second, we can reduce the variance of the model, and therefore overfitting. Finally, we can reduce the computational cost (and time) of training a model. The process of identifying only the most relevant features is called "feature selection."
Random Forests are often used for feature selection in a data science workflow. The reason is that the tree-based strategies used by random forests naturally rank features by how well they improve the purity of the nodes, i.e. by the mean decrease in impurity over all trees (the Gini importance). Nodes with the greatest decrease in impurity happen at the start of the trees, while nodes with the least decrease in impurity occur at the end of trees. Thus, by pruning trees below a particular node, we can create a subset of the most important features.
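For concreteness, these are the standard definitions behind the importances reported by feature_importances_ (sketched here; see the scikit-learn documentation for details): the Gini impurity of a node $t$ is
$$G(t) = 1 - \sum_{k} p_k(t)^2,$$
and the importance of a feature $f$ is proportional to the impurity decrease summed over the nodes that split on $f$,
$$\mathrm{importance}(f) \propto \sum_{t \,\in\, \text{splits on } f} \frac{N_t}{N}\left( G(t) - \frac{N_{t_L}}{N_t} G(t_L) - \frac{N_{t_R}}{N_t} G(t_R) \right),$$
averaged over the trees in the forest and normalized so the importances sum to 1.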
In this tutorial we will:
Prepare the dataset
Train a random forest classifier
Identify the most important features
Create a new 'limited featured' dataset containing only those features
Train a second classifier on this new dataset
Compare the accuracy of the 'full featured' classifier to the accuracy of the 'limited featured' classifier
Note: There are other definitions of importance, however in this tutorial we limit our discussion to gini importance.
Preliminaries
End of explanation
# Load the iris dataset
iris = datasets.load_iris()
# Create a list of feature names
feat_labels = ['Sepal Length','Sepal Width','Petal Length','Petal Width']
# Create X from the features
X = iris.data
# Create y from output
y = iris.target
Explanation: Create The Data
The dataset used in this tutorial is the famous iris dataset. The Iris data contains 50 samples from each of three species of Iris as the target y, along with four feature variables as X.
End of explanation
# View the features
X[0:5]
# View the target data
y
Explanation: View The Data
End of explanation
# Split the data into 40% test and 60% training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
Explanation: Split The Data Into Training And Test Sets
End of explanation
# Create a random forest classifier
clf = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
# Train the classifier
clf.fit(X_train, y_train)
# Print the name and gini importance of each feature
for feature in zip(feat_labels, clf.feature_importances_):
print(feature)
Explanation: Train A Random Forest Classifier
End of explanation
# Create a selector object that will use the random forest classifier to identify
# features that have an importance of more than 0.15
sfm = SelectFromModel(clf, threshold=0.15)
# Train the selector
sfm.fit(X_train, y_train)
# Print the names of the most important features
for feature_list_index in sfm.get_support(indices=True):
print(feat_labels[feature_list_index])
Explanation: The scores above are the importance scores for each variable. There are two things to note. First, all the importance scores add up to 100%. Second, Petal Length and Petal Width are far more important than the other two features. Combined, Petal Length and Petal Width have an importance of ~0.86! Clearly these are the most important features.
Identify And Select Most Important Features
End of explanation
# Transform the data to create a new dataset containing only the most important features
# Note: We have to apply the transform to both the training X and test X data.
X_important_train = sfm.transform(X_train)
X_important_test = sfm.transform(X_test)
Explanation: Create A Data Subset With Only The Most Important Features
End of explanation
# Create a new random forest classifier for the most important features
clf_important = RandomForestClassifier(n_estimators=10000, random_state=0, n_jobs=-1)
# Train the new classifier on the new dataset containing the most important features
clf_important.fit(X_important_train, y_train)
Explanation: Train A New Random Forest Classifier Using Only Most Important Features
End of explanation
# Apply The Full Featured Classifier To The Test Data
y_pred = clf.predict(X_test)
# View The Accuracy Of Our Full Feature (4 Features) Model
accuracy_score(y_test, y_pred)
# Apply The Full Featured Classifier To The Test Data
y_important_pred = clf_important.predict(X_important_test)
# View The Accuracy Of Our Limited Feature (2 Features) Model
accuracy_score(y_test, y_important_pred)
Explanation: Compare The Accuracy Of Our Full Feature Classifier To Our Limited Feature Classifier
End of explanation |
12,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Lime with Pytorch
In this tutorial we will show how to use Lime framework with Pytorch. Specifically, we will use Lime to explain the prediction generated by one of the pretrained ImageNet models.
Let's start with importing our dependencies. This code is tested with Pytorch 1.0 but should work with older versions as well.
Step1: Load our test image and see how it looks.
Step2: We need to convert this image to Pytorch tensor and also apply whitening as used by our pretrained model.
Step3: Load a pretrained ImageNet model (Inception v3 in the code below) available in Pytorch.
Step4: Load label texts for ImageNet predictions so we know what model is predicting
Step5: Get the prediction for our image.
Step6: Predictions we got are logits. Let's pass that through softmax to get probabilities and class labels for the top 5 predictions.
Step7: We are getting ready to use Lime. Lime produces the array of images from the original input image by a perturbation algorithm. So we need to provide two things
Step8: Now we are ready to define the classification function that Lime needs. The input to this function is a numpy array of images where each image is an ndarray of shape (height, width, channel). The output is a numpy array of shape (image index, classes) where each value in the array should be the probability for that image, class combination.
Step9: Let's test our function for the sample image.
Step10: Import lime and create explanation for this prediciton.
Step11: Let's use mask on image and see the areas that are encouraging the top prediction.
Step12: Let's turn on areas that contributes against the top prediction. | Python Code:
import matplotlib.pyplot as plt
from PIL import Image
import torch.nn as nn
import numpy as np
import os, json
import torch
from torchvision import models, transforms
from torch.autograd import Variable
import torch.nn.functional as F
Explanation: Using Lime with Pytorch
In this tutorial we will show how to use Lime framework with Pytorch. Specifically, we will use Lime to explain the prediction generated by one of the pretrained ImageNet models.
Let's start with importing our dependencies. This code is tested with Pytorch 1.0 but should work with older versions as well.
End of explanation
def get_image(path):
with open(os.path.abspath(path), 'rb') as f:
with Image.open(f) as img:
return img.convert('RGB')
img = get_image('./data/dogs.png')
plt.imshow(img)
Explanation: Load our test image and see how it looks.
End of explanation
# resize and take the center part of image to what our model expects
def get_input_transform():
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transf = transforms.Compose([
transforms.Resize((256, 256)),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
])
return transf
def get_input_tensors(img):
transf = get_input_transform()
# unsqeeze converts single image to batch of 1
return transf(img).unsqueeze(0)
Explanation: We need to convert this image to Pytorch tensor and also apply whitening as used by our pretrained model.
End of explanation
model = models.inception_v3(pretrained=True)
Explanation: Load a pretrained ImageNet model (Inception v3) available in Pytorch.
End of explanation
idx2label, cls2label, cls2idx = [], {}, {}
with open(os.path.abspath('./data/imagenet_class_index.json'), 'r') as read_file:
class_idx = json.load(read_file)
idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
cls2label = {class_idx[str(k)][0]: class_idx[str(k)][1] for k in range(len(class_idx))}
cls2idx = {class_idx[str(k)][0]: k for k in range(len(class_idx))}
Explanation: Load label texts for ImageNet predictions so we know what model is predicting
End of explanation
img_t = get_input_tensors(img)
model.eval()
logits = model(img_t)
Explanation: Get the prediction for our image.
End of explanation
probs = F.softmax(logits, dim=1)
probs5 = probs.topk(5)
tuple((p,c, idx2label[c]) for p, c in zip(probs5[0][0].detach().numpy(), probs5[1][0].detach().numpy()))
Explanation: Predictions we got are logits. Let's pass them through softmax to get probabilities and class labels for the top 5 predictions.
End of explanation
def get_pil_transform():
transf = transforms.Compose([
transforms.Resize((256, 256)),
transforms.CenterCrop(224)
])
return transf
def get_preprocess_transform():
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transf = transforms.Compose([
transforms.ToTensor(),
normalize
])
return transf
pill_transf = get_pil_transform()
preprocess_transform = get_preprocess_transform()
Explanation: We are getting ready to use Lime. Lime produces an array of images from the original input image by a perturbation algorithm. So we need to provide two things: (1) the original image as a numpy array (2) a classification function that would take an array of perturbed images as input and produce the probabilities for each class for each image as output.
For Pytorch, first we need to define two separate transforms: (1) to take PIL image, resize and crop it (2) take resized, cropped image and apply whitening.
End of explanation
def batch_predict(images):
model.eval()
batch = torch.stack(tuple(preprocess_transform(i) for i in images), dim=0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
batch = batch.to(device)
logits = model(batch)
probs = F.softmax(logits, dim=1)
return probs.detach().cpu().numpy()
Explanation: Now we are ready to define the classification function that Lime needs. The input to this function is a numpy array of images where each image is an ndarray of shape (channel, height, width). The output is a numpy array of shape (image index, classes) where each value in the array should be the probability for that image, class combination.
End of explanation
test_pred = batch_predict([pill_transf(img)])
test_pred.squeeze().argmax()
Explanation: Let's test our function for the sample image.
End of explanation
from lime import lime_image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(np.array(pill_transf(img)),
batch_predict, # classification function
top_labels=5,
hide_color=0,
num_samples=1000) # number of images that will be sent to classification function
Explanation: Import lime and create an explanation for this prediction.
End of explanation
from skimage.segmentation import mark_boundaries
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
img_boundry1 = mark_boundaries(temp/255.0, mask)
plt.imshow(img_boundry1)
Explanation: Let's use mask on image and see the areas that are encouraging the top prediction.
End of explanation
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False)
img_boundry2 = mark_boundaries(temp/255.0, mask)
plt.imshow(img_boundry2)
Explanation: Let's turn on areas that contribute against the top prediction.
End of explanation |
12,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a better model
Step1: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions
Step2: ...and load our fine-tuned weights.
Step3: Split conv and dense layers
We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer
Step4: Generate features for the FC layers by precalculating conv output
Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
Step5: Below
Step6: Remove dropout from the fully-connected layer model
For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
Step7: Fit the FC model to the training and validation data
And fit the model in the usual way
Step8: Save the weights (no dropout)
Step9: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Add data augmentation to the training data
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it
Step10: Combine the Conv and FC layers into a single model
When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable
Step11: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Compile and train the combined model on augmented data
Step12: Save the weights (combined)
Step13: Add batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers)
Step14: Create a standalone model from the BN layers of vgg16bn
Step15: Compile and fit the model
Step16: Save the weights (batchnorm)
Step17: Create another BN model and combine it with the conv layers into a final model
Step18: Set the BN layers weights from the first BN model
Step19: Fit the model
Step20: Save the weights (final model)
Step21: Fit the model
Step22: Save the weights (final model)
Step23: Fit the model
Step24: Save the weights (final model) | Python Code:
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = os.path.join('input','sample')
path = os.path.join('input','sample-10')
#path = os.path.join('input')
output_path = os.path.join('output','sample')
model_path = os.path.join(output_path, 'models')
if not os.path.exists(model_path): os.mkdir(model_path)
#batch_size=64
#batch_size=32
#batch_size=16
batch_size=8
Explanation: Training a better model
End of explanation
model = vgg_ft(2)
??vgg_ft
Explanation: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
How is this possible?
Is this desirable?
The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.
The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.
So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!
(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)
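As a rough illustration of why dropout depresses the training-time numbers (a minimal NumPy sketch, not part of the original notebook; the sizes and p=0.5 are arbitrary):
```py
import numpy as np

activations = np.random.rand(10000)            # pretend activations from one layer
p = 0.5                                        # dropout probability
keep = np.random.rand(activations.size) > p    # True = keep, False = zero out

train_time = activations * keep                # training: roughly half are zeroed
test_time = activations                        # validation/test: nothing is dropped

print(train_time.mean(), test_time.mean())     # the test-time mean is roughly 2x larger
```
This factor-of-two gap is also why, further down, proc_wgts halves the dense-layer weights when the dropout layers are removed.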
Removing dropout
Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (conv) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
Start with Vgg + binary output (dogs/cats)
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
End of explanation
#model.load_weights(model_path+'finetune3.h5')
model.load_weights(os.path.join(model_path, 'finetune_1_ll.h5'))
Explanation: ...and load our fine-tuned weights.
End of explanation
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1] # convolutional layers only; i.e. first N layers to the index
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:] # remaining layers are Dense/FC
Explanation: Split conv and dense layers
We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:
End of explanation
batches = get_batches(os.path.join(path,'train'), shuffle=False, batch_size=batch_size)
val_batches = get_batches(os.path.join(path,'valid'), shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
Explanation: Generate features for the FC layers by precalculating conv output
Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
End of explanation
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(os.path.join(model_path, 'train_convlayer_features.bc'), trn_features)
save_array(os.path.join(model_path,'valid_convlayer_features.bc'), val_features)
??save_array
trn_features = load_array(os.path.join(model_path, 'train_convlayer_features.bc'))
val_features = load_array(os.path.join(model_path,'valid_convlayer_features.bc'))
trn_features.shape
val_features.shape
Explanation: Below:
We're pre-calculating the inputs to the new model. The inputs are the training and validation sets. So, we basically want to get the result of running those two sets through the conv layers only and save them so we can run them through the new model.
So, we use the conv-only model to run prediction on the training and validation sets in order to precompute the values we need.
End of explanation
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to half the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
Explanation: Remove dropout from the fully-connected layer model
For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
End of explanation
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
??fc_model.fit
Explanation: Fit the FC model to the training and validation data
And fit the model in the usual way:
End of explanation
fc_model.save_weights(os.path.join(model_path,'lesson3_no_dropout.h5'))
fc_model.load_weights(os.path.join(model_path,'lesson3_no_dropout.h5'))
Explanation: Save the weights (no dropout)
End of explanation
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(os.path.join(path, 'train'), gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(os.path.join(path, 'valid'), shuffle=False, batch_size=batch_size)
Explanation: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Add data augmentation to the training data
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:
End of explanation
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
Explanation: Combine the Conv and FC layers into a single model
When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:
End of explanation
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Explanation: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Compile and train the combined model on augmented data
End of explanation
conv_model.save_weights(os.path.join(model_path, 'aug1.h5'))
conv_model.load_weights(os.path.join(model_path, 'aug1.h5'))
Explanation: Save the weights (combined)
End of explanation
conv_layers[-1].output_shape[1:] # last layer shape
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(4096, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(1000, activation='softmax')
]
def load_fc_weights_from_vgg16bn(model):
"Load weights for model from the dense layers of the Vgg16BN model."
# See imagenet_batchnorm.ipynb for info on how the weights for
# Vgg16BN can be generated from the standard Vgg16 weights.
from vgg16bn import Vgg16BN
vgg16_bn = Vgg16BN()
_, fc_layers = split_at(vgg16_bn.model, Convolution2D)
copy_weights(fc_layers, model.layers)
p=0.6
Explanation: Add batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):
End of explanation
#bn_model = Sequential(get_bn_layers(0.6))
bn_model = Sequential(get_bn_layers(p))
load_fc_weights_from_vgg16bn(bn_model)
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6)) # here l == 'l' not '1'
# Remove last layer and lock all the others
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
# Add linear layer (2-class) (just doing the ImageNet mapping to Kaggle dogs and cats)
bn_model.add(Dense(2,activation='softmax'))
Explanation: Create a standalone model from the BN layers of vgg16bn
End of explanation
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
Explanation: Compile and fit the model
End of explanation
bn_model.save_weights(os.path.join(model_path,'bn.h5'))
bn_model.load_weights(os.path.join(model_path,'bn.h5'))
Explanation: Save the weights (batchnorm)
End of explanation
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
Explanation: Create another BN model and combine it with the conv layers into a final model
End of explanation
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
Explanation: Set the BN layers weights from the first BN model
End of explanation
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Explanation: Fit the model
End of explanation
final_model.save_weights(os.path.join(model_path, 'final1.h5'))
Explanation: Save the weights (final model)
End of explanation
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Explanation: Fit the model
End of explanation
final_model.save_weights(os.path.join(model_path, 'final2.h5'))
final_model.optimizer.lr=0.001
Explanation: Save the weights (final model)
End of explanation
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Explanation: Fit the model
End of explanation
final_model.save_weights(os.path.join(model_path, 'final3.h5'))
Explanation: Save the weights (final model)
End of explanation |
12,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NormMUSIC Demo
The goal of this notebook is to demonstrate the effect of frequency normalization when MUSIC is applied on broadband signals
The notebook is structured as follows
Step1: <a id='dataset'></a>
Dataset generation
In the following, we simulate a small dataset to evaluate the performance of MUSIC and NormMUSIC.
We assume a single sound source.
- Simulate different rooms
Step2: <a id='prediction'></a>
Prediction
In the following, we apply MUSIC and NormMUSIC to the simulated dataset.
The results are stored in a pandas DataFrame
Step3: <a id='evaluation'></a>
Evaluation
In the next cells we calculate the following metrics
Step4: <a id='intuition'></a>
Intuition
Step5: In the next cell, we calculate the maxima of the individual MUSIC pseudo-spectra per frequency bin $k$, i.e.,
$$ \hat{P}_k = \max_\theta \;\hat{P}_{MUSIC}(\theta, k)$$
and visualize them in a swarm plot.
From the plot, it should be clear that effectively only a few frequency bins contribute to the solution (if no normalization is applied). | Python Code:
# imports
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.signal import stft
from random import uniform, sample
from pyroomacoustics import doa, Room, ShoeBox
Explanation: NormMUSIC Demo
The goal of this notebook is to demonstrate the effect of frequency normalization when MUSIC is applied on broadband signals
The notebook is structured as follows:
1. Dataset generation
2. Prediction
<ol>
<li>MUSIC: Standard implementation without normalization</li>
<li>NormMUSIC: Implementation with frequency normalization as suggested in [1]</li>
</ol>
3. Evaluation
4. Intuition: Why normalization?
5. Recommendation
[1] D. Salvati, C. Drioli and G. L. Foresti, "Incoherent Frequency Fusion for Broadband Steered Response Power Algorithms in
Noisy Environments," in IEEE Signal Processing Letters, vol. 21, no. 5, pp. 581-585, 2014.
End of explanation
# constants / config
fs = 16000
nfft = 1024
n = 5*fs # simulation length of source signal (5 seconds)
n_frames = 30
max_order = 10
doas_deg = np.linspace(start=0, stop=359, num=360, endpoint=True)
rs = [0.5, 1, 1.5]
mic_center = np.c_[[2,2,1]]
mic_locs = mic_center + np.c_[[ 0.04, 0.0, 0.0],
[ 0.0, 0.04, 0.0],
[-0.04, 0.0, 0.0],
[ 0.0, -0.04, 0.0],
]
snr_lb, snr_ub = 0, 30
# room simulation
data = []
for r in rs:
for i, doa_deg in enumerate(doas_deg):
doa_rad = np.deg2rad(doa_deg)
source_loc = mic_center[:,0] + np.c_[r*np.cos(doa_rad), r*np.sin(doa_rad), 0][0]
room_dim = [uniform(4,6), uniform(4,6), uniform(2, 4)] # meters
room = ShoeBox(room_dim, fs=fs, max_order=max_order)
room.add_source(source_loc, signal=np.random.random(n))
room.add_microphone_array(mic_locs)
room.simulate(snr=uniform(snr_lb, snr_ub))
signals = room.mic_array.signals
# calculate n_frames stft frames starting at 1 second
stft_signals = stft(signals[:,fs:fs+n_frames*nfft], fs=fs, nperseg=nfft, noverlap=0, boundary=None)[2]
data.append([r, doa_deg, stft_signals])
Explanation: <a id='dataset'></a>
Dataset generation
In the following, we simulate a small dataset to evaluate the performance of MUSIC and NormMUSIC.
We assume a single sound source.
- Simulate different rooms:
- DOA on 1° grid
- 3 samples per DOA
- Source signal: Random Gaussian
- Different SNRs between 0 and 30 dB
- Calculate 30 STFT time frames for each sample
End of explanation
kwargs = {'L': mic_locs,
'fs': fs,
'nfft': nfft,
'azimuth': np.deg2rad(np.arange(360))
}
algorithms = {
'MUSIC': doa.music.MUSIC(**kwargs),
'NormMUSIC': doa.normmusic.NormMUSIC(**kwargs),
}
columns = ["r", "DOA"] + list(algorithms.keys())
predictions = {n:[] for n in columns}
for r, doa_deg, stft_signals in data:
predictions['r'].append(r)
predictions['DOA'].append(doa_deg)
for algo_name, algo in algorithms.items():
algo.locate_sources(stft_signals)
predictions[algo_name].append(np.rad2deg(algo.azimuth_recon[0]))
df = pd.DataFrame.from_dict(predictions)
Explanation: <a id='prediction'></a>
Prediction
In the following, we apply MUSIC and NormMUSIC to the simulated dataset.
The results are stored in a pandas DataFrame
End of explanation
MAE, MEDAE = {}, {}
def calc_ae(a,b):
x = np.abs(a-b)
return np.min(np.array((x, np.abs(360-x))), axis=0)
for algo_name in algorithms.keys():
ae = calc_ae(df.loc[:,["DOA"]].to_numpy(), df.loc[:,[algo_name]].to_numpy())
MAE[algo_name] = np.mean(ae)
MEDAE[algo_name] = np.median(ae)
print(f"MAE\t MUSIC: {MAE['MUSIC']:5.2f}\t NormMUSIC: {MAE['NormMUSIC']:5.2f}")
print(f"MEDAE\t MUSIC: {MEDAE['MUSIC']:5.2f}\t NormMUSIC: {MEDAE['NormMUSIC']:5.2f}")
Explanation: <a id='evaluation'></a>
Evaluation
In the next cells we calculate the following metrics:
- Mean Absolute Error (MAE)
- Median Absolute error (MEDAE)
End of explanation
fig = plt.figure(figsize=(14,10))
frequencies = sample(list(range(algorithms['MUSIC'].Pssl.shape[1])), k=10)
for i, k in enumerate(frequencies):
plt.plot(algorithms["MUSIC"].Pssl[:,k])
plt.xlabel("angle [°]")
plt.title("Multiple narrowband MUSIC pseudo spectra in one plot", fontsize=15)
Explanation: <a id='intuition'></a>
Intuition: Why normalization?
MUSIC revisited: Complex narrowband signals
Before we discuss the intuition behind frequency normalization, let's revisit the MUSIC algorithm for complex narrowband signals.
The MUSIC pseudo spectrum
The MUSIC pseudo spectrum $\hat{P}_{MUSIC}(\theta)$ is defined as:
$$\hat{P}_{MUSIC}(\mathbf{e}(\theta)) = \frac{1}{\sum_{i=p+1}^{N} |\mathbf{e}(\theta)^H \mathbf{v}_i|}$$,
where
- $\mathbf{v}_i$ are the noise eigenvectors
- $\mathbf{e}(\theta)$ is the candidate steering vector
- $\theta$ is the candidate DOA
MUSIC obtains its estimated DOA $\hat{\theta}$ by maximizing the pseudo spectrum $\hat{P}_{MUSIC}(\theta)$ over the candidate DOAs $\theta \in \{\theta_1, \theta_2, \ldots, \theta_I\}$, i.e.,
$$\hat{\theta} = \underset{\theta}{\arg\max} \;\hat{P}_{MUSIC}(\theta)$$
Orthogonality
The main property that is exploited by MUSIC is the orthogonality between the noise eigenvectors $\mathbf{v}_i$ and the steering vector $\mathbf{e}(\theta^{\star})$ of the DOA $\theta^{\star}$, i.e.,
$$ \mathbf{e}(\theta^{\star}) \perp span(\mathbf{v}_{p+1}, \mathbf{v}_{p+2}, \ldots, \mathbf{v}_{N}) \;\; \Leftrightarrow \;\; |\mathbf{e}(\theta^{\star})^H \mathbf{v}_i| = 0 \;\; \forall i \in \{p+1, p+2, \ldots, N\}$$
In practice, $\hat{\theta}$ is approximately orthogonal to the noise eigenvectors $\mathbf{v}_i$, i.e.,
$$ |\mathbf{e}(\hat{\theta})^H \mathbf{v}_i| \approx 0 \;\; \forall i \in \{p+1, p+2, \ldots, N\}$$
Therefore, $\max_{\theta} \; \hat{P}_{MUSIC}(\theta) = \frac{1}{\epsilon}$, with $\epsilon \ll 1$, which results in a curve with (one or more) distinct peak(s) that is characteristic for the MUSIC pseudo spectrum.
An example of a MUSIC pseudo spectrum is plotted below:
MUSIC revisited: STFT processing for broadband signals
When MUSIC is applied to broadband signals via STFT-processing, a pseudo spectrum $\hat{P}_{MUSIC}(\theta, k)$ is calculated for each individual frequency bin $k$.
The individual MUSIC pseudo spectra $\hat{P}_{MUSIC}(\theta, k)$ are summed across all frequency bins $k \in \{1, 2, \ldots, K\}$.
In the simplest case, this is performed without normalization, i.e.,
$$\tilde{P}_{MUSIC}(\theta) = \sum_{k=1}^{K} \hat{P}_{MUSIC}(\theta, k)$$
While this is the commonly used implementation, it is far from being optimal, since
the maxima of the MUSIC pseudo spectra $\hat{P}_{MUSIC}(\theta, k)$ may differ by orders of magnitude. By summing across frequencies without normalization, only the information from the few frequencies with the highest peaks in their MUSIC pseudo spectra is used; the information from frequencies with lower peaks is practically not used to estimate the DOA.
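A common remedy, and the idea behind NormMUSIC following [1], is to rescale each narrowband pseudo spectrum before summing, for example by its own maximum, so that every frequency bin contributes on a comparable scale:
$$\tilde{P}_{NormMUSIC}(\theta) = \sum_{k=1}^{K} \frac{\hat{P}_{MUSIC}(\theta, k)}{\max_{\theta'} \hat{P}_{MUSIC}(\theta', k)}$$
(This formula is meant as an illustration of the principle; the exact normalization used by the pyroomacoustics implementation may differ in detail.)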
To illustrate this scale problem, let's first plot the MUSIC pseudo spectra of 10 randomly selected frequency bins.
End of explanation
# calculation
maxima = sorted(np.max(algorithms["MUSIC"].Pssl, axis=0))
#plotting
fig, ax = plt.subplots(1, 1, figsize=(14,10))
sns.swarmplot(data=maxima, ax=ax, size=6)
ax.set_title("\nDistribution: Maxima of the MUSIC pseudo spectra of multiple frequency bins\n", fontsize=20)
ax.set_xticks([1])
Explanation: In the next cell, we calculate the maxima of the individual MUSIC pseudo-spectra per frequency bin $k$, i.e.,
$$ \hat{P}_k = \max_\theta \;\hat{P}_{MUSIC}(\theta, k)$$
and visualize them in a swarm plot.
From the plot, it should be clear that effectively only a few frequency bins contribute to the solution (if no normalization is applied).
End of explanation |
12,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
Contents
Introduction to Python
The Python Interpreter
First Steps with Python
Importing Libraries
About the Data
Arrays and their Attributes
Getting Help
More on Arrays
Basic Data Visualization
Repeating Tasks with Loops
Sequences
More Complex Loops
Iterators and Generators
Analyzing Data from Multiple Files
Looping over Files
Generating a Plot
Putting it All Together
Conditional Evaluation
Conditional Expressions in Python
Checking our Data
Creating Functions for Reuse
Composing Multiple Functions
Cleaning Up our Analysis Code
Positional versus Keyword Arguments
Documenting Functions
Capstone
Step1: Python at the Command Line
We've seen a lot of tools and techniques for improving our productivity through reproducible Python code. So far, however, we've been working exclusively within Jupyter Notebook. Jupyter Notebook is great for interactive, exploratory work in Python and encourages literate programming, as we discussed earlier. A Notebook is a great place to demonstrate to your future self or your peers how some Python code works.
But when it's time to scale-up your work and process data, you want to be on the command line, for all the reasons we saw when we discussed the Unix shell earlier.
Let's explore Python programs at the command line using the following Python script, temp_extremes.py.
```py
'''
Reports the min and max July temperatures for each file
that matches the given filename pattern.
'''
import csv
import os
import sys
import glob
def main()
Step2: Assertions Regarding Inherited Type (Advanced)
What if we used OrderedDict to represent our metadata?
Duck Typing (Advanced)
For more information about types, classes, and how Python represents objects, see
Step3: When in Doubt...
Testing for Quality Control
Unit Testing
Analyzing and Optimizing Performance
"Premature optimization is the root of all evil." - Sir Tony Hoare (later popularized by Donald Knuth)
Benchmarking
Line and Memory Profiling
Line and memory profilers aren't available in the Anaconda installation I had you use, but you can read all about this topic on this excellent blog post.
Capstone
Now you have a chance to bring together everything you've learned in this Software Carpentry workshop, particularly
Step4: Hint
Step5: The range() function in Python returns a list of consecutive integers between the first and the second number. The np.in1d function tests each element of our "hour" column to see if it is in the list of numbers from 7 to 18, inclusive.
Step6: To filter the full NumPy array to just those rows where the "hour" is between 7 and 18 (inclusive), we can take this vector of True and False values and put inside brackets, as below. Remember that the three dots (...) just mean "everything else" or, more specifically, "all the columns."
Step7: Hint
Step8: For another way to get all the records associated with a single day, remember that every day has the same number of records. Now that we've filtered to just the daytime records, there should be 36 records per day.
Step9: We can slice the first 36 rows of our data table to obtain the first day. The second day would then be rows 37 through 72, and so on.
Step10: Connecting to SQLite with Python | Python Code:
import glob
filenames = glob.glob('*.csv')
filenames
Explanation: Overview
Contents
Introduction to Python
The Python Interpreter
First Steps with Python
Importing Libraries
About the Data
Arrays and their Attributes
Getting Help
More on Arrays
Basic Data Visualization
Repeating Tasks with Loops
Sequences
More Complex Loops
Iterators and Generators
Analyzing Data from Multiple Files
Looping over Files
Generating a Plot
Putting it All Together
Conditional Evaluation
Conditional Expressions in Python
Checking our Data
Creating Functions for Reuse
Composing Multiple Functions
Cleaning Up our Analysis Code
Positional versus Keyword Arguments
Documenting Functions
Capstone: Fitting Linear Models
About the Data
Introducing statsmodels
Python at the Command Line
Our First Python Script
Alternative Command-Line Tools
Modularization
Understanding and Handling Errors
Defensive Programming
Assertions
Test-Driven Development
Testing for Quality Control
Unit Testing
Analyzing and Optimizing Performance
Benchmarking
Capstone
Your Tasks
Getting Started
Connecting to SQLite with Python
Introduction to Python
The Python Interpreter
Jupyter Notebook
First Steps with Python
Importing Libraries
About the Data
The data are formatted such that:
Each column is the monthly mean, January (1) through December (12)
Each row is a year, starting from January 1948 (1) through December 2016 (69)
More information on the data can be found here.
Arrays and their Attributes
How many rows and columns are there in the barrow array?
Challenge
What do each of the following code samples do?
py
barrow[0]
barrow[0,]
barrow[-1]
barrow[-3:-1]
Slicing NumPy Arrays
Challenge
What's the mean monthly temperature in August of 2016? Converted to degrees Fahrenheit?
Degrees F can be calculated from degrees K by the formula:
$$
T_F = \left(T_K \times \frac{9}{5}\right) - 459.67
$$
Calculating on NumPy Arrays
What is the overall mean temperature in any month in Barrow between 1948 and 2016 in degrees C?
How cold was the coldest February in Barrow, by monthly mean temperatures, in degrees C?
Challenge
What's the minimum, maximum, and mean monthly temperature for August in Barrow, in degrees C?
Getting Help
More on Arrays
What is the mean temperature in 1948? In 1949? And so on...
Remembering the difference between axis = 0 and axis = 1 is tricky, even for experienced Python programmers. Here are some helpful, visual reminders:
Integrating across rows or columns (StackOverflow)
Another way of visualizing the same thing (Site44)
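For example (a small sketch with a made-up 2x3 array, not the Barrow data):
```py
import numpy as np
small = np.array([[1, 2, 3],
                  [4, 5, 6]])
small.mean(axis = 0)   # collapses the rows: one mean per column -> [2.5, 3.5, 4.5]
small.mean(axis = 1)   # collapses the columns: one mean per row -> [2., 5.]
```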
Basic Data Visualization
Repeating Tasks with Loops
Sequences
Character Strings
Lists and Tuples
Performing Calculations with Lists
More Complex Loops
Challenge: Looping over Sequences
Write a for loop that iterates through the letters of your favorite city, putting each letter inside a list. The result should be a list with an element for each letter.
Hint: You can create an empty list like this:
py
letters = []
Hint: You can confirm you have the right result by comparing it to:
py
list("my favorite city")
Challenge: Sequences and Mutability
Which of the sequences we've learned about are immutable (i.e., they can't be changed)?
Strings are (immutable / mutable)?
Lists are (immutable / mutable)?
Tuples are (immutable / mutable)?
And what does this mean for working with each data type?
```py
"birds".upper()
[1, 2, 3].append(4)
(1, 2, 3)
```
Iterators and Generators
Analyzing Data from Multiple Files
First Step: Looping over Files
Second Step: Generating a Plot
Third Step: Putting It All Together
Challenge: Integrating over Multiple File Datasets
For each location (each file), plot the difference between that location's mean temperature and the mean across all locations.
Hint: One way to calculate the mean across five (5) files is by adding the 5 arrays together, then dividing by 5. You can add arrays together in a loop like this:
```py
Start with an array full of zeros that is 69-elements long
running_total = np.zeros((69))
for fname in filenames:
data = np.loadtxt(fname, delimiter = ',')
running_total = running_total + data.mean(axis = 1)
```
Hint: How do you difference two arrays? Remember how the plus, +, and minus, -, operators work on arrays?
Conditional Evaluation
This code can be represented by the following workflow.
Challenge: Conditional Expressions
How can you make this code print "Greater" by changing only one line?
```py
a_number = 42
if a_number > 100:
print('Greater')
else:
print('Not greater')
print('Done')
```
There are two (2) one-line changes you could make. Can you find them both?
Conditional Expressions in Python
What do each of the following evaluate to, True or False?
py
1 < 2
1 <= 1
3 == 3
2 != 3
Checking our Data
Challenge: Fitting a Line over Multiple File Datasets
Write a for loop, with an if statement inside, that calculates a line of best fit for each dataset's temperature anomalies and prints out a message as to whether that trend line is positive or negative.
Hint: What we want to know about each trend line is whether, for:
py
results = sm.OLS(y_data, x_data).fit()
b0, b1 = results.params
If b1, the slope of the line, is positive or negative.
Creating Functions for Reuse
What happens if we remove the keyword return from this function? Make this change and call the function again.
Composing Multiple Functions
Now that we've created a function that converts temperatures in degrees Kelvin to degrees Celsius, let's see if we can write a function that converts from degrees Celsius to degrees Fahrenheit.
$$
T_F = \left(T_C \times \frac{9}{5}\right) + 32
$$
Cleaning Up our Analysis Code
Positional versus Keyword Arguments
Documenting Functions
Challenge: Functions
Create one (or both, for an extra challenge) of the following functions...
A function called fences that takes an input character string and surrounds it on both sides with another string, e.g., "pasture" becomes "|pasture|" or "@pasture@" if either "|" or "@" are provided.
A function called rescale that takes an array and returns a corresponding array of values scaled to lie in the range 0.0 to 1.0.
Hint: Strings can be concatenated with the plus operator.
py
'cat' + 's'
Hint: If $x_0$ and $x_1$ are the lowest and highest values in an array, respectively, then the replacement value for any element $x$, scaled to between 0.0 and 1.0, should be:
$$
\frac{x - x_0}{x_1 - x_0}
$$
Capstone: Fitting Linear Models
We've seen the basics of the Python programming language. Now, let's get some hands-on experience applying what we've learned to real scientific data. In this exercise, we'll see how to fit linear trends to data using a new Python library, statsmodels. After you see an initial example, you'll have time to extend the example on your own.
About the Data
The data are formatted such that:
Each column is the monthly mean, January (1) through December (12)
Each row is a year, starting from January 1948 (1) through December 2016 (69)
More information on the data can be found here.
Introducing statsmodels
Without going into too much detail, our linear trend line has two components: a constant term ($\alpha$) and the slope of the trend line ($\beta$). Using linear algebra, we represent these two terms as two columns in a matrix. To fit a linear model with a constant term, the first column is a column of ones.
$$
\begin{align}
[\mathrm{Temp.\ anomaly}]&=[\mathrm{Some\ constant,\ }\alpha] + [\mathrm{Slope\ of\ trend\ line},\beta]\times[\mathrm{Year}]\
\left[\begin{array}{r}
-2.04\
-0.20\
0.88\
\vdots\
\end{array}\right] &=
\left[\begin{array}{rr}
1 & 1948\
1 & 1949\
1 & 1950\
\vdots & \vdots\
\end{array}\right]
\left[\begin{array}{r}
\alpha\
\beta\end{array}\right]
\end{align}
$$
Challenge: Fitting a Line over Multiple Datasets
Write a for loop, with an if statement inside, that calculates a line of best fit for each dataset's temperature anomalies and prints out a message as to whether that trend line is positive or negative.
Hint: What we want to know about each trend line is whether, for:
py
results = sm.OLS(y_data, x_data).fit()
b0, b1 = results.params
If b1, the slope of the line, is positive or negative. So, to break that down:
Loop over all the temperature files;
Calculate the temperature anomaly;
Fit an OLS regression to the anomaly data;
print() out whether the trend line is "positive" or "negative;"
Hint: Looping over Files
End of explanation
def celsius_to_fahr(temp_c):
return (temp_c * (9/5)) + 32
Explanation: Python at the Command Line
We've seen a lot of tools and techniques for improving our productivity through reproducible Python code. So far, however, we've been working exclusively within Jupyter Notebook. Jupyter Notebook is great for interactive, exploratory work in Python and encourages literate programming, as we discussed earlier. A Notebook is a great place to demonstrate to your future self or your peers how some Python code works.
But when it's time to scale-up your work and process data, you want to be on the command line, for all the reasons we saw when we discussed the Unix shell earlier.
Let's explore Python programs at the command line using the following Python script, temp_extremes.py.
```py
'''
Reports the min and max July temperatures for each file
that matches the given filename pattern.
'''
import csv
import os
import sys
import glob
def main():
# Get the user-specified directory
directory = sys.argv[1]
# Pattern to use in searching for files
filename_pattern = os.path.join(directory, '*temperature.csv')
for filename in glob.glob(filename_pattern):
july_temps = []
# While the file is open...
with open(filename, 'r') as stream:
# Use a function to read the file
reader = csv.reader(stream)
# Each row is a year
for row in reader:
# Add this year's July temperature to the list
july_temps.append(row[6])
# A human-readable name for the file
pretty_name = os.path.basename(filename)
print(pretty_name, '--Hottest July mean temp. was', max(july_temps), 'deg K')
print(pretty_name, '--Coolest July mean temp. was', min(july_temps), 'deg K')
if name == 'main':
main()
```
We can run this script on the command line by typing the following:
sh
$ python3 temp_extremes.py .
Remember that the single dot, . represents the current working directory, which is where all of our temperature CSV files are located.
Our First Python Script
Encapsulating to Keep the Namespace Clean
Alternative Command-Line Tools
sys.argv is a rather crude tool for processing command-line arguments. There are a couple of alternatives I suggest you look into if you are going to be writing command-line programs in Python:
argparse, another built-in library, that handles common cases in a systematic way. Check out this tutorial.
Fire, a very new Python module from Google, which can turn any Python object (function, class, etc.) into a command-line API.
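As a taste of the argparse alternative above, the directory argument of temp_extremes.py could be declared like this (a minimal sketch, not part of the lesson's script):
```py
import argparse

parser = argparse.ArgumentParser(description = 'Report min/max July temperatures.')
parser.add_argument('directory', help = 'Directory containing *temperature.csv files')
args = parser.parse_args()
print(args.directory)   # use this in place of sys.argv[1]
```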
Optional: Try out Python Fire
Installation instructions and source code here.
Modularization
Installing Your Project as a Module
We haven't covered installing new Python modules, but when the time is right for you to package your code together as a single module (e.g., as package_name, in the example above), consider installing your module "in development mode" first.
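A typical way to do that (assuming your project has a setup.py at its root) is:
sh
$ pip install -e .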
Understanding and Handling Errors
Defensive Programming
Assertions
Assertions Regarding Type (Advanced)
Challenge: Asserting Type in a Function
Recall the celsius_to_fahr() function we saw earlier. Implement a type-checking assertion that produces an AssertionError when the temp_c argument is not a number. Remember that there are two types of numbers we've seen in Python so far:
float
int
You can decide whether the celsius_to_fahr() function should accept one or both of these types as inputs. Don't forget to provide a helpful message as part of the AssertionError.
End of explanation
def range_overlap(ranges):
'''Return common overlap among a set of [low, high] ranges.'''
for i, (low, high) in enumerate(ranges):
if i == 0:
lowest, highest = low, high
continue
lowest = max(lowest, low)
highest = min(highest, high)
if lowest >= highest:
return None
return (lowest, highest)
test_range_overlap()
Explanation: Assertions Regarding Inherited Type (Advanced)
What if we used OrderedDict to represent our metadata?
Duck Typing (Advanced)
For more information about types, classes, and how Python represents objects, see:
Python 3 Documentation: The Python Data Model
Test-Driven Development
For example, suppose we need to find where two or more time series overlap. The range of each time series is represented as a pair of numbers, which are the time the interval started and ended. The output is the largest range that they all include.
Challenge: Fix the Range Overlap Function
Fix range_overlap(); re-run test_range_overlap() after each change you make.
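test_range_overlap() is defined earlier in the lesson; a sketch of the kind of assertions it makes (the values here are illustrative) looks like:
```py
def test_range_overlap():
    assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
    assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
    assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
    assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
    assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
```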
End of explanation
import numpy as np
data = np.loadtxt('/home/arthur/Desktop/ocean.txt', delimiter = ' ')
# How many rows and columns?
data.shape
Explanation: When in Doubt...
Testing for Quality Control
Unit Testing
Analyzing and Optimizing Performance
"Premature optimization is the root of all evil." - Sir Tony Hoare (later popularized by Donald Knuth)
Benchmarking
Line and Memory Profiling
Line and memory profilers aren't available in the Anaconda installation I had you use, but you can read all about this topic on this excellent blog post.
Capstone
Now you have a chance to bring together everything you've learned in this Software Carpentry workshop, particularly:
Using the Unix shell to download and manage delimited text data files;
Importing data into Python;
Using NumPy or other Python tools to summarize the data and diagnose any data issues;
Cleaning and plotting the data using reproducible Python functions;
For this open-ended exercise, we'll use data from the Woods Hole Oceanographic Institution's "Martha's Vineyard Coastal Observatory." Choose which of the following datasets you want to work with:
The meteorological record, which includes solar irradiance (solar_campmt_m50[W/m^2]) and rainfall (rain_campmt[mm]); the order of the fields in the delimited files is listed here.
The oceanographic record, which includes wave period and direction, ocean bottom temperature, and many other variables.
Each of these data sources has an online file directory:
Directory to download meteorological records
Directory to download oceanographic records
Once you've decided which dataset you want to work with, follow along with me using that particular record. I'm going to use the oceanographic data in this example.
Your Tasks
Download 3 days worth of meteorological or oceanographic data files using wget or curl in the Unix shell.
Join the files together as one file using cat in the Unix shell.
Read the data into a Python session.
Create a time plot of a variable of your choice.
Filter the rows of the table to only daytime observations.
Write a function to calculate the range of values in air temperature (meteorological record) or water temperature (oceanographic record). Apply this function to each of the 3+ days for which you obtained data.
Getting Started
For this exercise, we want to work with multiple data files. Each data file in the index is one day, but we want to work with multiple days worth of data. How can we quickly and conveniently download multiple data files from the web?
This is something the Unix shell is really great at automating. The WHOI dataset we're using exposes multiple data files at unique URLs. Below is an example file from the 120th day of 2018.
ftp://mvcodata.whoi.edu/pub/mvcodata/data/OcnDat_s/2018/2018120_OcnDat_s.C99
To download other days, we need only change one number in the URL:
ftp://mvcodata.whoi.edu/pub/mvcodata/data/OcnDat_s/2018/2018120_OcnDat_s.C99
ftp://mvcodata.whoi.edu/pub/mvcodata/data/OcnDat_s/2018/2018119_OcnDat_s.C99
ftp://mvcodata.whoi.edu/pub/mvcodata/data/OcnDat_s/2018/2018118_OcnDat_s.C99
...
How can we automate this with the Unix shell? First we need to figure out which shell program to use. Recall that the Unix shell offers many different small programs; depending on which variant of the Unix shell we're using, a certain program might not be available.
sh
which wget
which curl
So you may have both installed. If you do, use wget instead of curl; both programs do the same thing.
Downloading Online Datafiles with wget
Here's an example shell script to get us started. We iterate over three (3) days (118, 119, 120), storing each day number in a variable called day. In each iteration, we use wget to download the file, inserting that day number stored in the variable day.
```sh
cd
cd Desktop
for day in 118 119 120
do
wget "ftp://mvcodata.whoi.edu/pub/mvcodata/data/OcnDat_s/2018/2018${day}_OcnDat_s.C99"
done
```
Downloading Online Datafiles with curl
Unlike wget, the curl program prints the downloaded text data directly to the screen. Remember how we dealt with taking screen output and redirecting it to a file?
```sh
cd
cd Desktop
for day in 118 119 120
do
curl "ftp://mvcodata.whoi.edu/pub/mvcodata/data/OcnDat_s/2018/2018${day}_OcnDat_s.C99" > 2018${day}_OcnDat_s.C99
done
```
Good Luck!
Try the rest of the Capstone on your own. Helpful hints are provided throughout. You're encouraged to work with a partner but feel free to work independently if that suits you.
The hints below use only the packages we've seen in Python 3 so far, but if you're feeling adventurous, the pandas package has more and better tools for dealing with mixed, tabular data like the meteorology and oceanographic records here.
py
import pandas as pd
For help getting started with pandas, check out 10 Minutes to Pandas, in particular, the sections:
Getting data in/out - CSV
Viewing data
Plotting data
Hint: Combining Multiple Text Files in the Unix Shell
Remember the cat program?
sh
cat 2018120_OcnDat_s.C99
You can provide multiple files to the cat program and it will combine them line-by-line.
sh
cat 2018120_OcnDat_s.C99 2018119_OcnDat_s.C99 2018118_OcnDat_s.C99
But this just prints everything out to the screen. Use redirection to store the output in a new file called ocean.txt or met.txt, depending on which data source you're using.
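For example (using the three oceanographic files above):
sh
cat 2018120_OcnDat_s.C99 2018119_OcnDat_s.C99 2018118_OcnDat_s.C99 > ocean.txt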
Hint: Reading in Data Using Python
The WHOI datasets are apparently space-delimited. They might also be called "fixed width" because each data column appears at the same distance along each line.
End of explanation
data[...,4]
Explanation: Hint: Plotting Data in Python
Revisit your notes from when we did this earlier! Remember you need to import the plotting capability in Jupyter Notebook:
py
import matplotlib.pyplot as pyplot
%matplotlib inline
Hint: Filter Data Using Python
The fifth column contains the hour of the day. Recall that Python starts counting at zero, so column 5 is at position 4. The three dots (...) just mean "everything else" or, more specifically, "all the rows." Recall that with NumPy arrays, we count the number of rows, then columns: "rows comma columns."
End of explanation
np.in1d(data[...,4], range(7,19))
Explanation: The range() function in Python returns the sequence of consecutive integers from the first number up to (but not including) the second. The np.in1d function tests each element of our "hour" column to see if it is in the list of numbers from 7 to 18, inclusive.
End of explanation
daytime = data[np.in1d(data[...,4], range(7,19)),...]
daytime.shape
Explanation: To filter the full NumPy array to just those rows where the "hour" is between 7 and 18 (inclusive), we can take this vector of True and False values and put inside brackets, as below. Remember that the three dots (...) just mean "everything else" or, more specifically, "all the columns."
End of explanation
daytime[...,3]
daytime[...,3] == 28
Explanation: Hint: Creating a Function to Calculate Temperature Ranges
The range of daytime temperatures for a given day is the daily maximum temperature minus the daily minimum temperature. Create a single function to do this, then call it at least 3 times, once for each unique day in your dataset.
How can you present this function with just the data for a given day? There are a few different ways. In both datasets, the 4th column (column 3 in Python, where we start counting from zero) has an integer representing the day of the month.
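One possible sketch (hypothetical: it assumes a NumPy array like daytime above, and you will need to substitute the temperature column index for your chosen record):
```py
def temp_range(one_day):
    '''Return max minus min of the temperature column for one day of records.'''
    temp_col = 12                      # hypothetical column index; check your record's field list
    temps = one_day[..., temp_col]
    return temps.max() - temps.min()

temp_range(daytime[0:36, ...])         # first day (36 daytime records)
```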
End of explanation
daytime.shape
108 / 3
Explanation: For another way to get all the records associated with a single day, remember that every day has the same number of records. Now that we've filtered to just the daytime records, there should be 36 records per day.
End of explanation
daytime[0:36,3]
Explanation: We can slice the first 36 rows of our data table to obtain the first day. The second day would then be rows 37 through 72, and so on.
End of explanation
cursor.close()
connection.close()
Explanation: Connecting to SQLite with Python
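The cell above only closes the cursor and connection; the setup it assumes looks roughly like this (a sketch: the database filename and query are placeholders):
```py
import sqlite3

connection = sqlite3.connect('survey.db')     # placeholder database file
cursor = connection.cursor()
cursor.execute('SELECT * FROM some_table LIMIT 5;')
print(cursor.fetchall())
```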
End of explanation |
12,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Visualizing eclipse data
Let us find some interesting data to generate elements from, before we consider how to customize them. Here is a dataset containing information about all the eclipses of the 21st century
Step2: Here we have the date of each eclipse, what time of day the eclipse reached its peak in both local time and in UTC, the type of eclipse, its magnitude (fraction of the Sun's diameter obscured by the Moon) and the position of the peak in latitude and longitude.
Let's see what happens if we pass this dataframe to the Curve element
Step3: We see that, by default, the first dataframe column becomes the key dimension (corresponding to the x-axis) and the second column becomes the value dimension (corresponding to the y-axis). There is clearly structure in this data, but the plot is too highly compressed in the x direction to see much detail, and you may not like the particular color or line style. So we can start customizing the appearance of this curve using the HoloViews options system.
Types of option
If we want to change the appearance of what we can already see in the plot, we're no longer focusing on the data and metadata stored in the elements, but on details of the presentation. Details specific to the final plot are handled by the separate "options" system, not the element objects. HoloViews allows you to set three types of options
Step4: The top line uses a special IPython/Jupyter syntax called the %%opts cell magic to specify the width plot option for all Curve objects in this cell. %%opts accepts a simple specification where we pass the width=900 keyword argument to Curve as a plot option (denoted by the square brackets).
Of course, there are other ways of applying options in HoloViews that do not require this IPython-specific syntax, but for this tutorial, we will only be covering the more-convenient magic-based syntax. You can read about the alternative approaches in the user guide.
Step5: Aside
Step6: Style options
The plot options earlier instructed HoloViews to build a plot 900 pixels wide, when rendered with the Bokeh plotting extension. Now let's specify that the Bokeh glyph should be 'red' and slightly thicker, which is information passed on directly to Bokeh (making it a style option)
Step7: Note how the plot options applied above to hour_curve are remembered! The %%opts magic is used to customize the object displayed as output for a particular code cell
Step8: Switching to matplotlib
Let us now view our curve with matplotlib using the %%output cell magic
Step9: All our options are gone! This is because the options are associated with the corresponding plotting extension---if you switch back to 'bokeh', the options will be applicable again. In general, options have to be specific to backends; e.g. the line_width style option accepted by Bokeh is called linewidth in matplotlib
Step10: The %output line magic
In the two cells above we repeated %%output backend='matplotlib' to use matplotlib to render those two cells. Instead of repeating ourselves with the cell magic, we can use a "line magic" (similar syntax to the cell magic but with one %) to set things globally. Let us switch to matplotlib with a line magic and specify that we want SVG output
Step11: Unlike the cell magic, the line magic doesn't need to be followed by any expression and can be used anywhere in the notebook. Both the %output and %opts line magics set things globally so it is recommended you declare them at the top of your notebooks. Now let us look at the SVG matplotlib output we requested
Step12: Switching back to bokeh
In previous releases of HoloViews, it was typical to switch to matplotlib in order to export to PNG or SVG, because Bokeh did not support these file formats. Since Bokeh 0.12.6 we can now easily use HoloViews to export Bokeh plots to a PNG file, as we will now demonstrate
Step13: By passing fig='png' and a filename='eclipses' to %output we can both render to PNG and save the output to file
Step14: Here we have requested PNG format using fig='png' and that the output is output to eclipses.png using filename='eclipses'
Step15: Bokeh also has some SVG support, but it is not yet exposed in HoloViews.
Using group and label
The above examples showed how to customize by type, but HoloViews offers multiple additional levels of customization that should be sufficient to cover any purpose. For our last example, let us split our eclipse dataframe based on the type ('Total' or 'Partial')
Step16: We'll now introduce the Spikes element, and display it with a large width and without a y-axis. We can specify those options for all following Spikes elements using the %opts line magic
Step17: Now let us look at the hour of day at which these two types of eclipses occur (local time) by overlaying the two types of eclipse as Spikes elements. The problem then is finding a way to visually distinguish the spikes corresponding to the different eclipse types.
We can do this using the element group and label introduced in the introduction to elements section as follows
Step18: Using these options to distinguish between the two categories of data with the same type, you can now see clear patterns of grouping between the two types, with many more total eclipses around noon in local time. Similar techniques can be used to provide arbitrarily specific customizations when needed. | Python Code:
import pandas as pd
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>02. Customizing Visual Appearance</h2></div>
Section 01 focused on specifying elements and simple collections of them. This section explains how the visual appearance can be adjusted to bring out the most salient aspects of your data, or just to make the style match the overall theme of your document.
Preliminaries
In the introduction to elements, hv.extension('bokeh') was used at the start to load and activate the bokeh plotting extension. In this notebook, we will also briefly use matplotlib which we will load, but not yet activate, by listing it second:
End of explanation
eclipses = pd.read_csv('../data/eclipses_21C.csv', parse_dates=['date'])
eclipses.head()
Explanation: Visualizing eclipse data
Let us find some interesting data to generate elements from, before we consider how to customize them. Here is a dataset containing information about all the eclipses of the 21st century:
End of explanation
hv.Curve(eclipses)
Explanation: Here we have the date of each eclipse, what time of day the eclipse reached its peak in both local time and in UTC, the type of eclipse, its magnitude (fraction of the Sun's diameter obscured by the Moon) and the position of the peak in latitude and longitude.
Let's see what happens if we pass this dataframe to the Curve element:
End of explanation
%%opts Curve [width=900]
hour_curve = hv.Curve(eclipses).redim.label(hour_local='Hour (local time)', date='Date (21st century)')
hour_curve
Explanation: We see that, by default, the first dataframe column becomes the key dimension (corresponding to the x-axis) and the second column becomes the value dimension (corresponding to the y-axis). There is clearly structure in this data, but the plot is too highly compressed in the x direction to see much detail, and you may not like the particular color or line style. So we can start customizing the appearance of this curve using the HoloViews options system.
Types of option
If we want to change the appearance of what we can already see in the plot, we're no longer focusing on the data and metadata stored in the elements, but on details of the presentation. Details specific to the final plot are handled by the separate "options" system, not the element objects. HoloViews allows you to set three types of options:
plot options: Options that tell HoloViews how to construct the plot.
style options: Options that tell the underlying plotting extension (Bokeh, matplotlib, etc.) how to style the plot
normalization options: Options that tell HoloViews how to normalize the various elements in the plot against each other (not covered in this tutorial)
Plot options
We noted that the data is too compressed in the x direction. Let us fix that by specifying the width plot option:
End of explanation
%%opts Curve [width=900 height=200]
# Exercise: Try setting the height plot option of the Curve above.
# Hint: the magic supports tab completion when the cursor is in the square brackets!
# Note: The %%opts cell magic *must* appear at the top of the code cell!
hour_curve = hv.Curve(eclipses).redim.label(hour_local='Hour (local time)', date='Date (21st century)')
hour_curve
%%opts Curve [width=900 show_grid=True]
# Exercise: Try enabling the boolean show_grid plot option for the curve above
# Note: The %%opts cell magic *must* appear at the top of the code cell!
hour_curve = hv.Curve(eclipses).redim.label(hour_local='Hour (local time)', date='Date (21st century)')
hour_curve
%%opts Curve [width=900 xrotation=45]
# Exercise: Try set the x-axis label rotation (in degrees) with the xrotation plot option
# Note: The %%opts cell magic *must* appear at the top of the code cell!
hour_curve = hv.Curve(eclipses).redim.label(hour_local='Hour (local time)', date='Date (21st century)')
hour_curve
Explanation: The top line uses a special IPython/Jupyter syntax called the %%opts cell magic to specify the width plot option for all Curve objects in this cell. %%opts accepts a simple specification where we pass the width=900 keyword argument to Curve as a plot option (denoted by the square brackets).
Of course, there are other ways of applying options in HoloViews that do not require this IPython-specific syntax, but for this tutorial, we will only be covering the more-convenient magic-based syntax. You can read about the alternative approaches in the user guide.
End of explanation
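For reference, the same option can also be applied without any IPython magic. The sketch below uses the .opts method with a flat keyword, which is the syntax accepted by recent HoloViews releases (roughly 1.10 and later); older releases use a slightly different signature, so treat this as illustrative and check the user guide for your installed version.
curve_no_magic = hv.Curve(eclipses).opts(width=900) # illustrative alternative to the %%opts magic
curve_no_magic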
hv.help(hv.Curve, pattern='lod') # Example of help, matching 'lod' substring
Explanation: Aside: hv.help
Tab completion helps discover what keywords are available but you can get more complete help using the hv.help utility. For instance, to learn more about the options for hv.Curve run hv.help(hv.Curve):
End of explanation
%%opts Curve (color='red' line_width=2)
# Note: The %%opts cell magic *must* appear at the top of the code cell!
hour_curve
Explanation: Style options
The plot options earlier instructed HoloViews to build a plot 900 pixels wide, when rendered with the Bokeh plotting extension. Now let's specify that the Bokeh glyph should be 'red' and slightly thicker, which is information passed on directly to Bokeh (making it a style option):
End of explanation
# Exercise: Display hour_curve without any new options to verify it stays red
hour_curve
%%opts Curve (color='red' line_width=1)
# Exercise: Try setting the line_width style options to 1
# Note: The %%opts cell magic *must* appear at the top of the code cell!
hour_curve
%%opts Curve (color='red' line_dash='dotdash')
# Exercise: Try setting the line_dash style option to 'dotdash'
# Note: The %%opts cell magic *must* appear at the top of the code cell!
hour_curve
Explanation: Note how the plot options applied above to hour_curve are remembered! The %%opts magic is used to customize the object displayed as output for a particular code cell: behind the scenes HoloViews has linked the specified options to the hour_curve object via a hidden integer id attribute.
Having used the %%opts magic on hour_curve again, we have now associated the 'red' color style option to it. In the options specification syntax, style options are the keywords in parentheses and are keywords defined and used by Bokeh to style line glyphs.
End of explanation
%%output backend='matplotlib'
hour_curve
Explanation: Switching to matplotlib
Let us now view our curve with matplotlib using the %%output cell magic:
End of explanation
%%output backend='matplotlib'
%%opts Curve [aspect=4 fig_size=400 xrotation=90] (color='blue' linewidth=2)
# Note: Both the %%output and %%opts cell magics *must* appear at the top of the code cell!
hour_curve
%%output backend='matplotlib'
%%opts Curve [aspect=4 fig_size=400 xrotation=90] (color='blue' linewidth=2 linestyle='-.')
# Exercise: Apply the matplotlib equivalent to line_dash above using linestyle='-.'
# Note: Both the %%output and %%opts cell magics *must* appear at the top of the code cell!
hour_curve
Explanation: All our options are gone! This is because the options are associated with the corresponding plotting extension---if you switch back to 'bokeh', the options will be applicable again. In general, options have to be specific to backends; e.g. the line_width style option accepted by Bokeh is called linewidth in matplotlib:
End of explanation
%output backend='matplotlib' fig='svg'
Explanation: The %output line magic
In the two cells above we repeated %%output backend='matplotlib' to use matplotlib to render those two cells. Instead of repeating ourselves with the cell magic, we can use a "line magic" (similar syntax to the cell magic but with one %) to set things globally. Let us switch to matplotlib with a line magic and specify that we want SVG output:
End of explanation
%%opts Curve [aspect=4 fig_size=400 xrotation=70] (color='green' linestyle='--')
hour_curve
# Exercise: Verify for yourself that the output above is SVG and not PNG
# You can do this by right-clicking above then selecting 'Open Image in a new Tab' (Chrome) or 'View Image' (Firefox)
# Solution: Look at the URL in the new tab, it will start with 'data:image/svg+xml;base64,' as the output is SVG
Explanation: Unlike the cell magic, the line magic doesn't need to be followed by any expression and can be used anywhere in the notebook. Both the %output and %opts line magics set things globally so it is recommended you declare them at the top of your notebooks. Now let us look at the SVG matplotlib output we requested:
End of explanation
%output backend='bokeh'
Explanation: Switching back to bokeh
In previous releases of HoloViews, it was typical to switch to matplotlib in order to export to PNG or SVG, because Bokeh did not support these file formats. Since Bokeh 0.12.6 we can now easily use HoloViews to export Bokeh plots to a PNG file, as we will now demonstrate:
End of explanation
%%output fig='png' filename='eclipses'
hour_curve.clone()
Explanation: By passing fig='png' and a filename='eclipses' to %output we can both render to PNG and save the output to file:
End of explanation
ls *.png
Explanation: Here we have requested PNG format using fig='png' and that the output is output to eclipses.png using filename='eclipses':
End of explanation
total_eclipses = eclipses[eclipses.type=='Total']
partial_eclipses = eclipses[eclipses.type=='Partial']
Explanation: Bokeh also has some SVG support, but it is not yet exposed in HoloViews.
Using group and label
The above examples showed how to customize by type, but HoloViews offers multiple additional levels of customization that should be sufficient to cover any purpose. For our last example, let us split our eclipse dataframe based on the type ('Total' or 'Partial'):
End of explanation
%opts Spikes [width=900 yaxis=None]
Explanation: We'll now introduce the Spikes element, and display it with a large width and without a y-axis. We can specify those options for all following Spikes elements using the %opts line magic:
End of explanation
%%opts Spikes.Eclipses.Total (line_dash='solid')
%%opts Spikes.Eclipses.Partial (line_dash='dotted')
total = hv.Spikes(total_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Total')
partial = hv.Spikes(partial_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Partial')
(total * partial).redim.label(hour_local='Local time (hour)')
Explanation: Now let us look at the hour of day at which these two types of eclipses occur (local time) by overlaying the two types of eclipse as Spikes elements. The problem then is finding a way to visually distinguish the spikes corresponding to the different eclipse types.
We can do this using the element group and label introduced in the introduction to elements section as follows:
End of explanation
# Exercise: Remove the two %%opts lines above and observe the effect
# Solution: We can no longer distinguish between the total and partial eclipses as they all have solid lines!
total = hv.Spikes(total_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Total')
partial = hv.Spikes(partial_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Partial')
(total * partial).redim.label(hour_local='Local time (hour)')
%%opts Spikes.Eclipses.Total (line_dash='solid' color='black')
%%opts Spikes.Eclipses.Partial (line_dash='solid' color='lightgray')
# Exercise: Show all spikes with 'solid' line_dash, total eclipses in black and the partial ones in 'lightgray'
# Note: The %%opts cell magic *must* appear at the top of the code cell!
total = hv.Spikes(total_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Total')
partial = hv.Spikes(partial_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Partial')
(total * partial).redim.label(hour_local='Local time (hour)')
%%opts Spikes.Total_Eclipses (line_dash='solid' color='black')
%%opts Spikes.Partial_Eclipses (line_dash='solid' color='lightgray')
# Optional Exercise: Try differentiating the two sets of spikes by group and not label
# Note: The %%opts cell magic *must* appear at the top of the code cell!
total = hv.Spikes(total_eclipses, kdims=['hour_local'], vdims=[], group='Total_Eclipses')
partial = hv.Spikes(partial_eclipses, kdims=['hour_local'], vdims=[], group='Partial_Eclipses')
(total * partial).redim.label(hour_local='Local time (hour)')
Explanation: Using these options to distinguish between the two categories of data with the same type, you can now see clear patterns of grouping between the two types, with many more total eclipses around noon in local time. Similar techniques can be used to provide arbitrarily specific customizations when needed.
End of explanation |
12,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported
Step1: You can do a lot with the numpy module. Below is an example to jog your memory
Step2: Do you remember the Fibonacci sequence from yesterday's Lecture 1? Let's define our own function that will help us to write the Fibonacci sequence.
Step3: Remember loops too? Let's get the first 10 numbers in the Fibonacci sequence.
Step4: There's your quick review of numpy and functions along with a while loop thrown in. Now we can move on to the content of Lecture 3.
Lecture 3 - Distributions, Histograms, and Curve Fitting
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following
Step5: Let's generate a vector of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
Step6: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array
Step7: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve; unlike rand, these values are not confined to the interval [0,1) but are centered around 0.
The equation for a Gaussian curve is the following
Step8: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation
Step9: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
Step10: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
Step11: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT
Step12: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
Step13: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
Step14: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
Step15: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
Step16: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
Step17: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http
Step18: Next, plot a histogram of this data set (play around with the number of bins, too).
Step19: Now, calculate and print the mean and standard deviation of this distribution.
Step20: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
E. Basic curve fitting
In this section, we're going to introduce you to the Python module known as scipy (short for Scientific Python).
scipy allows you to perform a range of functions such as numerical integration and optimization. In particular, it's useful for data analysis, which we shall see shortly. In particular, we will do curve fitting using curve_fit from scipy.optimize.
Curve fitting documentation
Step21: We will show you an example, and then you get to try it out for yourself!
We start by creating an equally-spaced numpy array x consisting of 100 numbers from -5 to 5. Try it out yourself below.
Step22: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
Step23: We will then vectorize the function to allow it to act on all elements of an array at once. Magic!
Step24: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Important question
Step25: Let's see what the y values look like now
Step26: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
We can treat this y_noisy as data values that we want to fit with a parabolic function. To do this, we first need to define the general form of a quadratic function
Step27: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
We do this using the curve_fit function in the following way
Step28: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
Step29: And we can also compare y_fitted to the original y values without any noise
Step30: Not a bad job for your first fit function!
F. More advanced curve fitting
In this section, you will visualize real data and plot a best-fit function to model the underlying physics.
You just used curve_fit above to fit simulated data to a linear function. Using that code as your guide, combined with the steps below, you will use curve_fit to fit your real data to a non-linear function that you define. This exercise will combine most of what you've learned so far!
Steps for using curve_fit
Here is the basic outline on how to use curve_fit. As this is the last section, you will mostly be on your own. Try your best with new skills you have learned here and feel free to ask for help!
1) Load in your x and y data. You will be using "photopeak.txt", which is in the folder Data.
HINT 1
Step31: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
Step32: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
Based on what you think, define your function below. | Python Code:
import numpy as np
Explanation: Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported:
End of explanation
np.linspace(0,10,11)
Explanation: You can do a lot with the numpy module. Below is an example to jog your memory:
End of explanation
def myFib(a,b):
return a+b
Explanation: Do you remember the Fibonacci sequence from yesterday's Lecture 1? Let's define our own function that will help us to write the Fibonacci sequence.
End of explanation
fibLength = 10 #the length we want for our Fibonacci sequence
fibSeq = np.zeros(fibLength) #make a numpy array of 10 zeros
# Let's define the first 2 elements of the Fibonacci sequence
fibSeq[0] = 0
fibSeq[1] = 1
i = 2 #with the first 2 elements defined, we can calculate the rest of the sequence beginning with the 3rd element
while i < fibLength: #equivalent to i-1 < fibLength-1, just easier to read
nextFib = myFib(fibSeq[i-2],fibSeq[i-1])
fibSeq[i] = nextFib
i = i + 1
print(fibSeq)
Explanation: Remember loops too? Let's get the first 10 numbers in the Fibonacci sequence.
End of explanation
import numpy as np
Explanation: There's your quick review of numpy and functions along with a while loop thrown in. Now we can move on to the content of Lecture 3.
Lecture 3 - Distributions, Histograms, and Curve Fitting
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following:
the uniform distribution: np.random.rand
the normal (Gaussian) distribution: np.random.randn
(notice the "n" that distinguishes the functions for generating normal vs. uniform distributions)
A. Generating distributions
Let's start with the uniform distribution (rand), which gives numbers uniformly distributed over the interval [0,1).
If you haven't already, import the numpy module.
End of explanation
np.random.rand(5)
Explanation: Let's generate a vector of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
End of explanation
np.random.rand(5,5)
Explanation: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array:
End of explanation
np.random.randn(5)
Explanation: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve; unlike rand, these values are not confined to the interval [0,1) but are centered around 0.
The equation for a Gaussian curve is the following:
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$
where $\mu$ is the mean and $\sigma$ is the standard deviation.
Don't worry about memorizing this equation, but do know that it exists and that numbers can be randomly drawn from it.
In python, the command np.random.randn selects numbers from the "standard" normal distribution.
All this means is that, in the equation above, $\mu$ (mean) = 0 and $\sigma$ (standard deviation) = 1. randn takes the size of the output as an argument just like rand does.
Try running the cell below to see the numbers you get from a normal distribution.
End of explanation
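If you are curious how the formula above relates to randn, here is a small optional sketch that overlays the analytic curve (with mu = 0 and sigma = 1) on a normalized histogram of samples. It assumes a recent Matplotlib, where plt.hist accepts density=True.
import numpy as np
import matplotlib.pyplot as plt
samples = np.random.randn(5000) # draws from the standard normal distribution
xs = np.linspace(-4, 4, 200)
pdf = np.exp(-xs**2/2)/np.sqrt(2*np.pi) # the Gaussian formula with mu=0, sigma=1
plt.hist(samples, bins=50, density=True) # density=True scales the bar areas to integrate to 1
plt.plot(xs, pdf, 'r')
plt.show()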
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation: http://matplotlib.org/1.2.1/api/pyplot_api.html?highlight=hist#matplotlib.pyplot.hist
Understanding distributions is perhaps best done by plotting them in a histogram. Lucky for us, matplotlib makes that very simple for us.
To make a histogram, we use the command plt.hist, which takes -- at minimum -- a vector of values that we want to plot as a histogram. We can also specify the number of bins.
First things first: let's import matplotlib:
End of explanation
#your code here
X = ...
Explanation: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
End of explanation
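One possible solution for the exercise above (any call that draws 5000 uniform samples will do):
X = np.random.rand(5000) # 5000 samples drawn uniformly from [0, 1)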
plt.hist(X, bins=20);
Explanation: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
End of explanation
#your code here
Explanation: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT: You will use a similar format as above when you defined and plotted a uniform distribution.
End of explanation
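One possible solution, mirroring the uniform case but with randn and 50 bins:
X = np.random.randn(5000) # 5000 samples from the standard normal distribution
plt.hist(X, bins=50);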
mu = 5 #the mean of the distribution
sigma = 3 #the standard deviation
X = sigma * np.random.randn(5000) + mu
plt.hist(X,bins=50);
Explanation: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
End of explanation
#write your observations here
Explanation: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
End of explanation
N,bins,patches = plt.hist(X, bins=50)
Explanation: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
End of explanation
bin_avg = (bins[1:]+bins[:-1])/2
plt.plot(bin_avg, N, 'r*')
plt.show()
Explanation: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
End of explanation
mean = np.mean(X)
std = np.std(X)
print('mean: '+ repr(mean) )
print('standard deviation: ' + repr(std))
Explanation: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
End of explanation
lifetimes = np.loadtxt('Data/LifetimeData.txt')
Explanation: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http://www.nature.com/articles/ncomms11820 if you are so inclined. This data is from Fig. 6a).
Do you remember learning how to import data in yesterday's Lecture 2? The command you want to use is np.loadtxt. The data we'll be working with is called LifetimeData.txt, and it's located in the Data folder.
End of explanation
#your code here
Explanation: Next, plot a histogram of this data set (play around with the number of bins, too).
End of explanation
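One possible solution (the bin count here is just a reasonable starting choice):
plt.hist(lifetimes, bins=50); # histogram of the measured fluorescence lifetimes in ns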
#your code here
Explanation: Now, calculate and print the mean and standard deviation of this distribution.
End of explanation
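One possible solution, reusing np.mean and np.std exactly as before:
print('mean: ' + repr(np.mean(lifetimes)))
print('standard deviation: ' + repr(np.std(lifetimes)))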
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
E. Basic curve fitting
In this section, we're going to introduce you to the Python module known as scipy (short for Scientific Python).
scipy allows you to perform a range of functions such as numerical integration and optimization. In particular, it's useful for data analysis, which we shall see shortly. In particular, we will do curve fitting using curve_fit from scipy.optimize.
Curve fitting documentation: https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.curve_fit.html
In this section, you will learn how to use curve fitting on simulated data. Next will be real data!
First, let's load the modules.
End of explanation
# your code here
Explanation: We will show you an example, and then you get to try it out for yourself!
We start by creating an equally-spaced numpy array x consisting of 100 numbers from -5 to 5. Try it out yourself below.
End of explanation
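One possible solution using np.linspace:
x = np.linspace(-5, 5, 100) # 100 equally spaced points from -5 to 5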
# your code here
Explanation: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
End of explanation
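One possible solution for f_scalar:
def f_scalar(x): return x**2/3 + 3 # f(x) = (1/3)x^2 + 3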
f = np.vectorize(f_scalar)
y = f(x)
Explanation: We will then vectorize the function to allow it to act on all elements of an array at once. Magic!
End of explanation
# your code here
Explanation: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Important question: What value for the array size should we pass into this function?
End of explanation
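One possible solution; the size passed to np.random.rand should match the length of y, which is 100 here:
y_noisy = y + np.random.rand(len(y)) # one uniform noise value added to each element of y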
plt.plot(x,y_noisy)
Explanation: Let's see what the y values look like now
End of explanation
def quadratic(x,a,b,c):
return a*x**2 + b*x + c
Explanation: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
We can treat this y_noisy as data values that we want to fit with a parabolic function. To do this, we first need to define the general form of a quadratic function:
End of explanation
optimal_values, _ = curve_fit(quadratic,x,y_noisy)
a = optimal_values[0]
b = optimal_values[1]
c = optimal_values[2]
print(a, b, c)
Explanation: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
We do this using the curve_fit function in the following way:
curve_fit(f,xdata,ydata)
where f is the model we're fitting to (quadratic in this case).
This function will return the optimal values for a, b, and c in a list. Try it out!
End of explanation
y_fitted = quadratic(x,a,b,c)
plt.plot(x,y_fitted)
plt.plot(x,y_noisy)
Explanation: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
End of explanation
plt.plot(x,y_fitted)
plt.plot(x,y)
Explanation: And we can also compare y_fitted to the original y values without any noise:
End of explanation
# Step 1: Import the data
# Step 2: Plot the data to see what it looks like
data = np.loadtxt("Data/photopeak.txt")
x_data = data[:,0]
y_data = data[:,1]
plt.scatter(x_data, y_data)
Explanation: Not a bad job for your first fit function!
F. More advanced curve fitting
In this section, you will visualize real data and plot a best-fit function to model the underlying physics.
You just used curve_fit above to fit simulated data to a linear function. Using that code as your guide, combined with the steps below, you will use curve_fit to fit your real data to a non-linear function that you define. This exercise will combine most of what you've learned so far!
Steps for using curve_fit
Here is the basic outline on how to use curve_fit. As this is the last section, you will mostly be on your own. Try your best with new skills you have learned here and feel free to ask for help!
1) Load in your x and y data. You will be using "photopeak.txt", which is in the folder Data.
HINT 1: When you load your data, I recommend making use of the usecols and unpack argument.
HINT 2: Make sure the arrays are the same length!
2) Plot this data to see what it looks like. Determine the function your data most resembles.
3) Define the function to which your data will be fit.
4) PART A: Use curve_fit and point the output to popt and pcov. These are the fitted parameters (popt) and their estimated errors (pcov).
4) PART B - OPTIONAL (only do this if you get through all the other steps): Input a guess (p0) and bounds (bounds) into curve_fit. For p0, I would suggest [0.5, 0.1, 5].
5) Pass the popt parameters into the function you've defined to create the model fit.
6) Plot your data and your fitted function.
7) Pat yourself on the back!
End of explanation
from IPython.display import display, Image
display(Image(filename='Data/photopeak.png'))
Explanation: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
End of explanation
# Step 3: Define your function here
def gaussian(x,A,mu,sigma):
pi = np.pi
return A/np.sqrt(2*pi*sigma**2)*np.exp(-(x-mu)**2/(2*sigma**2))
# Step 3.5: SANITY CHECK! Use this step as a way to check that the function you defined above is mathematically correct.
x = np.linspace(-8,12,100)
A = 1
mu = 2
sigma = 3
y = gaussian(x,A,mu,sigma)
plt.plot(x,y)
# Step 4: Use curve_fit to generate your output parameters
popt,pcov = curve_fit(gaussian,x_data,y_data)
# Step 5: Generate your model fit
y_fitted = gaussian(x_data,popt[0],popt[1],popt[2])
plt.scatter(x_data,y_fitted)
plt.scatter(x_data,y_data)
# Step 6: Plot the best fit function and the scatter plot of data
Explanation: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
Based on what you think, define your function below.
End of explanation |
12,437 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I'm using tensorflow 2.10.0. | Problem:
import tensorflow as tf
A = tf.constant([-0.5, -0.1, 0, 0.1, 0.5, 2], dtype=tf.float32)
def g(A):
return tf.math.reciprocal(A)
result = g(A.__copy__()) |
12,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Maps
inspiration from https
Step1: the example in the blog post is for a heat map of the amount of time in between events in a sequence.
Step2: Our data file includes equal intervals, so this approach is worthless
Step3: However, perhaps the axes could reflect time of day and day of week to show mean values for those instead.
Step4: This data file is every 30 mins... | Python Code:
import os; os.sys.path.append(os.path.dirname(os.path.abspath('.'))) # for relative imports
from utils.nab_data import NABData
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
data = NABData()
data.summary().head()
data.data.keys()
data.plot('nyc_taxi')
Explanation: Time Maps
inspiration from https://districtdatalabs.silvrback.com/time-maps-visualizing-discrete-events-across-many-timescales
End of explanation
import scipy.ndimage as ndi
df = data['nyc_taxi']
times = df.index
# calculate time differences:
diffs = np.diff(times)
xcoords = diffs[:-1] # all differences except the last
ycoords = diffs[1:] # all differences except the first
Nside=256 # this is the number of bins along x and y for the histogram
width=8 # the width of the Gaussian function along x and y when applying the blur operation
H = np.zeros((Nside,Nside)) # a 'histogram' matrix that counts the number of points in each grid-square
max_diff = np.max(diffs) # maximum time difference
x_heat = (Nside-1)*xcoords/max_diff # the xy coordinates scaled to the size of the matrix
y_heat = (Nside-1)*ycoords/max_diff # subtract 1 since Python starts counting at 0, unlike Fortran and R
for i in range(len(xcoords)): # loop over all points to calculate the population of each bin
H[int(x_heat[i]), int(y_heat[i])] += 1 # Increase count by 1
#take the integer part of x/y_heat[i] explicitly; modern numpy requires integer indices
H = ndi.gaussian_filter(H,width) # apply Gaussian blur
H = np.transpose(H) # so that the orientation is the same as the scatter plot
plt.imshow(H, origin='lower') # display H as an image
plt.show()
Explanation: the example in the blog post is for a heat map of the amount of time in between events in a sequence.
End of explanation
len(set(np.diff(data['nyc_taxi'].index)))
Explanation: Our data file includes equal intervals, so this approach is worthless
End of explanation
df.head()
Explanation: However, perhaps the axes could reflect time of day and day of week to show mean values for those instead.
End of explanation
df.index[0].weekday()
np.unique(df.index.map(lambda x: x.weekday()))
days_legend = dict(zip(range(7), ['Mon','Tue','Wed','Thur','Fri','Sat','Sun']))
days_legend
day_masks = dict(zip(range(7), [df.index.map(lambda x: x.weekday() == d) for d in range(7)]))
df.loc[day_masks[0]].shape
times = np.unique(df.index.map(lambda x: x.time()))
tindex = df.index.map(lambda x: x.time())
time_masks = dict(zip(times, [(tindex == t) for t in times]))
k = list(time_masks.keys())[0]
print(k)
print(df.loc[time_masks[k]].shape)
dtmap = pd.DataFrame(np.zeros((len(time_masks), len(day_masks))), index=sorted(times), columns=range(7))
print(dtmap.shape)
fn = np.mean
for day, daymask in day_masks.iteritems():
for time, timemask in time_masks.iteritems():
val = fn(df.loc[daymask & timemask])
dtmap.loc[time, day] = val.values
dtmap.head()
fig = plt.figure(figsize=(5, 15))
plt.imshow(dtmap, origin='lower') # display H as an image
plt.yticks(range(dtmap.shape[0])[::4], sorted(times)[::4])
plt.xticks(range(7), [days_legend[x] for x in range(7)], rotation=60)
plt.colorbar()
plt.show()
Explanation: This data file is every 30 mins...
End of explanation |
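A quick check of the spacing (assuming the index parsed as datetimes) confirms the 30-minute interval:
np.unique(np.diff(df.index)) # should return a single 30-minute timedelta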
12,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MH
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
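Every property cell that follows has the same shape: a pre-filled DOC.set_id(...) call that must not be edited, followed by one or more DOC.set_value(...) calls supplied by the author. A minimal sketch of a completed cell, reusing the DOC object created above (the value is a placeholder, not an actual NORESM2-MH entry):
# Sketch of the general fill-in pattern (placeholder value only)
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')  # pre-filled in each cell, do not edit
DOC.set_value("Placeholder model name")                       # author-supplied answer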
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
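For illustration only, a completed authors cell could look like the line below; the name and e-mail address are hypothetical placeholders, not the actual document authors:
# Hypothetical example - replace with the real author name and e-mail
DOC.set_author("Jane Doe", "jane.doe@example.org")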
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
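Once every required property has been filled in, the status flag defined above (0=do not publish, 1=publish) would be switched to 1; a one-line sketch:
# Mark the document as ready for publication
DOC.set_publication_status(1)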
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
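STRING properties such as this overview accept free text, typically a short paragraph. A hypothetical sketch with placeholder wording (not the real NORESM2-MH description):
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# Placeholder overview text - replace with a description of the actual atmosphere component
DOC.set_value("Placeholder: brief description of the atmosphere component, its dynamical core and physics suite.")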
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
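ENUM properties with cardinality 1.1 take exactly one entry from the valid choices listed in the cell. A hypothetical sketch, picking "AGCM" purely for illustration:
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# One of the valid choices listed above; "AGCM" is an example, not a statement about NORESM2-MH
DOC.set_value("AGCM")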
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
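ENUM properties with cardinality 1.N can carry several of the valid choices. The sketch below assumes that repeated DOC.set_value calls each record one value for a multi-valued property; the approximations shown are examples only:
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# Assumed pattern for 1.N properties: one set_value call per selected choice (example values)
DOC.set_value("hydrostatic")
DOC.set_value("primitive equations")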
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
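INTEGER properties are set with an unquoted number, as the DOC.set_value(value) template above indicates. A sketch with a placeholder count:
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
DOC.set_value(32)  # placeholder, not the actual NORESM2-MH level count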
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
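BOOLEAN properties take an unquoted True or False. A sketch, with the value chosen arbitrarily for illustration:
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
DOC.set_value(False)  # illustrative only; set according to the actual model top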
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
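Properties with cardinality 0.N, like this one, are optional: the calls can be omitted entirely if no ozone depleting substances are treated explicitly. If values are recorded, the same one-call-per-choice pattern is assumed; the species below are examples only:
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# Optional (0.N) - omit if not applicable; example species only
DOC.set_value("CFC-11")
DOC.set_value("CFC-12")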
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
12,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Agents
Step2: Conway's Game of Life
A simple agent model is Conway's Game of Life, which is an example of cellular automata. The cells of a two-dimensional square grid are either "dead" or "alive". At each iteration, each cell checks its neighbours (including diagonals | Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
Explanation: Agents: Lab 1
End of explanation
%matplotlib inline
import numpy
from matplotlib import pyplot, animation
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
from __future__ import division
def conway_iteration(grid):
Take one iteration of Conway's game of life.
Parameters
----------
grid : array
(N+2) x (N+2) numpy array representing the grid (1: live, 0: dead)
# Code to go here
return grid
# Try the loaf - this is static
grid_loaf = numpy.array([[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,1,0,0,1,0,0],
[0,0,0,1,0,1,0,0],
[0,0,0,0,1,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0]])
fig = pyplot.figure()
im = pyplot.imshow(grid_loaf[1:-1,1:-1], cmap=pyplot.get_cmap('gray'))
def init():
im.set_array(grid_loaf[1:-1,1:-1])
return im,
def animate(i):
conway_iteration(grid_loaf)
im.set_array(grid_loaf[1:-1,1:-1])
return im,
# This will only work if you have ffmpeg installed
anim = animation.FuncAnimation(fig, animate, init_func=init, interval=50, frames=10, blit=True)
HTML(anim.to_html5_video())
Explanation: Conway's Game of Life
A simple agent model is Conway's Game of Life, which is an example of cellular automata. The cells of a two-dimensional square grid are either "dead" or "alive". At each iteration, each cell checks its neighbours (including diagonals: each cell has 8 neighbours).
Any live cell with fewer than two live neighbours dies ("under-population")
Any live cell with two or three live neighbours lives ("survival")
Any live cell with four or more live neighbours dies ("over-population")
Any dead cell with exactly three live neighbours lives ("reproduction")
At the boundaries of the grid periodic boundary conditions are imposed.
Write a function that takes a numpy array representing the grid. Test it on some of the standard example patterns. The matplotlib imshow and FuncAnimation functions may help; if running in the notebook, the instructions on installing and using ffmpeg and html5 may also be useful.
End of explanation |
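For reference, one possible way to complete conway_iteration (a sketch under stated assumptions, not the intended lab solution) is to count neighbours with numpy.roll, which wraps around the array edges and so gives periodic boundary conditions directly. Note that this sketch treats the whole array periodically rather than using the ghost-cell layout implied by the (N+2) x (N+2) shape above.
import numpy

def conway_iteration_sketch(grid):
    # Count the 8 neighbours of every cell; numpy.roll wraps at the edges,
    # which implements the periodic boundary conditions.
    neighbours = sum(numpy.roll(numpy.roll(grid, dx, axis=0), dy, axis=1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Alive next step: exactly 3 live neighbours (reproduction, or survival of a
    # live cell), or a live cell with exactly 2 live neighbours (survival).
    new_grid = numpy.where((neighbours == 3) | ((grid == 1) & (neighbours == 2)), 1, 0)
    grid[:, :] = new_grid  # update in place, as the animation code above relies on mutation
    return grid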
12,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.
Step1: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
Step2: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six planes.
Step4: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step5: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step6: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
Step7: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
Step8: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
Step9: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
Step10: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
Step11: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step12: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
Step13: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-eff as
Step14: Notice that even though the neutron production rate, absorption rate, and current are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
Often in textbooks you'll see k-eff represented using the six-factor formula $$k_{eff} = p \epsilon f \eta P_{FNL} P_{TNL}.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T + \langle L \rangle_T}{\langle\Sigma_a\phi\rangle + \langle L \rangle_T}$$ where the subscript $T$ means thermal energies.
Step15: The fast fission factor can be calculated as
$$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
Step16: The thermal flux utilization is calculated as
$$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$
where the superscript $F$ denotes fuel.
Step17: The next factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
Step18: There are two leakage factors to account for fast and thermal leakage. The fast non-leakage probability is computed as $$P_{FNL} = \frac{\langle \Sigma_a\phi \rangle + \langle L \rangle_T}{\langle \Sigma_a \phi \rangle + \langle L \rangle}$$
Step19: The final factor is the thermal non-leakage probability and is computed as $$P_{TNL} = \frac{\langle \Sigma_a\phi \rangle_T}{\langle \Sigma_a \phi \rangle_T + \langle L \rangle_T}$$
Step20: Now we can calculate $k_{eff}$ using the product of the factors from the six-factor formula.
Step21: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
Step22: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
Step23: The same idea can be used not only for scores but also for filters and nuclides.
Step24: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format. | Python Code:
import glob
from IPython.display import Image
import numpy as np
import openmc
Explanation: This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
# Use both reflective and vacuum boundaries to make life interesting
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-100., boundary_type='vacuum')
max_z = openmc.ZPlane(z0=+100., boundary_type='vacuum')
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six planes.
End of explanation
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 20
inactive = 5
particles = 2500
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -100., 0.63, 0.63, 100.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
End of explanation
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [1.26, 1.26]
plot.pixels = [250, 250]
plot.color_by = 'material'
# Instantiate a Plots collection and export to "plots.xml"
plot_file = openmc.Plots([plot])
plot_file.export_to_xml()
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
# Run openmc in plotting mode
openmc.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
End of explanation
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Create Tallies to compute microscopic multi-group cross-sections
# Instantiate energy filter for multi-group cross-section Tallies
energy_filter = openmc.EnergyFilter([0., 0.625, 20.0e6])
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='flux')
tally.filters = [openmc.CellFilter([fuel_cell, moderator_cell])]
tally.filters.append(energy_filter)
tally.scores = ['flux']
tallies_file.append(tally)
# Instantiate reaction rate Tally in fuel
tally = openmc.Tally(name='fuel rxn rates')
tally.filters = [openmc.CellFilter(fuel_cell)]
tally.filters.append(energy_filter)
tally.scores = ['nu-fission', 'scatter']
tally.nuclides = ['U238', 'U235']
tallies_file.append(tally)
# Instantiate reaction rate Tally in moderator
tally = openmc.Tally(name='moderator rxn rates')
tally.filters = [openmc.CellFilter(moderator_cell)]
tally.filters.append(energy_filter)
tally.scores = ['absorption', 'total']
tally.nuclides = ['O16', 'H1']
tallies_file.append(tally)
# Instantiate a tally mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [1, 1, 1]
mesh.lower_left = [-0.63, -0.63, -100.]
mesh.width = [1.26, 1.26, 200.]
meshsurface_filter = openmc.MeshSurfaceFilter(mesh)
# Instantiate thermal, fast, and total leakage tallies
leak = openmc.Tally(name='leakage')
leak.filters = [meshsurface_filter]
leak.scores = ['current']
tallies_file.append(leak)
thermal_leak = openmc.Tally(name='thermal leakage')
thermal_leak.filters = [meshsurface_filter, openmc.EnergyFilter([0., 0.625])]
thermal_leak.scores = ['current']
tallies_file.append(thermal_leak)
fast_leak = openmc.Tally(name='fast leakage')
fast_leak.filters = [meshsurface_filter, openmc.EnergyFilter([0.625, 20.0e6])]
fast_leak.scores = ['current']
tallies_file.append(fast_leak)
# K-Eigenvalue (infinity) tallies
fiss_rate = openmc.Tally(name='fiss. rate')
abs_rate = openmc.Tally(name='abs. rate')
fiss_rate.scores = ['nu-fission']
abs_rate.scores = ['absorption']
tallies_file += (fiss_rate, abs_rate)
# Resonance Escape Probability tallies
therm_abs_rate = openmc.Tally(name='therm. abs. rate')
therm_abs_rate.scores = ['absorption']
therm_abs_rate.filters = [openmc.EnergyFilter([0., 0.625])]
tallies_file.append(therm_abs_rate)
# Thermal Flux Utilization tallies
fuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')
fuel_therm_abs_rate.scores = ['absorption']
fuel_therm_abs_rate.filters = [openmc.EnergyFilter([0., 0.625]),
openmc.CellFilter([fuel_cell])]
tallies_file.append(fuel_therm_abs_rate)
# Fast Fission Factor tallies
therm_fiss_rate = openmc.Tally(name='therm. fiss. rate')
therm_fiss_rate.scores = ['nu-fission']
therm_fiss_rate.filters = [openmc.EnergyFilter([0., 0.625])]
tallies_file.append(therm_fiss_rate)
# Instantiate energy filter to illustrate Tally slicing
fine_energy_filter = openmc.EnergyFilter(np.logspace(np.log10(1e-2), np.log10(20.0e6), 10))
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='need-to-slice')
tally.filters = [openmc.CellFilter([fuel_cell, moderator_cell])]
tally.filters.append(fine_energy_filter)
tally.scores = ['nu-fission', 'scatter']
tally.nuclides = ['H1', 'U238']
tallies_file.append(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
End of explanation
# Run OpenMC!
openmc.run()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint('statepoint.20.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
End of explanation
# Get the fission and absorption rate tallies
fiss_rate = sp.get_tally(name='fiss. rate')
abs_rate = sp.get_tally(name='abs. rate')
# Get the leakage tally
leak = sp.get_tally(name='leakage')
leak = leak.summation(filter_type=openmc.MeshSurfaceFilter, remove_filter=True)
# Compute k-infinity using tally arithmetic
keff = fiss_rate / (abs_rate + leak)
keff.get_pandas_dataframe()
Explanation: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-eff as:
$$k_{eff} = \frac{\langle \nu \Sigma_f \phi \rangle}{\langle \Sigma_a \phi \rangle + \langle L \rangle}$$
In this notation, $\langle \cdot \rangle^a_b$ represents an OpenMC tally that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively.
End of explanation
# Compute resonance escape probability using tally arithmetic
therm_abs_rate = sp.get_tally(name='therm. abs. rate')
thermal_leak = sp.get_tally(name='thermal leakage')
thermal_leak = thermal_leak.summation(filter_type=openmc.MeshSurfaceFilter, remove_filter=True)
res_esc = (therm_abs_rate + thermal_leak) / (abs_rate + thermal_leak)
res_esc.get_pandas_dataframe()
Explanation: Notice that even though the neutron production rate, absorption rate, and current are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
Often in textbooks you'll see k-eff represented using the six-factor formula $$k_{eff} = p \epsilon f \eta P_{FNL} P_{TNL}.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T + \langle L \rangle_T}{\langle\Sigma_a\phi\rangle + \langle L \rangle_T}$$ where the subscript $T$ means thermal energies.
End of explanation
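For context, the "first-order estimate" mentioned above is the standard uncorrelated propagation result, stated here as background rather than something computed explicitly in this notebook: for a derived quantity $R = X/Y$ built from estimates $X$ and $Y$ that are treated as independent,
$$\left(\frac{\sigma_R}{R}\right)^2 \approx \left(\frac{\sigma_X}{X}\right)^2 + \left(\frac{\sigma_Y}{Y}\right)^2,$$
which is presumably what the tally arithmetic applies bin by bin under the independence assumption stated at the start of the notebook.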
# Compute fast fission factor using tally arithmetic
therm_fiss_rate = sp.get_tally(name='therm. fiss. rate')
fast_fiss = fiss_rate / therm_fiss_rate
fast_fiss.get_pandas_dataframe()
Explanation: The fast fission factor can be calculated as
$$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
End of explanation
# Compute thermal flux utilization factor using tally arithmetic
fuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')
therm_util = fuel_therm_abs_rate / therm_abs_rate
therm_util.get_pandas_dataframe()
Explanation: The thermal flux utilization is calculated as
$$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$
where the superscript $F$ denotes fuel.
End of explanation
# Compute neutrons produced per absorption (eta) using tally arithmetic
eta = therm_fiss_rate / fuel_therm_abs_rate
eta.get_pandas_dataframe()
Explanation: The next factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
End of explanation
p_fnl = (abs_rate + thermal_leak) / (abs_rate + leak)
p_fnl.get_pandas_dataframe()
Explanation: There are two leakage factors to account for fast and thermal leakage. The fast non-leakage probability is computed as $$P_{FNL} = \frac{\langle \Sigma_a\phi \rangle + \langle L \rangle_T}{\langle \Sigma_a \phi \rangle + \langle L \rangle}$$
End of explanation
p_tnl = therm_abs_rate / (therm_abs_rate + thermal_leak)
p_tnl.get_pandas_dataframe()
Explanation: The final factor is the thermal non-leakage probability and is computed as $$P_{TNL} = \frac{\langle \Sigma_a\phi \rangle_T}{\langle \Sigma_a \phi \rangle_T + \langle L \rangle_T}$$
End of explanation
keff = res_esc * fast_fiss * therm_util * eta * p_fnl * p_tnl
keff.get_pandas_dataframe()
Explanation: Now we can calculate $k_{eff}$ using the product of the factors from the six-factor formula.
End of explanation
# Compute microscopic multi-group cross-sections
flux = sp.get_tally(name='flux')
flux = flux.get_slice(filters=[openmc.CellFilter], filter_bins=[(fuel_cell.id,)])
fuel_rxn_rates = sp.get_tally(name='fuel rxn rates')
mod_rxn_rates = sp.get_tally(name='moderator rxn rates')
fuel_xs = fuel_rxn_rates / flux
fuel_xs.get_pandas_dataframe()
Explanation: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
End of explanation
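Regarding the larger standard deviation noted above: one way to see it (treating the factors as independent, which they are not exactly, since several of them share the same underlying tallies) is that relative variances add across a product,
$$\left(\frac{\sigma_{k}}{k}\right)^2 \approx \sum_{i=1}^{6}\left(\frac{\sigma_{F_i}}{F_i}\right)^2,$$
so assembling $k_{eff}$ from six separately estimated factors accumulates more propagated uncertainty than the single direct ratio, even though the means agree.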
# Show how to use Tally.get_values(...) with a CrossScore
nu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])
print(nu_fiss_xs)
Explanation: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
End of explanation
# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide
u235_scatter_xs = fuel_xs.get_values(nuclides=['(U235 / total)'],
scores=['(scatter / flux)'])
print(u235_scatter_xs)
# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore
fast_scatter_xs = fuel_xs.get_values(filters=[openmc.EnergyFilter],
filter_bins=[((0.625, 20.0e6),)],
scores=['(scatter / flux)'])
print(fast_scatter_xs)
Explanation: The same idea can be used not only for scores but also for filters and nuclides.
End of explanation
# "Slice" the nu-fission data into a new derived Tally
nu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])
nu_fission_rates.get_pandas_dataframe()
# "Slice" the H-1 scatter data in the moderator Cell into a new derived Tally
need_to_slice = sp.get_tally(name='need-to-slice')
slice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H1'],
filters=[openmc.CellFilter], filter_bins=[(moderator_cell.id,)])
slice_test.get_pandas_dataframe()
Explanation: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.
End of explanation |
12,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fantasy (first steps)
This shows how to use Pandas to manipulate roster data.
$\rightarrow$ Use [Control] + [Enter] to evaluate a cell. (Check the 'Help' menu above for more.)
Step1: First we need to load the data
It's the total 2014 stats for people who started in NFL games last year. Thanks to TeamRankings.com. We're missing the count of safeties but hopefully they're so infrequent they don't affect rankings.
Step2: Column definitions
Team - the full team name, exactly the same as in the other datasets
player - the player name, plus team totals (TOTAL) and opponent totals (OPPONENT TOTAL)
pos - abbreviation for positions, such as QB=quarterback, K=kicker
starts - number of starts in the 2014 season
fumblesLost - fumbles lost to the opposing team in 2014
fumblesRecoveredTD - fumbles recovered for a touchdown in 2014
twoPt - two point conversion
Passing
passingATT - attempted passes in 2014
passingCOMP - completed passes in 2014
passingINT - intercepted passes in 2014
passingTD - passing touchdowns in 2014
passingYDS - passing yards in 2014
Receiving
receivingREC - receptions in 2014
receivingTD - touchdowns made off of a reception in 2014
receivingYDS - receiving yards in 2014
Rushing
rushingATT - rushing attempts in 2014
rushingTD - rushing touchdowns in 2014
rushingYDS - rushing yards in 2014
Kicking
kicking_extraPt - extra points made in 2014
kicking_FGge50A - field goals $\ge$ 50 yards attempted
kicking_FGge50M - field goals $\ge$ 50 yards made
kicking_FGlt50A - field goals $\lt$ 50 yards attempted
kicking_FGlt50M - field goals $\lt$ 50 yards made
Defense
defenseF - fumbles forced in 2014
defenseSCK - sacks in 2014
defenseTOTAL - tackles in 2014
defenseFumblesRecovered - fumbles recovered in 2014
pointreturnsFC - fair catches on point returns in 2014
pointreturnsRETURNS - returns made on point returns in 2014
pointreturnsTD - point returns for a touchdown in 2014
interceptionsINT - interceptions in 2014
interceptionsTD - interceptions for a touchdown in 2014
interceptionsYDS - yards gained on interceptions in 2014
kickreturnsRETURNS - kick returns in 2014
kickreturnsTD - kick returns returned for a touchdown in 2014
kickreturnsYDS - yards gained during kick returns in 2014
Aggregate the fumble recovery data for the defense
In Fantasy Football, you choose a defensive team, not individual players.
The dataset we have has rows in the column player as 'TOTAL' and 'OPPONENT TOTAL' for total defensive stats, except for safeties (which I couldn't get easily - but which also are so rare that they shouldn't affect rankings too badly) and fumbles recovered for a touchdown (fumblesRecoveredTD) which were from a separate dataset and not added in.
We need to aggregate fumblesRecoveredTD over the individual players to get a score for the defense.
Step3: Add the fumble recoveries to the overall defensive team's stats
The overall stats are in rows for each team with the player name as TOTAL
Step4: Merge the byteam_fumbleRecoveries stats back into the original stats frame
Add a false player name 'TOTAL' to the byteam_fumbleRecoveries data frame. We will merge on that name plus the team to select the correct row.
Merge byteam_fumbleRecoveries back into the stats data frame, matching on the same team name (and with player as the TOTAL).
Coalesce the aggregated column and the original column together
Delete the column totalFumblesRecoveredTD (which we made for the aggregation).
Step5: Select only the players we want
For standard fantasy, you get 9 players on your starting roster and 6 players on the bench.
The defensive players are rolled into one (we just did that for the last relevant type of scoring play) and we don't care about the other positions.
The parenthesized abbreviations are the corresponding value in the pos column in this dataset
Step7: Ranking the players
...finally!
Fantasy scoring
The scoring shown in the table below uses the NFL fantasy standard rules. Fixed-width text denotes the corresponding column name in the dataset.
<table style="font-size
Step8: Estimate the score
Step9: Clean up, convert to 2015
The stats are for the previous season. Use the roster data ('data/nfl_rosters2015.csv') we have to map the player to the correct team, and add the bye week.
Rename the player 'TOTAL' to the player 'Defense' because that's what it is.
Read in 'data/nfl_rosters2015.csv' and merge it with stats to correctly map players to their 2015 team.
After we merge, we will have pairs of columns that need to be reconciled
Step10: Remove players we can't use
The player's 2015 position is 'Pos'. Delete players who can't be on the fantasy roster.
Look at the positions listed
Set the Team of the defense rows to be the same as in 2014
First, let's add a value 'DEFENSE' as the position when the player is named 'TOTAL'. Then we'll delete all of the players who aren't the kinds we can use.
Step11: Picking strategy
The obvious advice is don't pick a backup player that's got the same bye week as your primary one.
Next, picking order could be
- by straight points (the highest rated player in one of your open slots)
- or by point differential (the biggest drop between this player
The dataset we have now is the total Fantasy points (according to the standard rules) that the player earned in the regular 2014 season.
Step12: Some insight from the differences
The script below prints out the top 15 players in each slot.
- Last year's point totals are in the column 'Points'.
- '(d1)' shows the difference in points between
the player and the next best player.
- '(d10)' shows the worst case points difference if you wait until the
next round and everybody chooses that position.
observations | Python Code:
##
# Setup -- import the modules we want and set up inline plotting
#
from __future__ import print_function
import datetime
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Bigger fonts and figures for the demo
matplotlib.rcParams.update({
'font.size': 14,
'figure.figsize':(10.0, 8.0),
'axes.formatter.useoffset':False })
# Better data frame display for the demo
pd.set_option('expand_frame_repr', True)
pd.set_option('max_rows', 18)
pd.set_option('max_colwidth', 14)
pd.set_option('precision',2)
Explanation: Fantasy (first steps)
This shows how to use Pandas to manipulate roster data.
$\rightarrow$ Use [Control] + [Enter] to evaluate a cell. (Check the 'Help' menu above for more.)
End of explanation
##
# Player data and the season schedule (for byes)
import os
player_stats_file = os.path.join('data', 'nfl_player_stats_season2014.csv')
season2015_file = os.path.join('data', 'nfl_season2015.csv')
prior_seasons_file = os.path.join('data', 'nfl_season2008to2014.csv')
stats = pd.read_csv(player_stats_file)
season2015 = pd.read_csv(season2015_file)
prior_seasons = pd.read_csv(prior_seasons_file) # Don't delete this, we need it later
stats.columns
Explanation: First we need to load the data
It's the total 2014 stats for people who started in NFL games last year. Thanks to TeamRankings.com. We're missing the count of safeties but hopefully they're so infrequent they don't affect rankings.
End of explanation
byteam_fumbleRecoveries = stats.groupby('Team').fumblesRecoveredTD.agg({'totalFumblesRecoveredTD': 'sum'}).reset_index()
byteam_fumbleRecoveries.describe()
Explanation: Column definitions
Team - the full team name, exactly the same as in the other datasets
player - the player name, plus team totals (TOTAL) and opponent totals (OPPONENT TOTAL)
pos - abbreviation for positions, such as QB=quarterback, K=kicker
starts - number of starts in the 2014 season
fumblesLost - fumbles lost to the opposing team in 2014
fumblesRecoveredTD - fumbles recovered for a touchdown in 2014
twoPt - two point conversion
Passing
passingATT - attempted passes in 2014
passingCOMP - completed passes in 2014
passingINT - intercepted passes in 2014
passingTD - passing touchdowns in 2014
passingYDS - passing yards in 2014
Receiving
receivingREC - receptions in 2014
receivingTD - touchdowns made off of a reception in 2014
receivingYDS - receiving yards in 2014
Rushing
rushingATT - rushing attempts in 2014
rushingTD - rushing touchdowns in 2014
rushingYDS - rushing yards in 2014
Kicking
kicking_extraPt - extra points made in 2014
kicking_FGge50A - field goals $\ge$ 50 yards attempted
kicking_FGge50M - field goals $\ge$ 50 yards made
kicking_FGlt50A - field goals $\lt$ 50 yards attempted
kicking_FGlt50M - field goals $\lt$ 50 yards made
Defense
defenseF - fumbles forced in 2014
defenseSCK - sacks in 2014
defenseTOTAL - tackles in 2014
defenseFumblesRecovered - fumbles recovered in 2014
pointreturnsFC - fair catches on point returns in 2014
pointreturnsRETURNS - returns made on point returns in 2014
pointreturnsTD - point returns for a touchdown in 2014
interceptionsINT - interceptions in 2014
interceptionsTD - interceptions for a touchdown in 2014
interceptionsYDS - yards gained on interceptions in 2014
kickreturnsRETURNS - kick returns in 2014
kickreturnsTD - kick returns returned for a touchdown in 2014
kickreturnsYDS - yards gained during kick returns in 2014
Aggregate the fumble recovery data for the defense
In Fantasy Football, you choose a defensive team, not individual players.
The dataset we have has rows in the column player as 'TOTAL' and 'OPPONENT TOTAL' for total defensive stats, except for safeties (which I couldn't get easily - but which also are so rare that they shouldn't affect rankings too badly) and fumbles recovered for a touchdown (fumblesRecoveredTD) which were from a separate dataset and not added in.
We need to aggregate fumblesRecoveredTD over the individual players to get a score for the defense.
End of explanation
stats[stats.Player == 'TOTAL'][[
'Player', 'Team',
'interceptionsINT', 'interceptionsTD',
'pointreturnsTD', 'kickreturnsTD', 'fumblesRecoveredTD',
'defenseSCK', 'defenseF']]
Explanation: Add the fumble recoveries to the overall defensive team's stats
The overall stats are in rows for each team with the player name as TOTAL:
End of explanation
# 1. Add the 'player' column with value 'TOTAL'
byteam_fumbleRecoveries['Player'] = 'TOTAL'
# 2. Merge (left join) to add the 'fumblesRecoveredTD' values to the 'stats' data frame.
stats = stats.merge(byteam_fumbleRecoveries, on=['Team', 'Player'], how='left')
# 3. Coalesce the two columns together into the original column
# (when I am assigning to a subset of rows I have to use the '.ix' accessor.
# The other one -- just [] -- will return a copy of the column and so
# the assignment will take place on the copy, not the original.)
stats.ix[stats.fumblesRecoveredTD.isnull(), 'fumblesRecoveredTD'] = stats[
stats.fumblesRecoveredTD.isnull()].totalFumblesRecoveredTD
# 4. Delete the column totalFumblesRecoveredTD.
del stats['totalFumblesRecoveredTD']
# Show the result
stats[stats.Player=='TOTAL'][['Team'] + [c for c in stats.columns if c.endswith('TD')]]
Explanation: Merge the byteam_fumbleRecoveries stats back into the original stats frame
Add a false player name 'TOTAL' to the byteam_fumbleRecoveries data frame. We will merge on that name plus the team to select the correct row.
Merge byteam_fumbleRecoveries back into the stats data frame, matching on the same team name (and with player as the TOTAL).
Coalesce the aggregated column and the original column together
Delete the column totalFumblesRecoveredTD (which we made for the aggregation).
End of explanation
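As an aside, the "coalesce" in step 3 above can also be written with pandas' Series.combine_first, which fills null entries of one column from another. A tiny self-contained sketch on toy data (not the roster frame):
import pandas as pd

# Toy frame illustrating the coalesce step: nulls in 'a' are filled from 'b'.
toy = pd.DataFrame({'a': [1.0, None, 3.0], 'b': [9.0, 2.0, 9.0]})
toy['a'] = toy['a'].combine_first(toy['b'])
print(toy)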
# These are the positions
stats.pos.unique()
# 1. Add a value 'DEFENSE' as the position when the player is named 'TOTAL'
stats.ix[stats.Player == 'TOTAL', 'pos'] = 'DEFENSE'
# 2. Delete players that aren't the types used in Fantasy
stats = stats[stats.pos.isin(('QB', 'RB', 'WR', 'TE', 'K', 'DEFENSE'))]
print(stats.shape)
stats.head(6)
Explanation: Select only the players we want
For standard fantasy, you get 9 players on your starting roster and 6 players on the bench.
The defensive players are rolled into one (we just did that for the last relevant type of scoring play) and we don't care about the other positions.
The parenthesized abbreviations are the corresponding value in the pos column in this dataset:
Quarterback (QB):1
Running Back (RB):2
Wide Receiver (WR):2
Tight End (TE):1
Wide Receiver / Running Back:1
Kicker (K):1
Defensive Team:1
Bench:6
First, let's add a value 'DEFENSE' as the position when the player is named 'TOTAL'. Then we'll delete all of the players who aren't the kinds we can use.
End of explanation
# 0. Function that will be used in (1) to assign points to each game in a season
def fantasy_points(points_allowed):
Calculate the fantasy score for the season, given the points allowed.
Points: (0) (1-6) (7-13) (14-20) (21-27) (28-34) (35+)
+10 +7 +4 +1 +0 -1 -4
points = 10 * (points_allowed == 0).sum()
points += 7 * points_allowed.between(1,6).sum()
points += 4 * points_allowed.between(7,13).sum()
points += 1 * points_allowed.between(14,20).sum()
points += 0 * points_allowed.between(21,27).sum()
points += -1 * points_allowed.between(28,34).sum()
points += -4 * (points_allowed >= 35).sum()
return points
# 1. Assign each game from 2014 in the `prior_seasons` data frame its respective Fantasy points
defense_fantasy_points = prior_seasons[(prior_seasons.Season == 2014) & (prior_seasons.Week <= 17)
].groupby('Team').PointsAllowed.agg({'fantasyPA' :fantasy_points})
# reset the index (the groupby() made the 'Team' column into the index...get it out)
defense_fantasy_points.reset_index(inplace=True)
# 2 Merge it into the main 'stats' data frame
# 2a -- Add the 'player' column with value 'TOTAL'
defense_fantasy_points['Player'] = 'TOTAL'
# 2b -- Merge the 'defense_fantasy_points' into the 'stats' data frame.
# This will add the column 'fantasyPA' to 'stats'
if 'fantasyPA' in stats.columns:
del stats['fantasyPA'] # in case people run this cell more than once
stats = stats.merge(defense_fantasy_points, on=['Team', 'Player'], how='left')
stats[stats.fantasyPA.notnull()][['Team', 'fantasyPA']].head()
Explanation: Ranking the players
...finally!
Fantasy scoring
The scoring shown in the table below uses the NFL fantasy standard rules. The fixed-width text gives the corresponding column name in the dataset.
<table style="font-size:70%;">
<tr><th>Offense</th><th>Kicking</th><th>Defense</th>
</tr><td style="padding:0;vertical-align:top;">
<ul style="padding-left:1;">
<li>Passing
<ul style="margin-top:0;padding-left:-1;">
<li>Yards: +1 / 25 yds <br/> `passingYDS`
<li>Touchdowns: +4 <br/> `passingTD`
<li>Interceptions: -2 <br/> `passingINT`
</ul>
<li>Rushing
<ul style="margin-top:0;padding-left:-1;">
<li>Yards: +1 / 10 yds <br/> `rushingYDS`
<li>Touchdowns: +6 <br/> `rushingTD`
</ul>
<li>Receiving
<ul style="margin-top:0;padding-left:-1;">
<li>Yards: +1 / 10 yds <br/> `receivingYDS`
<li>Touchdowns: +6 <br/> `receivingTD`
</ul>
<li>Fumbles recovered<br/>for Touchdown: +6 `fumblesRecoveredTD`
<li>2-Point Conversions: +2 <br/> `twoPt`
<li>Fumbles Lost: -2 <br/> `fumblesLost`
</ul>
</td><td style="padding:0;vertical-align:top;">
Point Attempts Made
<ul style="margin-top:0;padding-left:-1;"><li>+1 `kicking_extraPt`</ul>
Field Goals Made
<ul style="margin-top:0;padding-left:-1;">
<li>0-49 yds: +3 <br/> `kicking_FGlt50M`
<li>50+ yds: +5 <br/> `kicking_FGge50M`
</ul>
</td><td style="padding:0;vertical-align:top;">
<ul>
<li>Sacks: +1 <br/> `defenseSCK`
<li>Interceptions: +2 <br/> `interceptionsINT`
<li>Fumbles Recovered: +2 <br/> `defenseF`
<li>Safeties: +2 <br/> **no data**
<li>Defensive Touchdowns: +6 <br/> `interceptionsTD` and the `fumblesRecoveredTD` attributed to defensive players
<li>Kick / Punt Return Touchdowns: +6 <br/> `kickreturnsTD` and `pointreturnsTD`
<li>Points Allowed<br/>
(get by aggregating the 2014 data from `nfl_season2008to2014.csv`)
<ul>
<li>(0): +10
<li>(1-6): +7
<li>(7-13): +4
<li>(14-20): +1
<li>(21-27): +0
<li>(28-34): -1
<li>(35+): -4
</ul>
</ul>
</td><tr>
</table>
Calculate the fantasy points for 'Points Allowed'
We have the game-by-game data from the 2014 season in the dataset prior_seasons. Since all we have is aggregates everywhere else, there's no shame in just aggregating here too.
$\rightarrow$ Assign each game from 2014 in the prior_seasons data frame its respective Fantasy points according to the chart for Points Allowed, and sum them. This will give us one row per team.
Merge it into the main stats data frame the same way as we merged fumblesRecoveredTD
Add the player column with value TOTAL
Merge (left join) to add the 'fantasyPA' values to the 'stats' data frame.
That's it.
End of explanation
# 1. Fill all of the null entries with zero
stats = stats.fillna(0)
# 2. Add `FantasyPtsTotal` to the dataset, set to zero
stats['FantasyPtsTotal'] = 0
# 3. Go through each column separately and add the correct points from each one
### --------------------------------------------- Passing ----- ###
## passingYDS -- Yards: +1 / 25 yds
stats.ix[:, 'FantasyPtsTotal'] += stats.passingYDS // 25
## passingTD -- Touchdowns: +4
stats.ix[:, 'FantasyPtsTotal'] += 4 * stats.passingTD
## passingINT -- Interceptions: -2
stats.ix[:, 'FantasyPtsTotal'] -= 2 * stats.passingINT
### --------------------------------------------- Rushing ----- ###
## rushingYDS -- Yards: +1 / 10 yds
stats.ix[:, 'FantasyPtsTotal'] += stats.rushingYDS // 10
## rushingTD -- Touchdowns: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.rushingTD
### ------------------------------------------- Receiving ----- ###
## receivingYDS -- Yards: +1 / 10 yds
stats.ix[:, 'FantasyPtsTotal'] += stats.receivingYDS // 10
## receivingTD -- Touchdowns: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.receivingTD
### --------------------------------------------- General ----- ###
## fumblesRecoveredTD -- Fumbles recovered for Touchdown: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.fumblesRecoveredTD
## twoPt -- 2-Point Conversions: +2
stats.ix[:, 'FantasyPtsTotal'] += 2 * stats.twoPt
## fumblesLost -- Fumbles Lost: -2
stats.ix[:, 'FantasyPtsTotal'] -= 2 * stats.fumblesLost
### --------------------------------------------- Kicking ----- ###
## kicking_extraPt -- Point Attempts Made: +1
stats.ix[:, 'FantasyPtsTotal'] += 1 * stats.kicking_extraPt
## kicking_FGlt50M -- Field Goals made at 0-49 yds: +3
stats.ix[:, 'FantasyPtsTotal'] += 3 * stats.kicking_FGlt50M
## kicking_FGge50M -- Field Goals made at 50+ yds: +5
stats.ix[:, 'FantasyPtsTotal'] += 5 * stats.kicking_FGge50M
### --------------------------------------------- Defense ----- ###
## defenseSCK -- Sacks: +1
stats.ix[:, 'FantasyPtsTotal'] += 1 * stats.defenseSCK
## interceptionsINT -- Interceptions: +2
stats.ix[:, 'FantasyPtsTotal'] += 2 * stats.interceptionsINT
## defenseF -- Fumbles Recovered: +2
stats.ix[:, 'FantasyPtsTotal'] += 2 * stats.defenseF
## interceptionsTD -- Defensive Touchdowns: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.interceptionsTD
## fumblesRecoveredTD -- Defensive Touchdowns: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.fumblesRecoveredTD
## kickreturnsTD -- Defensive Touchdowns: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.kickreturnsTD
## pointreturnsTD -- Defensive Touchdowns: +6
stats.ix[:, 'FantasyPtsTotal'] += 6 * stats.pointreturnsTD
## ............................ Defense - Points Allowed ..... ###
stats.ix[:, 'FantasyPtsTotal'] += stats.fantasyPA
stats.head(3)
## Take a look at what we made:
# the top 10 people in each position.
#
for position, points in stats.sort(columns='FantasyPtsTotal', ascending=False)[
['Player', 'pos', 'Team', 'FantasyPtsTotal']].groupby('pos'):
print(" ".join(('#', '-' * 50, position, '-' * 3)))
print("Points\tTeam\t\t\tPlayer")
for row in points.head(10).apply(
lambda x: '{}\t{:<20}\t{}'.format(int(x.FantasyPtsTotal), x.Team, x.Player),
axis=1):
print(row)
Explanation: Estimate the score:
Approach:
Fill all of the null entries with zero
Add FantasyPtsTotal to the dataset, set to zero
Go through each column separately and add the correct points from each one
End of explanation
# Rename the player 'TOTAL' to the player 'Defense' because that's what it is.
stats.ix[stats.Player == 'TOTAL', 'Player'] = 'Defense'
# Read in 'data/nfl_rosters2015.csv' and merge it with stats
stats.columns = ['Team2014' if c == 'Team' else c for c in stats.columns]
rosters2015 = pd.read_csv(os.path.join('data', 'nfl_rosters2015.csv'))
rosters2015.head()
full_stats = stats.merge(rosters2015, on='Player', how='outer')
print(full_stats.shape)
print('Total players with fantasy numbers:', full_stats.FantasyPtsTotal.count())
print('Total players on 2015 rosters with fantasy numbers:',
((full_stats.Team.notnull()) & (full_stats.FantasyPtsTotal.notnull())).sum())
# Reconcile 'Team' and 'Team2014' for the player 'Defense'
full_stats.ix[full_stats.Player == 'Defense', 'Team'
] = full_stats[full_stats.Player == 'Defense'].Team2014
# Reconcile 'Pos' (for 2015) and 'pos' (for 2014).
full_stats.ix[full_stats.Pos.isnull(), 'Pos'] = full_stats[full_stats.Pos.isnull()].pos
del full_stats['pos']
# Read in 'nfl_top_100s.csv' and add a column indicating the player's rank
# in the 2015 list that the NFL players vote on.
nfl100 = pd.read_csv(os.path.join('data', 'nfl_top_100s.csv'))
nfl100 = nfl100[nfl100.year == 2015]
nfl100.columns = nfl100.columns.str.capitalize()
nfl100.columns = ['Top100Rank' if c == 'Rank' else c for c in nfl100.columns]
full_stats = full_stats.merge(
nfl100[['Player', 'Team', 'Top100Rank']],
on=['Player', 'Team'],
how='outer')
# Peek at the data
full_stats[full_stats.FantasyPtsTotal.notnull()][
['Team', 'Pos', 'Player', 'FantasyPtsTotal', 'Top100Rank']
].sort('Top100Rank').head()
##
# Show teams by bye week
season2015 = pd.read_csv("data/nfl_season2015.csv")
all_teams = set(season2015.homeTeam.unique())
byes = dict(ByeWeek=[], Team=[])
for wk, dataset in season2015.groupby('week'):
bye_teams = all_teams.difference(dataset.homeTeam).difference(dataset.awayTeam)
byes['ByeWeek'].extend([wk] * len(bye_teams))
byes['Team'].extend(bye_teams)
byes = pd.DataFrame(byes)
for wk, dat in byes.groupby('ByeWeek'):
print('------------------ week {} '.format(wk))
print("\n".join(dat.Team))
# Add the player's 2015 bye week
if 'ByeWeek' in full_stats: # This part is in case someone runs the cell twice
del full_stats['ByeWeek']
full_stats = full_stats.merge(byes, on='Team', how='left')
full_stats[full_stats.FantasyPtsTotal.notnull()][
['Team', 'Pos', 'Player', 'ByeWeek', 'FantasyPtsTotal', 'Top100Rank']
].sort('Top100Rank').head()
Explanation: Clean up, convert to 2015
The stats are for the previous season. Use the roster data ('data/nfl_rosters2015.csv') we have to map the player to the correct team, and add the bye week.
Rename the player 'TOTAL' to the player 'Defense' because that's what it is.
Read in 'data/nfl_rosters2015.csv' and merge it with stats to correctly map players to their 2015 team.
After we merge, we will have pairs of columns that need to be reconciled:
- 'Team' and 'Team2014' -- the player 'Defense' doesn't actually exist so after the merge the 'Team' will be null for 'Defense'...make sure the 'Team' column is populated with the 2014 value
- 'Pos' (for 2015) and 'pos' (for 2014). <br/>
Take the 2015 value if it exists, otherwise the 2014 value
And then add the Bye Week for each player
- Use the game schedule in 'nfl_season2015.csv' to add a column Bye to indicate the player's bye week
- Read in 'data/nfl_top_100s.csv' and add a column indicating the player's rank in the 2015 list
(it is the NFL one the players vote in. ESPN also has one.)
End of explanation
# Show the unique positions
full_stats.Pos.unique()
# Change anything that starts with an 'WR' to 'WR' and anything that starts with 'RB' to 'RB'
full_stats.ix[full_stats.Pos.notnull() & full_stats.Pos.str.startswith('WR'), 'Pos'] = 'WR'
full_stats.ix[full_stats.Pos.notnull() & full_stats.Pos.str.startswith('RB'), 'Pos'] = 'RB'
# And then keep only the players we will use
full_stats = full_stats[full_stats.Pos.isin(('QB', 'RB', 'WR', 'TE', 'K', 'DEFENSE'))]
# And drop people who aren't on a team in 2015
full_stats = full_stats[full_stats.Team.notnull()]
full_stats.Pos.value_counts()
## Take a look at what we made:
# the top 10 people in each position.
#
for position, points in full_stats.sort(columns='FantasyPtsTotal', ascending=False)[
['Player', 'Pos', 'Team', 'ByeWeek', 'Top100Rank', 'FantasyPtsTotal']].groupby('Pos'):
print(" ".join(('#', '-' * 50, position, '-' * 3)))
print("Points\tTeam\t\t\tBye NFL100\tPlayer")
for row in points.head(10).apply(
lambda x: '{}\t{:<20}\t{}\t{}\t{}'.format(
int(x.FantasyPtsTotal),
x.Team,
int(x.ByeWeek),
'--' if np.isnan(x.Top100Rank) else int(x.Top100Rank),
x.Player),
axis=1):
print(row)
Explanation: Remove players we can't use
The player's 2015 position is 'Pos'. Delete players who can't be on the fantasy roster.
Look at the positions listed
Set the Team of the defense rows to be the same as in 2014
First, normalize the position labels (anything starting with 'WR' becomes 'WR', anything starting with 'RB' becomes 'RB'). Then keep only the positions we can use on a fantasy roster, and drop players who aren't on a 2015 team.
End of explanation
# Add the difference in total points between each player and one below,
# and each player and 10 below (worst case draft pick)
full_stats = full_stats[full_stats.Team.notnull()]
full_stats = full_stats.sort(['Pos', 'FantasyPtsTotal']) # Don't forget to sort
full_stats['FantasyPtsDelta'] = full_stats.groupby('Pos').FantasyPtsTotal.diff(1)
full_stats['FantasyPtsDelta10'] = full_stats.groupby('Pos').FantasyPtsTotal.diff(10)
full_stats['FantasyPtsBelowBest'] = full_stats.groupby('Pos').FantasyPtsTotal.apply(
lambda x: max(x) - x)
full_stats = full_stats.sort(['Pos', 'FantasyPtsTotal'], ascending=[True, False])
full_stats.to_csv(os.path.join('excel_files', 'fantasy_points.xls'), index=False)
Explanation: Picking strategy
The obvious advice is don't pick a backup player that's got the same bye week as your primary one.
Next, picking order could be
- by straight points (the highest rated player in one of your open slots)
- or by point differential (the biggest drop between this player and the next-best available player at the same position)
The dataset we have now is the total Fantasy points (according to the standard rules) that the player earned in the regular 2014 season.
End of explanation
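Below is a minimal sketch (added here, not part of the original notebook) of the 'straight points' strategy just described, assuming the full_stats frame built above (columns Player, Pos, ByeWeek, FantasyPtsTotal). The SLOTS counts mirror the standard roster listed earlier, and the bye-week check is only a rough stand-in for the backup advice; the helper and its arguments are illustrative.
SLOTS = {'QB': 1, 'RB': 2, 'WR': 2, 'TE': 1, 'K': 1, 'DEFENSE': 1}
def next_pick(full_stats, taken, my_roster):
    # taken = names already drafted by anyone; my_roster = list of (player, pos, bye) tuples
    open_pos = [p for p, n in SLOTS.items()
                if sum(1 for _, pos, _ in my_roster if pos == p) < n]
    avail = full_stats[(~full_stats.Player.isin(taken)) &
                       (full_stats.Pos.isin(open_pos)) &
                       (full_stats.FantasyPtsTotal.notnull())]
    # prefer players whose bye week doesn't collide with someone already on my roster
    my_byes = [b for _, _, b in my_roster]
    no_clash = avail[~avail.ByeWeek.isin(my_byes)]
    if no_clash.shape[0] > 0:
        avail = no_clash
    # 'straight points': take the highest 2014 total among what's left
    best = avail.loc[avail.FantasyPtsTotal.idxmax()]
    return best.Player, best.Pos, best.ByeWeek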
for position, points in full_stats.sort(columns='FantasyPtsTotal', ascending=False)[
['Player', 'Pos', 'Team', 'ByeWeek',
'Top100Rank',
'FantasyPtsTotal', 'FantasyPtsDelta', 'FantasyPtsDelta10']].groupby('Pos'):
print(" ".join(('#', '-' * 70, position, '-' * 3)))
print("Points\t(d1)\t(d10)\tTeam\t\t\tBye NFL100\tPlayer")
nrows = min(15, points.shape[0])
for row in points.head(nrows).apply(
lambda x: '{}\t{}\t{}\t{:<20}\t{}\t{}\t{}'.format(
'--' if np.isnan(x.FantasyPtsTotal) else int(x.FantasyPtsTotal),
'--' if np.isnan(x.FantasyPtsDelta) else int(x.FantasyPtsDelta),
'--' if np.isnan(x.FantasyPtsDelta10) else int(x.FantasyPtsDelta10),
x.Team,
int(x.ByeWeek),
'--' if np.isnan(x.Top100Rank) else int(x.Top100Rank),
x.Player),
axis=1):
print(row)
fig, axs = plt.subplots(3, 2, sharex=True)
positions = ('DEFENSE', 'WR', 'QB', 'RB', 'K', 'TE')
for (ax, position) in zip(axs.flatten(), positions):
p = full_stats[full_stats.Pos == position]
p = p[p.FantasyPtsTotal.notnull()].sort('FantasyPtsTotal', ascending=False).head(15)
p = p.sort('FantasyPtsTotal')
pos = [0.5 + r for r in range(p.shape[0])]
ax.barh(pos, p.FantasyPtsTotal, align='center')
ax.set_yticks(pos)
if position == 'DEFENSE':
ax.set_yticklabels(list(p.Team), size='x-small')
else:
ax.set_yticklabels(list(p.Player), size='x-small')
ax.set_title(position)
for s in ('top', 'right', 'bottom', 'left'):
ax.spines[s].set_visible(False)
if ax.is_last_row():
ax.set_xlabel('Fantasy Points, 2014')
plt.show()
fig, axs = plt.subplots(3, 2, sharex=True)
for (ax, position) in zip(axs.flatten(), positions):
p = full_stats[full_stats.Pos == position]
p = p[p.FantasyPtsBelowBest.notnull()].sort('FantasyPtsTotal', ascending=False).head(20)
p = p.sort('FantasyPtsTotal')
pos = [0.5 + r for r in range(p.shape[0])]
ax.barh(pos, p.FantasyPtsBelowBest, align='center')
ax.set_yticks(pos)
if position == 'DEFENSE':
ax.set_yticklabels(list(p.Team), size='x-small')
else:
ax.set_yticklabels(list(p.Player), size='x-small')
ax.set_title(position)
for s in ('top', 'right', 'bottom', 'left'):
ax.spines[s].set_visible(False)
if ax.is_last_row():
ax.set_xlabel('Fantasy Points below Top Player, 2014')
plt.show()
Explanation: Some insight from the differences
The script below prints out the top 15 players in each slot.
- Last year's point totals are in the column 'Points'.
- '(d1)' shows the difference in points between
the player and the next best player.
- '(d10)' shows the worst case points difference if you wait until the
next round and everybody chooses that position.
observations:
The highest points for a single slot last year is the Philadelphia Eagles' defense, at 282 fantasy points total for the season.
If they perform like last year, everyone else will be down 51 points for the season -- which averages out to just about a field goal per game (the typical point differential for a win).
The rest of the top 10 defenses were within 21 points of each other, then the dropoff gets a little steeper, so probably don't let anyone pick two defensive teams before you've gotten your first.
If you don't get the Seahawks' Jimmy Graham as Tight End it almost doesn't matter who you get.
He's got 21 points over the next two players (Martellus Bennett (Bears) and Greg Olson (Panthers)).
...and after Bennett and Olson, the point differential between tight ends was almost nonexistent.
Quarterbacks and kickers have almost equal points, but Quarterback quality falls off faster
Adam Vinatieri (Colts), Mason Crosby (Packers), and Justin Tucker (Ravens) were worth more than any Quarterback except for Aaron Rodgers and Andrew Luck last year.
Look at the second set of plots below; they show how many fantasy points each player sits below the best player at the same position.
End of explanation |
12,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Oregon Curriculum Network <br />
Discovering Math with Python
Crystal Ball Sequence
The face-centered cubic (FCC) lattice is not always presented in this simplest form, ditto the cubic close packing (CCP), which amounts to the same thing. A nuclear ball is surrounded by a layer of twelve, all touching it and touching their adjacent neighbors. The shape so formed is not a cube, but a cuboctahedron, with eight triangular faces and six square ones.
As the cuboctahedral packing continues to expand outward, layer by layer, the cumulative number of balls or points forms the Crystal Ball Sequence.
cubocta(), a generator, yields the number of balls in each successive layer of the cuboctahedron, according to a simple formula derived by R. Buckminster Fuller, a prolific inventor and philosopher [1]. cummulative( ) delegates to cubocta( ) while accumulating the number in each layer to provide a running total.
Step3: Octet Truss
When adjacent CCP ball centers interconnect, what do you get? Why the octet truss of course, a well known space frame, used a lot in architecture. Alexander Graham Bell was fascinated by this construction.[2]
<a data-flickr-embed="true" href="https
Step4: Each number in Pascal's Triangle may be understood as the number of unique pathways to that position, were falling balls introduced through the top and allowed to fall left or right to the next row down. This apparatus is sometimes called a Galton Board.
For example, a ball could reach the 6 in the middle of the 5th row going 1,1,2,3,6 in four ways (counting left and right mirrors), or 1,1,1,3,6 in two ways. The likely pattern when many balls fall through this maze will be a bell curve, as shown in the simulation below. | Python Code:
from itertools import accumulate, islice
def cubocta():
    """Classic Generator: Cuboctahedral / Icosahedral #s
    https://oeis.org/A005901
    """
yield 1 # nuclear ball
f = 1
while True:
elem = 10 * f * f + 2 # f for frequency
yield elem # <--- pause / resume here
f += 1
def cummulative(n):
    """https://oeis.org/A005902 (crystal ball sequence)"""
yield from islice(accumulate(cubocta()),0,n)
print("{:=^30}".format(" Crystal Ball Sequence "))
print("{:^10} {:^10}".format("Layers", "Points"))
for f, out in enumerate(cummulative(30),start=1):
print("{:>10} {:>10}".format(f, out))
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
Crystal Ball Sequence
The face-centered cubic (FCC) lattice is not always presented in this simplest form, ditto the cubic close packing (CCP), which amounts to the same thing. A nuclear ball is surrounded by a layer of twelve, all touching it and touching their adjacent neighbors. The shape so formed is not a cube, but a cuboctahedron, with eight triangular faces and six square ones.
As the cuboctahedral packing continues to expand outward, layer by layer, the cumulative number of balls or points forms the Crystal Ball Sequence.
cubocta(), a generator, yields the number of balls in each successive layer of the cuboctahedron, according to a simple formula derived by R. Buckminster Fuller, a prolific inventor and philosopher [1]. cummulative( ) delegates to cubocta( ) while accumulating the number in each layer to provide a running total.
End of explanation
from itertools import islice
def pascal():
row = [1]
while True:
yield row
row = [i+j for i,j in zip([0]+row, row+[0])]
print("{0:=^60}".format(" Pascal's Triangle "))
print()
for r in islice(pascal(),0,11):
print("{:^60}".format("".join(map(lambda n: "{:>5}".format(n), r))))
Explanation: Octet Truss
When adjacent CCP ball centers interconnect, what do you get? Why the octet truss of course, a well known space frame, used a lot in architecture. Alexander Graham Bell was fascinated by this construction.[2]
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/23636692173/in/album-72157664250599655/" title="Business Accelerator Building"><img src="https://farm2.staticflickr.com/1584/23636692173_103b411737.jpg" width="500" height="375" alt="Business Accelerator Building"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
[1] Siobahn Roberts. King of Infinite Space. New York: Walker & Company (2006). pp 179-180.
"Coxeter sent back a letter saying that one equation would be 'a remarkable discovery, justifying Bucky's evident pride,' if only it weren't too good to be true. The next day, Coxeter called: 'On further reflection, I see that it is true'. Coxeter told Fuller how impressed he was with his formula -- on the cubic close-packing of balls."
[2] http://worldgame.blogspot.com/2006/02/octet-truss.html (additional info on the octet truss)
Pascal's Triangle
Pascal's Triangle connects to the Binomial Theorem (originally proved by Sir Isaac Newton) and to numerous topics in probability theory. The triangular and tetrahedral number sequences may be discovered lurking in its columns.
pascal(), a generator, yields successive rows of Pascal's Triangle. By prepending and appending a zero element and adding vertically, a next row is obtained. For example, from [1] we get [0, 1] + [1, 0] = [1, 1]. From [1, 1] we get [0, 1, 1] + [1, 1, 0] = [1, 2, 1] and so on.
Notice the triangular numbers (1, 3, 6, 10...) and tetrahedral number sequences (1, 4, 10, 20...) appear in the slanted columns. [3]
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("9xUBhhM4vbM")
Explanation: Each number in Pascal's Triangle may be understood as the number of unique pathways to that position, were falling balls introduced through the top and allowed to fall left or right to the next row down. This apparatus is sometimes called a Galton Board.
For example, a ball could reach the 6 in the middle of the 5th row going 1,1,2,3,6 in four ways (counting left and right mirrors), or 1,1,1,3,6 in two ways. The likely pattern when many balls fall through this maze will be a bell curve, as shown in the simulation below.
End of explanation |
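As a quick sanity check of that bell-curve claim, here is a small simulation sketch (an addition, not part of the original notebook): each ball makes a run of independent left/right bounces and we histogram where the balls land, which gives the binomial counts that Pascal's Triangle encodes.
import numpy as np
import matplotlib.pyplot as plt

n_balls, n_rows = 100000, 12
# each ball bounces left (0) or right (1) at every row; its final bin is the
# number of rightward bounces, so the landing counts follow Binomial(n_rows, 0.5)
final_bins = np.random.randint(0, 2, size=(n_balls, n_rows)).sum(axis=1)
plt.hist(final_bins, bins=np.arange(n_rows + 2) - 0.5, rwidth=0.9)
plt.xlabel("final bin (number of rightward bounces)")
plt.ylabel("number of balls")
plt.show()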
12,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
Step1: Time to equilibrium
Step2: sigma as a function of time & fragment length
Step3: Graphing sigma as a function of time & fragment length
Step4: Low GC
Step5: High GC
Step6: Plotting Clay et al. method
Step7: --Sandbox--
Graphing the equations above
Step8: Generating fragments
Step9: Setting variables
Step10: Calculation functions
Step11: Testing out speed of mixture models
Step12: Notes | Python Code:
%load_ext rpy2.ipython
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
library(gridExtra)
%%R
GC2MW = function(x){
A = 313.2
T = 304.2
C = 289.2
G = 329.2
GC = G + C
AT = A + T
x = x / 100
x*GC + (1-x)*AT
}
GC2BD = function(GC){
# GC = percentage
BD = GC / 100 * 0.098 + 1.66
return(BD)
}
calc_BD_macro = function(p_i, w, B, r){
    # NOTE: the body of this helper was lost in conversion and it is never called
    # below; it is left as a stub so the definitions that follow parse and run.
    return(NULL)
}
rpm2w2 = function(rpm){
x = 2 * pi * rpm / 60
return(x**2)
}
calc_R_c = function(r_t, r_b){
x = r_t**2 + r_t * r_b + r_b**2
return(sqrt(x/3))
}
calc_R_p = function(p_p, p_m, B, w, r_c){
# distance of the particle from the axis of rotation (at equilibrium)
x = ((p_p - p_m) * (2 * B / w)) + r_c**2
return(sqrt(x))
}
calc_S = function(l, GC){
# l = dsDNA length (bp)
MW = GC2MW(GC)
S = 0.00834 * (l * MW)**0.479 + 2.8
S = S * 1e-13
return(S)
}
calc_dif_sigma_OLD = function(L, w, r_p, S, t, B, p_p, p_m){
nom = w**2 * r_p**2 * S
denom = B * (p_p - p_m)
x = nom / denom * t - 1.26
sigma = L / exp(x)
return(sigma)
}
calc_dif_sigma = function(L, w, r_c, S, t, B, p_p, p_m){
nom = w**2 * r_c**2 * S
denom = B * (p_p - p_m)
x = nom / denom * t - 1.26
sigma = L / exp(x)
return(sigma)
}
R_p2BD = function(r_p, p_m, B, w, r_c){
# converting a distance from center of rotation of a particle to buoyant density
## inverse of `calc_R_p`
nom = (r_p**2 - r_c**2) * w
return(nom / (2 * B) + p_m)
}
sigma2BD = function(r_p, sigma, p_m, B, w, r_c){
BD_low = R_p2BD(r_p - sigma, p_m, B, w, r_c)
BD_high = R_p2BD(r_p + sigma, p_m, B, w, r_c)
return(BD_high - BD_low)
}
time2eq = function(B, p_p, p_m, w, r_c, s, L, sigma){
x = (B * (p_p - p_m)) / (w**2 * r_c**2 * s)
y = 1.26 + log(L / sigma)
return(x * y)
}
Explanation: Description:
calculations for modeling fragments in a CsCl gradient under non-equilibrium conditions
Notes
Good chapter on determining G+C content from CsCl gradient analysis
http://www.academia.edu/428160/Using_Analytical_Ultracentrifugation_of_DNA_in_CsCl_Gradients_to_Explore_Large-Scale_Properties_of_Genomes
http://www.analyticalultracentrifugation.com/dynamic_density_gradients.htm
Meselson et al. - 1957 - Equilibrium Sedimentation of Macromolecules in Density Gradients
Vinograd et al. - 1963 - Band-Centrifugation of Macromolecules and Viruses
http://onlinelibrary.wiley.com.proxy.library.cornell.edu/doi/10.1002/bip.360101011/pdf
Ultracentrifugation book
http://books.google.com/books?hl=en&lr=&id=vxcSBQAAQBAJ&oi=fnd&pg=PA143&dq=Measurement+of+Density+Heterogeneity+by+Sedimentation+in&ots=l8ObYN-zVv&sig=Vcldf9_aqrJ-u7nQ1lBRKbknHps#v=onepage&q&f=false
Forum info
http://stackoverflow.com/questions/18624005/how-do-i-perform-a-convolution-in-python-with-a-variable-width-gaussian
http://timstaley.co.uk/posts/convolving-pdfs-in-python/
Possible workflows:
KDE convolution
KDE of fragment GC values
bandwidth cross validation: https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/
convolution of KDE with diffusion function:
gaussian w/ mean of 0 and scale param = 44.5 (kb) / (mean fragment length)
http://www.academia.edu/428160/Using_Analytical_Ultracentrifugation_of_DNA_in_CsCl_Gradients_to_Explore_Large-Scale_Properties_of_Genomes
http://nbviewer.ipython.org/github/timstaley/ipython-notebooks/blob/compiled/probabilistic_programming/convolving_distributions_illustration.ipynb
variable KDE
variable KDE of fragment GC values where kernel sigma is determined by mean fragment length
gaussian w/ scale param = 44.5 (kb) / fragment length
Standard deviation of homogeneous DNA fragments
Vinograd et al., 1963; (band-centrifugation):
\begin{align}
\sigma^2 = \frac{r_0}{r_0^0} \left\{ \frac{r_0}{r_0^0} + 2D \left( t - t^0 \right) \right\}
\end{align}
Standard deviation of Gaussian band (assuming equilibrium), Meselson et al., 1957:
\begin{align}
\sigma^2 = -\sqrt{w} \
w = \textrm{molecular weight}
\end{align}
Standard deviation of Gaussian band at a given time, Meselson et al., 1957:
\begin{equation}
t^* = \frac{\sigma^2}{D} \left(\ln \frac{L}{\sigma} + 1.26 \right), \quad L\gg\sigma \
\sigma^2 = \textrm{stdev at equilibrium} \
L = \textrm{length of column}
\end{equation}
Gaussian within 1% of equillibrium value from center.
! assumes density gradient established at t = 0
Alternative form (from Birne and Rickwood 1978; eq 6.22):
\begin{align}
t = \frac{\beta^{\circ}(p_p - p_m)}{w^4 r_p^2 s} \left(1.26 + \ln \frac{r_b - r_t}{\sigma}\right)
\end{align}
\begin{equation}
t = \textrm{time in seconds} \
\beta^{\circ} = \beta^{\circ} \textrm{ of salt forming the density gradient (CsCl = ?)} \
p_p = \textrm{buoyant density of the particle at equilibrium} \
p_m = \textrm{average density of the medium} \
w = \textrm{angular velocity} \
r_p = \textrm{distance (cm) of particle from the axis of rotation (at equilibrium)} \
s = \textrm{sedimentation rate} (S_{20,w} * 10^{-13}) \
r_b = \textrm{distance to top of gradient (cm)} \
r_t = \textrm{distance to bottom of gradient (cm)} \
r_b - r_t = \textrm{length of gradient (L)}
\end{equation}
Solving for sigma:
\begin{align}
\sigma = \frac{L}{e^{\left(\frac{t w^4 r_p^2 s}{\beta^{\circ}(p_p - p_m)} - 1.26\right)}}
\end{align}
sigma (alternative; but assuming sedimentation equilibrium reached; no time component)
\begin{align}
{\sigma} = \frac{\theta}{M_{app}} \frac{RT}{ \frac{w^2r_c}{\beta} * w^2r_o }
\end{align}
\begin{equation}
{\theta} = \textrm{buoyant density of the macromolecules} \
M_{app} = \textrm{apparent molecular weight of the solvated macromolecules} \
R = \textrm{universal gas constant} \
T = \textrm{Temperature in K} \
w = \textrm{angular velocity} \
\beta^{\circ} = \beta^{\circ} \textrm{ coef. of salt forming the density gradient} \
r_c = \textrm{isoconcentration point} \
r_o = \textrm{distance (cm) of particle from the axis of rotation (at equilibrium)} \
\end{equation}
Clay et al., 2003 method (assumes sedimentation equilibrium)
\begin{align}
\sigma = \sqrt{\frac{\rho R T}{B^2 G M_C l}}
\end{align}
\begin{equation}
{\rho} = \textrm{buoyant density of the macromolecules} \
R = \textrm{universal gas constant} \
T = \textrm{Temperature in K} \
\beta = \beta^{\circ} \textrm{ coef. of salt forming the density gradient} \
M_C = \textrm{molecular weight per base pair of dry cesium DNA} \
G = \textrm{Constant from Clay et al., 2003 (7.87x10^-10) } \
l = \textrm{fragment length (bp)} \
\end{equation}
Variables specific to the Buckley lab setup
\begin{equation}
\omega = (2\pi \times \textrm{RPM}) /60, \quad \textrm{RPM} = 55000 \
\beta^{\circ} = 1.14 \times 10^9 \
r_b = 4.85 \
r_t = 2.6 \
L = r_b - r_t \
s = S_{20,w} * 10^{-13} \
S_{20,w} = 2.8 + 0.00834 * (l*666)^{0.479}, \quad \textrm{where l = length of fragment; S in Svedberg units} \
p_m = 1.7 \
p_p = \textrm{buoyant density of the particle in CsCl} \
r_p = ? \
t = \textrm{independent variable}
\end{equation}
isoconcentration point
\begin{equation}
r_c = \sqrt{(r_t^2 + r_t * r_b + r_b^2)/3}
\end{equation}
r<sub>p</sub> in relation to the particle's buoyant density
\begin{equation}
r_p = \sqrt{ ((p_p-p_m)\frac{2\beta^{\circ}}{w}) + r_c^2 } \
p_p = \textrm{buoyant density}
\end{equation}
buoyant density of a DNA fragment in CsCl
\begin{equation}
p_p = 0.098F + 1.66, \quad \textrm{where F = G+C molar fraction}
\end{equation}
info needed on a DNA fragment to determine it's sigma of the Guassian distribution
fragment length
fragment G+C
Coding equations
End of explanation
%%R -w 450 -h 300
# time to eq
calc_time2eq = function(x, B, L, rpm, r_t, r_b, sigma, p_m){
l = x[1]
GC = x[2]
s = calc_S(l, GC)
w = rpm2w2(rpm)
p_p = GC2BD(GC)
r_c = calc_R_c(r_t, r_b)
#r_p = calc_R_p(p_p, p_m, B, w, r_c)
t = time2eq(B, p_p, p_m, w, r_c, s, L, sigma)
t = t / 360
return(t)
}
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.7
l = seq(100,20000,100) # bp
GC = 1:100 # percent
sigma = 0.01
df = expand.grid(l, GC)
df$t = apply(df, 1, calc_time2eq, B=B, L=L, rpm=rpm, r_t=r_t, r_b=r_b, sigma=sigma, p_m=p_m)
colnames(df) = c('length', 'GC', 'time')
df %>% head
cols = rev(rainbow(12))
p1 = ggplot(df, aes(GC, length, fill=time)) +
geom_tile() +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
geom_hline(yintercept=4000, linetype='dashed', color='black') +
#geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='GC (%)', y='dsDNA length (bp)') +
theme_bw() +
theme(
text = element_text(size=16)
)
p1
Explanation: Time to equilibrium
End of explanation
%%R
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.7
l = 500 # bp
GC = 50 # percent
t = 60 * 60 * 66 # sec
S = calc_S(l, GC)
w2 = rpm2w2(rpm)
p_p = GC2BD(GC)
r_c = calc_R_c(r_t, r_b)
r_p = calc_R_p(p_p, p_m, B, w2, r_c)
sigma = calc_dif_sigma(L, w2, r_p, S, t, B, p_p, p_m)
print(sigma)
#sigma_BD = sigma2BD(r_p, sigma, p_m, B, w2, r_c)
#print(sigma_BD)
%%R
#-- alternative calculation
# `calc_stdev` was not defined anywhere in this notebook; the definition below is a
# sketch of the equilibrium-sigma equation quoted above,
#   sigma = (theta / M_app) * RT / ((w^2 * r_c / beta) * w^2 * r_o),
# written so that `w2` (already omega^2) slots in directly.
calc_stdev = function(theta, M_app, R, T, w2, r_c, B, r_o){
    (theta / M_app) * (R * T) / ((w2 * r_c / B) * (w2 * r_o))
}
p_p = 1.7
M = l * 882
R = 8.3144598 #J mol^-1 K^-1
T = 293.15
calc_stdev(p_p, M, R, T, w2, r_c, B, r_p)
Explanation: sigma as a function of time & fragment length
End of explanation
%%R -h 300 -w 850
calc_sigma_BD = function(x, rpm, GC, r_t, r_b, p_m, B, L){
l = x[1]
t = x[2]
S = calc_S(l, GC)
w2 = rpm2w2(rpm)
p_p = GC2BD(GC)
r_c = calc_R_c(r_t, r_b)
r_p = calc_R_p(p_p, p_m, B, w2, r_c)
sigma = calc_dif_sigma(L, w2, r_p, S, t, B, p_p, p_m)
if (sigma > L){
return(NA)
} else {
return(sigma)
}
}
# params
GC = 50
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.66
# pairwise calculations of all parameters
l = 50**seq(1,3, by=0.05)
t = 6**seq(3,8, by=0.05)
df = expand.grid(l, t)
df$sigma = apply(df, 1, calc_sigma_BD, rpm=rpm, GC=GC, r_t=r_t, r_b=r_b, p_m=p_m, B=B, L=L)
colnames(df) = c('length', 'time', 'sigma')
df= df %>%
mutate(sigma = ifelse((sigma < 1e-20 | sigma > 1e20), NA, sigma))
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(time, length, fill=sigma)) +
geom_tile() +
scale_x_log10(expand=c(0,0)) +
scale_y_log10(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
#geom_hline(yintercept=4000, linetype='dashed', color='black') +
geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='Time', y='Length') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
Explanation: Graphing sigma as a function of time & fragment length
End of explanation
%%R -h 300 -w 850
# params
GC = 20
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.66
# pairwise calculations of all parameters
l = 50**seq(1,3, by=0.05)
t = 6**seq(3,8, by=0.05)
df = expand.grid(l, t)
df$sigma = apply(df, 1, calc_sigma_BD, rpm=rpm, GC=GC, r_t=r_t, r_b=r_b, p_m=p_m, B=B, L=L)
colnames(df) = c('length', 'time', 'sigma')
df= df %>%
mutate(sigma = ifelse((sigma < 1e-20 | sigma > 1e20), NA, sigma))
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(time, length, fill=sigma)) +
geom_tile() +
scale_x_log10(expand=c(0,0)) +
scale_y_log10(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
#geom_hline(yintercept=4000, linetype='dashed', color='black') +
geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='Time', y='Length') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
Explanation: Low GC
End of explanation
%%R -h 300 -w 850
# params
GC = 80
rpm = 55000
B = 1.14e9
r_b = 4.85
r_t = 2.6
L = r_b - r_t
p_m = 1.66
# pairwise calculations of all parameters
l = 50**seq(1,3, by=0.05)
t = 6**seq(3,8, by=0.05)
df = expand.grid(l, t)
df$sigma = apply(df, 1, calc_sigma_BD, rpm=rpm, GC=GC, r_t=r_t, r_b=r_b, p_m=p_m, B=B, L=L)
colnames(df) = c('length', 'time', 'sigma')
df= df %>%
mutate(sigma = ifelse((sigma < 1e-20 | sigma > 1e20), NA, sigma))
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(time, length, fill=sigma)) +
geom_tile() +
scale_x_log10(expand=c(0,0)) +
scale_y_log10(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
#geom_hline(yintercept=4000, linetype='dashed', color='black') +
geom_vline(xintercept=60*60*66, linetype='dashed', color='black') +
labs(x='Time', y='Length') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
Explanation: High GC
End of explanation
%%R
calc_dif_sigma_Clay = function(rho, R, T, B, G, M, l){
sigma = sqrt((rho*R*T)/(B**2*G*M*l))
return(sigma)
}
%%R -w 850 -h 300
wrap_calc_sigma_Clay = function(x, R, T, B, G, m){
l= x[1]
GC = x[2]
rho = GC2BD(GC)
sigma = calc_dif_sigma_Clay(rho, R, T, B, G, m, l)
return(sigma)
}
# params
R = 8.3145e7
T = 293.15
G = 7.87e-10
M = 882
B = 1.14e9
l = 50**seq(1,3, by=0.05)
GC = 1:100
# pairwise calculations of all parameters
df = expand.grid(l, GC)
df$sigma = apply(df, 1, wrap_calc_sigma_Clay, R=R, T=T, B=B, G=G, m=M)
colnames(df) = c('length', 'GC', 'sigma')
# plotting
cols = rev(rainbow(12))
p1 = ggplot(df, aes(GC, length, fill=sigma)) +
geom_tile() +
scale_y_log10(expand=c(0,0)) +
scale_x_continuous(expand=c(0,0)) +
scale_fill_gradientn(colors=cols) +
labs(y='length (bp)', x='G+C') +
theme_bw() +
theme(
text = element_text(size=16)
)
p2 = p1 + scale_fill_gradientn(colors=cols, trans='log10')
grid.arrange(p1, p2, ncol=2)
Explanation: Plotting Clay et al. method
End of explanation
%pylab inline
import scipy as sp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import mixture
#import sklearn.mixture as mixture
Explanation: --Sandbox--
Graphing the equations above
End of explanation
n_frags = 10000
frag_GC = np.random.normal(0.5,0.1,n_frags)
frag_GC[frag_GC < 0] = 0
frag_GC[frag_GC > 1] = 1
frag_len = np.random.normal(10000,1000,n_frags)
ret = plt.hist2d(frag_GC, frag_len, bins=100)
Explanation: Generating fragments
End of explanation
RPM = 55000
omega = (2 * np.pi * RPM) / 60
beta_o = 1.14 * 10**9
radius_bottom = 4.85
radius_top = 2.6
col_len = radius_bottom - radius_top
density_medium = 1.7
Explanation: Setting variables
End of explanation
# BD from GC
frag_BD = 0.098 * frag_GC + 1.66
ret = plt.hist(frag_BD, bins=100)
sedimentation = (frag_len*666)**0.479 * 0.00834 + 2.8 # l = length of fragment
ret = plt.hist(sedimentation, bins=100)
# sedimentation as a function of fragment length
len_range = np.arange(1,10000, 100)
ret = plt.scatter(len_range, 2.8 + 0.00834 * (len_range*666)**0.479 )
# isoconcentration point
iso_point = sqrt((radius_top**2 + radius_top * radius_bottom + radius_bottom**2)/3)
iso_point
# radius of particle
#radius_particle = np.sqrt( (frag_BD - density_medium)*2*(beta_o/omega) + iso_point**2 )
#ret = plt.hist(radius_particle)
Explanation: Calculation functions
End of explanation
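As a bridge to the 'variable KDE' workflow sketched at the top of this notebook, here is a rough Python version (an addition, not original code): each fragment contributes a Gaussian in buoyant-density space whose sigma comes from the Clay et al. (2003) formula used in the R cells above. The constants mirror that R cell, and treating the resulting sigma as being in buoyant-density units is an assumption carried over from there.
# per-fragment sigma from the Clay et al. (2003) relation used in the R cells above
R_const, T_K, G_const, M_bp = 8.3145e7, 293.15, 7.87e-10, 882.0
clay_sigma = np.sqrt((frag_BD * R_const * T_K) / (beta_o**2 * G_const * M_bp * frag_len))

# "variable KDE": sum one Gaussian per fragment, centered on its BD, width = its sigma
bd_grid = np.linspace(1.66, 1.78, 500)
dens = np.zeros_like(bd_grid)
for mu, sd in zip(frag_BD, clay_sigma):
    dens += np.exp(-0.5 * ((bd_grid - mu) / sd)**2) / (sd * np.sqrt(2 * np.pi))
dens /= len(frag_BD)

plt.plot(bd_grid, dens)
plt.xlabel('buoyant density (g/ml)')
plt.ylabel('density')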
n_dists = 10
n_samp = 10000
def make_mm(n_dists):
dist_loc = np.random.uniform(0,1,n_dists)
dist_scale = np.random.uniform(0,0.1, n_dists)
dists = [mixture.NormalDistribution(x,y) for x,y in zip(dist_loc, dist_scale)]
eq_weights = np.array([1.0 / n_dists] * n_dists)
eq_weights[0] += 1.0 - np.sum(eq_weights)
return mixture.MixtureModel(n_dists, eq_weights, dists)
mm = make_mm(n_dists)
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
n_dists = 1000
mm = make_mm(n_dists)
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
n_dists = 10000
mm = make_mm(n_dists)
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
n_samp = 100000
%%timeit
smp = mm.sampleDataSet(n_samp).getInternalFeature(0).flatten()
%%timeit
smp = np.array([mm.sample() for i in arange(n_samp)])
Explanation: Testing out speed of mixture models
End of explanation
x = np.random.normal(3, 1, 100)
y = np.random.normal(1, 1, 100)
H, xedges, yedges = np.histogram2d(y, x, bins=100)
H
Explanation: Notes:
a mixture model with many distributions (>1000) is very slow for sampling
End of explanation |
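One way around that slowdown (a sketch added here, not part of the original benchmarks) is to bypass the per-draw sampler entirely: pick a component index for every draw from the mixture weights, then draw all the normals in one vectorized call. The function and parameter names below are illustrative.
import numpy as np

def sample_mixture(locs, scales, weights, n_samp):
    # two vectorized steps: choose a component per draw, then draw each normal
    locs, scales, weights = map(np.asarray, (locs, scales, weights))
    idx = np.random.choice(len(weights), size=n_samp, p=weights)
    return np.random.normal(locs[idx], scales[idx])

# e.g. 10,000 equally weighted components, 100,000 draws
n_dists = 10000
locs = np.random.uniform(0, 1, n_dists)
scales = np.random.uniform(0.001, 0.1, n_dists)  # keep scales strictly positive
weights = np.ones(n_dists) / n_dists
smp = sample_mixture(locs, scales, weights, 100000)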
12,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-7s', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-7S
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution for which fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
12,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting the location of the North Sea Harding Oil and Gas field
This example is taken out of my Ph.D. thesis. It shows the North Sea bathymetry and the topography of the surrounding countries, the locations of Edinburgh, Bergen, and the Harding Oil and Gas field, the outline of Block 9 (the hydrocarbon exploration areas are divided into blocks, which are made up of the 1° longitudes and latitudes). It also includes the median line, which defines the economic sectors of the countries adjacent to the North Sea.
All data I use here are freely available online
Step1: Load data
Step2: Create the figure | Python Code:
import shapefile
import numpy as np
from matplotlib import cm, rcParams
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
rcParams.update({'font.size': 16}) # Increase font-size
Explanation: Plotting the location of the North Sea Harding Oil and Gas field
This example is taken out of my Ph.D. thesis. It shows the North Sea bathymetry and the topography of the surrounding countries, the locations of Edinburgh, Bergen, and the Harding Oil and Gas field, the outline of Block 9 (the hydrocarbon exploration areas are divided into blocks, which are made up of the 1° longitudes and latitudes). It also includes the median line, which defines the economic sectors of the countries adjacent to the North Sea.
All data I use here are freely available online:
<ul>
<li> `etopo1_bedrock.asc`: You can download the etopo-data from the
U.S. [National Geophysical Data Center (NGDC)](http://maps.ngdc.noaa.gov/viewers/wcs-client/).
For this example, I downloaded a data set with the following parameters:</li>
<ol>
<li> ETOPO1 (bedrock)</li>
<li> -8 West to 10 East, 64 North to 54 South</li>
<li> ArcGIS ASCII Grid</li>
</ol>
<li> `DECC_OFF_Median_Line`: The median line is downloaded from the
UK [Department of Energy & Climate Change (DECC)](https://www.gov.uk/oil-and-gas-offshore-maps-and-gis-shapefiles).
The coordinates of Harding are from Well 9/23b-7, which are also available at the
[DECC](https://www.gov.uk/oil-and-gas-wells).</li>
<li> The coordinates of Edinburgh and Bergen are taken from Wikipedia.</li>
</ul>
End of explanation
# Load the topo file to get header information
etopo1name = 'data/basemap/etopo1_bedrock.asc'
topo_file = open(etopo1name, 'r')
# Read header (number of columns and rows, cell-size, and lower left coordinates)
ncols = int(topo_file.readline().split()[1])
nrows = int(topo_file.readline().split()[1])
xllcorner = float(topo_file.readline().split()[1])
yllcorner = float(topo_file.readline().split()[1])
cellsize = float(topo_file.readline().split()[1])
topo_file.close()
# Read in topography as a whole, disregarding first five rows (header)
etopo = np.loadtxt(etopo1name, skiprows=5)
# Data resolution is quite high. I decrease the data resolution
# to decrease the size of the final figure
dres = 2
# Swap the rows
etopo[:nrows+1, :] = etopo[nrows+1::-1, :]
etopo = etopo[::dres, ::dres]
# Create longitude and latitude vectors for etopo
lons = np.arange(xllcorner, xllcorner+cellsize*ncols, cellsize)[::dres]
lats = np.arange(yllcorner, yllcorner+cellsize*nrows, cellsize)[::dres]
Explanation: Load data
End of explanation
fig = plt.figure(figsize=(8, 6))
# Create basemap, 870 km east-west, 659 km north-south,
# intermediate resolution, Transverse Mercator projection,
# centred around lon/lat 1°/58.5°
m = Basemap(width=870000, height=659000,
resolution='i', projection='tmerc',
lon_0=1, lat_0=58.5)
# Draw coast line
m.drawcoastlines(color='k')
# Draw continents and lakes
m.fillcontinents(lake_color='b', color='none')
# Draw a think border around the whole map
m.drawmapboundary(linewidth=3)
# Convert etopo1 coordinates lon/lat in ° to x/y in m
# (From the basemap help: Calling a Basemap class instance with the arguments
# lon, lat will convert lon/lat (in degrees) to x/y map projection coordinates
# (in meters).)
rlons, rlats = m(*np.meshgrid(lons,lats))
# Draw etopo1, first for land and then for the ocean, with different colormaps
llevels = np.arange(-500,2251,100) # check etopo.ravel().max()
lcs = m.contourf(rlons, rlats, etopo, llevels, cmap=cm.terrain)
olevels = np.arange(-3500,1,100) # check etopo.ravel().min()
cso = m.contourf(rlons, rlats, etopo, olevels, cmap=cm.ocean)
# Draw parallels and meridians
m.drawparallels(np.arange(-56,63.,2.), color='.2', labels=[1,0,0,0])
m.drawparallels(np.arange(-55,63.,2.), color='.2', labels=[0,0,0,0])
m.drawmeridians(np.arange(-6.,12.,2.), color='.2', labels=[0,0,0,1])
m.drawmeridians(np.arange(-7.,12.,2.), color='.2', labels=[0,0,0,0])
# Draw Block 9 boundaries
m.plot([1, 2, 2, 1, 1], [59, 59, 60, 60, 59], 'b', linewidth=2, latlon=True)
plt.annotate('9', m(1.1, 59.7), color='b')
# Draw maritime boundaries
m.readshapefile('data/basemap/DECC_OFF_Median_Line', 'medline', linewidth=2)
# Add Harding, Edinburgh, Bergen
# 1. Convert coordinates
EDIx, EDIy = m(-3.188889, 55.953056)
BERx, BERy = m(5.33, 60.389444)
HARx, HARy = m(1.5, 59.29)
# 2. Plot symbol
plt.plot(HARx, HARy, mfc='r', mec='k', marker='s', markersize=10)
plt.plot(EDIx, EDIy, mfc='r', mec='k', marker='o', markersize=10)
plt.plot(BERx, BERy, mfc='r', mec='k', marker='o', markersize=10)
# 3. Plot name
plt.text(EDIx+50000, EDIy+10000,'Edinburgh', color='r')
plt.text(BERx-140000, BERy, 'Bergen', color='r')
plt.text(HARx-160000, HARy, 'Harding', color='r')
plt.show()
Explanation: Create the figure
End of explanation |
12,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading data from Excel
Let's get some data. Download Sample Superstore Sales .xls file or my local copy and open it in Excel to see what it looks like.
Data of interest that we want to process in Python often comes in the form of an Excel spreadsheet, but the data is in a special format that we can't read directly
Step1: Converting Excel files with csvkit
There's a really useful tool kit called csvkit, which you can install with
Step2: CSV data
Grab the CSV version of the Excel file SampleSuperstoreSales.csv we've been playing with.
Dealing with commas double quotes in CSV
For the most part, CSV files are very simple, but they can get complicated when we need to embed a comma. One such case from the above file shows how fields with commas get quoted
Step3: Or add to a numpy array
Step4: Reading CSV into Pandas Data frames
In the end, the easiest way to deal with loading CSV files is probably with Pandas. For example, to load our sales CSV, we don't even have to manually open and close a file
Step5: Pandas hides all of the details. I also find that pulling out columns is nice with pandas. Here's how to print the customer name column
Step6: You can learn more about slicing and dicing data from our Boot Camp notes.
Exercise
Read the AAPL.csv file into a data frame using Pandas.
Exercise
From the sales CSV file, use pandas to read in the data and multiple the Order Quantity and Unit Price columns to get a new column. | Python Code:
with open('data/SampleSuperstoreSales.xls', "rb") as f:
txt = f.read()
print(txt[0:100])
Explanation: Reading data from Excel
Let's get some data. Download Sample Superstore Sales .xls file or my local copy and open it in Excel to see what it looks like.
Data of interest that we want to process in Python often comes in the form of an Excel spreadsheet, but the data is in a special format that we can't read directly:
End of explanation
import pandas
table = pandas.read_excel("data/SampleSuperstoreSales.xls")
table.head()
Explanation: Converting Excel files with csvkit
There's a really useful tool kit called csvkit, which you can install with:
bash
pip install csvkit
Unfortunately, at the moment there is some kind of weird bug, unrelated to csvkit, so we get lots of warnings even though it works.
/Users/parrt/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
So, the following command works without having to run or even own Excel on your laptop, but you get lots of warnings:
bash
$ in2csv data/SampleSuperstoreSales.xls > /tmp/t.csv
Reading Excel files with Pandas
The easiest way to read Excel files with Python is to use Pandas:
End of explanation
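If the workbook had more than one sheet, pandas can pull out a specific one. A minimal sketch, assuming a reasonably recent pandas where read_excel takes a sheet_name argument (0, the first sheet, is the default):
import pandas
# sheet_name selects a specific worksheet by index or name
table = pandas.read_excel("data/SampleSuperstoreSales.xls", sheet_name=0)
table.head()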
import sys
import csv
table_file = "data/SampleSuperstoreSales.csv"
with open(table_file, "r") as csvfile:
f = csv.reader(csvfile, dialect='excel')
data = []
for row in f:
data.append(row)
print(data[:6])
Explanation: CSV data
Grab the CSV version of the Excel file SampleSuperstoreSales.csv we've been playing with.
Dealing with commas and double quotes in CSV
For the most part, CSV files are very simple, but they can get complicated when we need to embed a comma. One such case from the above file shows how fields with commas get quoted:
"Eldon Base for stackable storage shelf, platinum"
What happens when we want to encode a quote? Well, somehow people decided that "" double quotes was the answer (not!) and we get fields encoded like this:
"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators"
The good news is that Python's csv package knows how to read Excel-generated files that use such encoding. Here's a sample script that reads such a file into a list of lists:
End of explanation
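A minimal round-trip sketch, using only the standard csv module, showing that the writer produces exactly the quoting and "" escaping described above:
import csv, io
buf = io.StringIO()
w = csv.writer(buf, dialect='excel')
# one field with embedded double quotes, one with an embedded comma
w.writerow(['1.7 Cubic Foot Compact "Cube" Office Refrigerators',
            'Eldon Base for stackable storage shelf, platinum'])
print(buf.getvalue())
# "1.7 Cubic Foot Compact ""Cube"" Office Refrigerators","Eldon Base for stackable storage shelf, platinum"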
import numpy as np
np.array(data)
Explanation: Or add to a numpy array:
End of explanation
import pandas
df = pandas.read_csv("data/SampleSuperstoreSales.csv")
df.head()
Explanation: Reading CSV into Pandas Data frames
In the end, the easiest way to deal with loading CSV files is probably with Pandas. For example, to load our sales CSV, we don't even have to manually open and close a file:
End of explanation
df['Customer Name'].head()
df.Profit.head()
Explanation: Pandas hides all of the details. I also find that pulling out columns is nice with pandas. Here's how to print the customer name column:
End of explanation
df = pandas.read_csv("data/SampleSuperstoreSales.csv")
(df['Order Quantity']*df['Unit Price']).head()
Explanation: You can learn more about slicing and dicing data from our Boot Camp notes.
Exercise
Read the AAPL.csv file into a data frame using Pandas.
Exercise
From the sales CSV file, use pandas to read in the data and multiply the Order Quantity and Unit Price columns to get a new column.
End of explanation |
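For the first exercise above, a one-line sketch, assuming AAPL.csv sits in the same data/ folder as the sales file:
aapl = pandas.read_csv("data/AAPL.csv")
aapl.head()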
12,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Calculating gradients
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: Install TensorFlow Quantum.
Step3: Now import TensorFlow and the module dependencies.
Step4: 1. Preliminary
Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like the following one.
Step5: The observable is as follows.
Step7: Looking at this operator, you can see that $\langle Y(\alpha)| X | Y(\alpha)\rangle = \sin(\pi \alpha)$.
Step8: If you define $f_{1}(\alpha) = \langle Y(\alpha)| X | Y(\alpha)\rangle$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this.
Step9: 2. The need for a differentiator
With larger circuits, you won't always have a formula that precisely calculates the gradients of a given quantum circuit. When a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance, you can recreate the above example in TensorFlow Quantum (TFQ) as follows.
Step10: However, if you switch to estimating the expectation value based on sampling (what would happen on a true device), the values can change a little, which means the expectation value becomes imprecise.
Step11: This can compound into a serious accuracy problem when it comes to gradients.
Step12: Here you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, it was far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient is calculated. Next you will look at a much slower technique that isn't as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case.
Step13: The above shows that certain differentiators are best suited to certain research scenarios. In general, the slower sample-based methods that are robust to device noise are good differentiators when testing or implementing algorithms in a more realistic setting. Faster methods like finite difference are great for analytical calculations where higher throughput is needed but the device viability of your algorithm isn't yet a concern.
3. Multiple observables
Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
Step14: This observable used with the same circuit as before gives $f_{2}(\alpha) = \langle Y(\alpha)| Z | Y(\alpha)\rangle = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Check this.
Step15: It's a (nearly) exact match.
Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding more terms to $g$.
This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
Step16: Here the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. The gradient is then as follows.
Step19: Here you confirmed that the sum of the gradients over the observables is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in compatibility with the rest of TensorFlow.
4. Advanced usage
All differentiators that exist inside of TensorFlow Quantum subclass tfq.differentiators.Differentiator. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement get_gradient_circuits, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload differentiate_analytic and differentiate_sampled; the class tfq.differentiators.Adjoint takes this route.
The following implements a circuit gradient using TensorFlow Quantum, with a small example of parameter shifting.
Recall the circuit you defined above, $|\alpha\rangle = Y^{\alpha}|0\rangle$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = \langle \alpha|X|\alpha\rangle$. Using parameter shift rules, for this circuit, you can find that the derivative is $$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{ \pi}{2} f\left(\alpha - \frac{1}{2}\right)$$ The get_gradient_circuits function returns the components of this derivative.
Step20: The Differentiator base class uses the components returned from get_gradient_circuits to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with tfq.layer objects.
Step21: You can now use this new differentiator to generate differentiable ops.
Key point: A differentiator can only be attached to one op at a time, so a differentiator that has previously been attached to an op must be refreshed before being attached to a new op. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tensorflow==2.7.0
Explanation: Calculating gradients
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/gradients"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a>
</td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/gradients.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/quantum/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.
Calculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike conventional machine learning transformations such as matrix multiplication or vector addition, which have analytic gradient formulas that are easy to use, expectation values of observables do not always have such formulas. For that reason, you need different quantum gradient calculation methods that suit different scenarios. This tutorial compares and contrasts two different differentiation schemes.
Setup
End of explanation
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
Explanation: Install TensorFlow Quantum:
End of explanation
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
Explanation: Now import TensorFlow and the module dependencies:
End of explanation
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
Explanation: 1. Preliminary
Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like the following one.
End of explanation
pauli_x = cirq.X(qubit)
pauli_x
Explanation: The observable is as follows:
End of explanation
def my_expectation(op, alpha):
Compute âšY(alpha)| `op` | Y(alpha)â©
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
Explanation: Looking at this operator, you can see that $\langle Y(\alpha)| X | Y(\alpha)\rangle = \sin(\pi \alpha)$.
End of explanation
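One way to see where the $\sin(\pi \alpha)$ comes from, as a quick sketch: up to a global phase, cirq's $Y^{\alpha}$ acts on $|0\rangle$ like a rotation about the Y axis by an angle $\pi\alpha$, so
$$Y^{\alpha}|0\rangle \propto \cos\left(\tfrac{\pi\alpha}{2}\right)|0\rangle + \sin\left(\tfrac{\pi\alpha}{2}\right)|1\rangle$$
and for a state with these real amplitudes
$$\langle X \rangle = 2\cos\left(\tfrac{\pi\alpha}{2}\right)\sin\left(\tfrac{\pi\alpha}{2}\right) = \sin(\pi\alpha).$$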
def my_grad(obs, alpha, eps=0.01):
grad = 0
f_x = my_expectation(obs, alpha)
f_x_prime = my_expectation(obs, alpha + eps)
return ((f_x_prime - f_x) / eps).real
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
Explanation: If you define $f_{1}(\alpha) = \langle Y(\alpha)| X | Y(\alpha)\rangle$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
End of explanation
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
Explanation: 2. The need for a differentiator
With larger circuits, you won't always have a formula that precisely calculates the gradients of a given quantum circuit. When a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance, you can recreate the above example in TensorFlow Quantum (TFQ) as follows:
End of explanation
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
Explanation: However, if you switch to estimating the expectation value based on sampling (what would happen on a true device), the values can change a little, which means the expectation value becomes imprecise:
End of explanation
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
Explanation: This can compound into a serious accuracy problem when it comes to gradients:
End of explanation
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
Explanation: Here you can see that although the finite difference formula is fast to compute the gradients themselves in the analytical case, it was far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient is calculated. Next you will look at a much slower technique that isn't as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:
End of explanation
pauli_z = cirq.Z(qubit)
pauli_z
Explanation: The above shows that certain differentiators are best suited to certain research scenarios. In general, the slower sample-based methods that are robust to device noise are good differentiators when testing or implementing algorithms in a more realistic setting. Faster methods like finite difference are great for analytical calculations where higher throughput is needed but the device viability of your algorithm isn't yet a concern.
3. Multiple observables
Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
End of explanation
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
Explanation: This observable used with the same circuit as before gives $f_{2}(\alpha) = \langle Y(\alpha)| Z | Y(\alpha)\rangle = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Check this:
End of explanation
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
Explanation: It's a (nearly) exact match.
Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding more terms to $g$.
This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
End of explanation
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
Explanation: Here the first entry is the expectation w.r.t. Pauli X, and the second is the expectation w.r.t. Pauli Z. The gradient is then:
End of explanation
class MyDifferentiator(tfq.differentiators.Differentiator):
A Toy differentiator for <Y^alpha | X |Y^alpha>.
def __init__(self):
pass
def get_gradient_circuits(self, programs, symbol_names, symbol_values):
Return circuits to compute gradients for given forward pass circuits.
Every gradient on a quantum computer can be computed via measurements
of transformed quantum circuits. Here, you implement a custom gradient
for a specific circuit. For a real differentiator, you will need to
implement this function in a more general way. See the differentiator
implementations in the TFQ library for examples.
# The two terms in the derivative are the same circuit...
batch_programs = tf.stack([programs, programs], axis=1)
# ... with shifted parameter values.
shift = tf.constant(1/2)
forward = symbol_values + shift
backward = symbol_values - shift
batch_symbol_values = tf.stack([forward, backward], axis=1)
# Weights are the coefficients of the terms in the derivative.
num_program_copies = tf.shape(batch_programs)[0]
batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),
[num_program_copies, 1, 1])
# The index map simply says which weights go with which circuits.
batch_mapper = tf.tile(
tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])
return (batch_programs, symbol_names, batch_symbol_values,
batch_weights, batch_mapper)
Explanation: Here you confirmed that the sum of the gradients over the observables is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in compatibility with the rest of TensorFlow.
4. Advanced usage
All differentiators that exist inside of TensorFlow Quantum subclass tfq.differentiators.Differentiator. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement get_gradient_circuits, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload differentiate_analytic and differentiate_sampled; the class tfq.differentiators.Adjoint takes this route.
The following implements a circuit gradient using TensorFlow Quantum, with a small example of parameter shifting.
Recall the circuit you defined above, $|\alpha\rangle = Y^{\alpha}|0\rangle$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = \langle \alpha|X|\alpha\rangle$. Using parameter shift rules, for this circuit, you can find that the derivative is $$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{ \pi}{2} f\left(\alpha - \frac{1}{2}\right)$$ The get_gradient_circuits function returns the components of this derivative.
End of explanation
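As a quick sanity check of that shift formula against the earlier closed form $f(\alpha) = \sin(\pi\alpha)$ (standard trigonometry only, nothing TFQ-specific):
$$\frac{\pi}{2}\left[\sin\left(\pi\alpha + \tfrac{\pi}{2}\right) - \sin\left(\pi\alpha - \tfrac{\pi}{2}\right)\right] = \frac{\pi}{2}\cdot 2\cos(\pi\alpha)\sin\left(\tfrac{\pi}{2}\right) = \pi\cos(\pi\alpha),$$
which matches the analytic derivative $f_{1}^{'}(\alpha) = \pi\cos(\pi\alpha)$ used earlier in this tutorial.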
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
Explanation: The Differentiator base class uses the components returned from get_gradient_circuits to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with tfq.layer objects:
End of explanation
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[5000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Foward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
Explanation: You can now use this new differentiator to generate differentiable ops.
Key point: A differentiator can only be attached to one op at a time, so a differentiator that has previously been attached to an op must be refreshed before being attached to a new op.
End of explanation |
12,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
prelim_month - create Reliability_Names data
2016.12.04 - work log - prelim_month - create Reliability_Names
original file name
Step1: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you
Step2: Data characterization
Back to Table of Contents
Description of data, for paper.
grp_month article count = 441
Step3: Reliability data creation - prelim_month
Back to Table of Contents
Create the data.
Initialize from file
Step4: Example snapshot of configuration in this file | Python Code:
import datetime
print( "packages imported at " + str( datetime.datetime.now() ) )
Explanation: prelim_month - create Reliability_Names data
2016.12.04 - work log - prelim_month - create Reliability_Names
original file name: 2016.12.04-work_log-prelim_month-create_Reliability_Names.ipynb
This is the notebook where the underlying name comparison data was created - one row per person per article, columns for the ways up to ten different coders captured that person from the text.
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Setup" data-toc-modified-id="Setup-1"><span class="toc-item-num">1 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-1.1"><span class="toc-item-num">1.1 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-1.2"><span class="toc-item-num">1.2 </span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-1.3"><span class="toc-item-num">1.3 </span>Setup - Initialize Django</a></span></li></ul></li><li><span><a href="#Data-characterization" data-toc-modified-id="Data-characterization-2"><span class="toc-item-num">2 </span>Data characterization</a></span></li><li><span><a href="#Reliability-data-creation---prelim_month" data-toc-modified-id="Reliability-data-creation---prelim_month-3"><span class="toc-item-num">3 </span>Reliability data creation - <code>prelim_month</code></a></span></li><li><span><a href="#Database-backup---sourcenet-2016.12.04.pgsql.gz" data-toc-modified-id="Database-backup---sourcenet-2016.12.04.pgsql.gz-4"><span class="toc-item-num">4 </span>Database backup - <code>sourcenet-2016.12.04.pgsql.gz</code></a></span></li><li><span><a href="#Data-cleanup" data-toc-modified-id="Data-cleanup-5"><span class="toc-item-num">5 </span>Data cleanup</a></span><ul class="toc-item"><li><span><a href="#Remove-single-name-reliability-data" data-toc-modified-id="Remove-single-name-reliability-data-5.1"><span class="toc-item-num">5.1 </span>Remove single-name reliability data</a></span><ul class="toc-item"><li><span><a href="#Single-name-data-assessment" data-toc-modified-id="Single-name-data-assessment-5.1.1"><span class="toc-item-num">5.1.1 </span>Single-name data assessment</a></span></li><li><span><a href="#Delete-selected-single-name-data" data-toc-modified-id="Delete-selected-single-name-data-5.1.2"><span class="toc-item-num">5.1.2 </span>Delete selected single-name data</a></span></li></ul></li></ul></li></ul></div>
Setup
Back to Table of Contents
Setup - Imports
Back to Table of Contents
End of explanation
%pwd
%ls
%run ../django_init.py
Explanation: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you:
have installed your virtualenv as a kernel.
choose the kernel for your virtualenv as the kernel for your notebook (Kernel --> Change kernel).
Since I use a virtualenv, I need to get that activated somehow inside this notebook. One option is to run ../dev/wsgi.py in this notebook, to configure the python environment manually as if you had activated the sourcenet virtualenv. To do this, you'd make a code cell that contains:
%run ../dev/wsgi.py
This is sketchy, however, because of the changes it makes to your Python environment within the context of whatever your current kernel is. I'd worry about collisions with the actual Python 3 kernel. Better, one can install their virtualenv as a separate kernel. Steps:
activate your virtualenv:
workon sourcenet
in your virtualenv, install the package ipykernel.
pip install ipykernel
use the ipykernel python program to install the current environment as a kernel:
python -m ipykernel install --user --name <env_name> --display-name "<display_name>"
sourcenet example:
python -m ipykernel install --user --name sourcenet --display-name "sourcenet (Python 3)"
More details: http://ipython.readthedocs.io/en/stable/install/kernel_install.html
Setup - Initialize Django
Back to Table of Contents
First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings.
End of explanation
from context_text.models import Article
# how many articles in "grp_month"?
article_qs = Article.objects.filter( tags__name__in = [ "grp_month" ] )
grp_month_count = article_qs.count()
print( "grp_month count = {}".format( grp_month_count ) )
Explanation: Data characterization
Back to Table of Contents
Description of data, for paper.
grp_month article count = 441
End of explanation
%run ../config-coder_index-prelim_month.py
Explanation: Reliability data creation - prelim_month
Back to Table of Contents
Create the data.
Initialize from file:
End of explanation
# output debug JSON to file
my_reliability_instance.debug_output_json_file_path = "/home/jonathanmorgan/" + label + ".json"
#===============================================================================
# process
#===============================================================================
# process articles
#my_reliability_instance.process_articles( tag_list )
# output to database.
#my_reliability_instance.output_reliability_data( label )
print( "reliability data created at " + str( datetime.datetime.now() ) )
Explanation: Example snapshot of configuration in this file:
'''
You must create an index-able instance and place it in my_index_instance before
you run this code. The index configuration in this file will be applied to
the instance stored in "my_index_instance".
Objects you can pass in this instance:
from context_analysis.reliability.reliability_names_builder import ReliabilityNamesBuilder
from context_analysis.network.network_person_info import NetworkPersonInfo
'''
# imports
import datetime
# sourcenet imports
from context_text.shared.context_text_base import ContextTextBase
# context_analysis imports
from context_analysis.reliability.reliability_names_builder import ReliabilityNamesBuilder
from context_analysis.network.network_person_info import NetworkPersonInfo
# return reference
index_helper_OUT = None
# declare variables
tag_list = None
label = ""
# declare variables - user setup
my_info_instance = None
my_reliability_instance = None
current_coder = None
current_coder_id = -1
current_priority = -1
# declare variables - Article_Data filtering.
coder_type = ""
#===============================================================================
# configure
#===============================================================================
# list of tags of articles we want to process.
tag_list = [ "grp_month", ]
# label to associate with results, for subsequent lookup.
label = "prelim_month"
# create index instances
my_info_instance = NetworkPersonInfo()
my_reliability_instance = ReliabilityNamesBuilder()
# ! ====> map coders to indices
# set it up so that...
# ...the ground truth user has highest priority (4) for index 1...
current_coder = ContextTextBase.get_ground_truth_coding_user()
current_coder_id = current_coder.id
current_index = 1
current_priority = 4
my_info_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
my_reliability_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
# ...coder ID 8 is priority 3 for index 1...
current_coder_id = 8
current_index = 1
current_priority = 3
my_info_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
my_reliability_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
# ...coder ID 9 is priority 2 for index 1...
current_coder_id = 9
current_index = 1
current_priority = 2
my_info_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
my_reliability_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
# ...coder ID 10 is priority 1 for index 1...
current_coder_id = 10
current_index = 1
current_priority = 1
my_info_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
my_reliability_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
# ...and automated coder (2) is index 2
current_coder = ContextTextBase.get_automated_coding_user()
current_coder_id = current_coder.id
current_index = 2
current_priority = 1
my_info_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
my_reliability_instance.add_coder_at_index( current_coder_id, current_index, priority_IN = current_priority )
# and only look at coding by those users. And...
# configure so that it limits to automated coder_type of OpenCalais_REST_API_v2.
coder_type = "OpenCalais_REST_API_v2"
#my_reliability_instance.limit_to_automated_coder_type = "OpenCalais_REST_API_v2"
my_info_instance.automated_coder_type_include_list.append( coder_type )
my_reliability_instance.automated_coder_type_include_list.append( coder_type )
index_helper_OUT = my_info_instance.get_index_helper()
print( "indexing for grp_month/prelim_month initialized at " + str( datetime.datetime.now() ) )
End of explanation |
12,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Resonances of Jupiter's moons, Io, Europa, and Ganymede
Example provided by Deborah Lokhorst. In this example, the four Galilean moons of Jupiter are downloaded from HORIZONS and their orbits are integrated forwards in time. This is a well-known example of a 1
Step1: Let us now calculate the mean motions and periods of the inner three moons.
Step2: We can see that the mean motions of each moon are half that of the moon inner to it and the periods of each moon are twice that of the moon inner to it. This means we are close to a 4:2:1 resonance.
Step3: Note that REBOUND automatically plots Jupiter as the central body in this frame, complete with a star symbol (not completely representative of this case, but it'll do).
We can now start integrating the system forward in time. This example uses the symplectic Wisdom-Holman type whfast integrator since no close encounters are expected. The timestep is set to 5% of one of Io's orbits.
Step4: Similar to what was done in the Fourier analysis & resonances example, we set up several arrays to hold values as the simulation runs. This includes the positions of the moons, eccentricities, mean longitudes, and longitude of pericentres.
Step5: If we plot the eccentricities as a function of time, one can see that they oscillate significantly for the three inner moons, which are in resonance with each other. Contrasting with these large oscillations, is the smaller oscillation of the outer Galilean moon, Callisto, which is shown for comparison. The three inner moons are in resonance, 1
Step6: We can plot their x-locations as a function of time as well, and observe their relative motions around Jupiter.
Step7: Resonances are identified by looking at the resonant arguments, which are defined as
Step8: Io, Europa and Ganymede are in a Laplace 1
Step9: For completeness, let's take a brief look at the Fourier transforms of the x-positions
of Io, and see if it has oscillations related to the MMR.
We are going to use the scipy Lomb-Scargle periodogram function,
which is good for non-uniform time series analysis. Therefore,
if we used the IAS15 integrator, which has adaptive timesteps,
this function would still work. | Python Code:
import rebound
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
sim = rebound.Simulation()
sim.units = ('AU', 'days', 'Msun')
# We can add Jupiter and four of its moons by name, since REBOUND is linked to the HORIZONS database.
labels = ["Jupiter", "Io", "Europa","Ganymede","Callisto"]
sim.add(labels)
Explanation: Resonances of Jupiter's moons, Io, Europa, and Ganymede
Example provided by Deborah Lokhorst. In this example, the four Galilean moons of Jupiter are downloaded from HORIZONS and their orbits are integrated forwards in time. This is a well-known example of a 1:2:4 resonance (also called Laplace resonance) in orbiting bodies. We calculate the resonant arguments and see them oscillate with time. We also perform a Fast Fourier Transform (FFT) on the x-position of Io, to look for the period of oscillations caused by the 2:1 resonance between Io and Europa.
Let us first import REBOUND, numpy and matplotlib. We then download the current coordinates for Jupiter and its moons from the NASA HORIZONS database. We work in units of AU, days and solar masses.
End of explanation
os = sim.calculate_orbits()
print("n_i (in rad/days) = %6.3f, %6.3f, %6.3f" % (os[0].n,os[1].n,os[2].n))
print("P_i (in days) = %6.3f, %6.3f, %6.3f" % (os[0].P,os[1].P,os[2].P))
Explanation: Let us now calculate the mean motions and periods of the inner three moons.
End of explanation
sim.move_to_com()
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, periastron=True)
Explanation: We can see that the mean motions of each moon are half that of the moon inner to it and the periods of each moon are twice that of the moon inner to it. This means we are close to a 4:2:1 resonance.
Let's move to the center of mass (COM) frame and plot the orbits of the four moons around Jupiter:
End of explanation
sim.integrator = "whfast"
sim.dt = 0.05 * os[0].P # 5% of Io's period
Nout = 100000 # number of points to display
tmax = 80*365.25 # let the simulation run for 80 years
Nmoons = 4
Explanation: Note that REBOUND automatically plots Jupiter as the central body in this frame, complete with a star symbol (not completely representative of this case, but it'll do).
We can now start integrating the system forward in time. This example uses the symplectic Wisdom-Holman type whfast integrator since no close encounters are expected. The timestep is set to 5% of one of Io's orbits.
End of explanation
x = np.zeros((Nmoons,Nout))
ecc = np.zeros((Nmoons,Nout))
longitude = np.zeros((Nmoons,Nout))
varpi = np.zeros((Nmoons,Nout))
times = np.linspace(0.,tmax,Nout)
ps = sim.particles
for i,time in enumerate(times):
sim.integrate(time)
# note we use integrate() with the default exact_finish_time=1, which changes the timestep near
# the outputs to match the output times we want. This is what we want for a Fourier spectrum,
# but technically breaks WHFast's symplectic nature. Not a big deal here.
os = sim.calculate_orbits()
for j in range(Nmoons):
x[j][i] = ps[j+1].x
ecc[j][i] = os[j].e
longitude[j][i] = os[j].l
varpi[j][i] = os[j].Omega + os[j].omega
Explanation: Similar to what was done in the Fourier analysis & resonances example, we set up several arrays to hold values as the simulation runs. This includes the positions of the moons, eccentricities, mean longitudes, and longitude of pericentres.
End of explanation
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
plt.plot(times,ecc[0],label=labels[1])
plt.plot(times,ecc[1],label=labels[2])
plt.plot(times,ecc[2],label=labels[3])
plt.plot(times,ecc[3],label=labels[4])
ax.set_xlabel("Time (days)")
ax.set_ylabel("Eccentricity")
plt.legend();
Explanation: If we plot the eccentricities as a function of time, one can see that they oscillate significantly for the three inner moons, which are in resonance with each other. Contrasting with these large oscillations, is the smaller oscillation of the outer Galilean moon, Callisto, which is shown for comparison. The three inner moons are in resonance, 1:2:4, but Callisto is not quite in resonance with them, though it is expected to migrate into resonance with them eventually.
Also visible is the gradual change in eccentricity as a function of time: Callisto's mean eccentricity is decreasing and Ganymede's mean eccentricity is increasing. This is a secular change due to the interactions with the inner moons.
End of explanation
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
plt.plot(times,x[0],label=labels[1])
plt.plot(times,x[1],label=labels[2])
plt.plot(times,x[2],label=labels[3])
plt.plot(times,x[3],label=labels[4])
ax.set_xlim(0,0.2*365.25)
ax.set_xlabel("Time (days)")
ax.set_ylabel("x locations (AU)")
ax.tick_params()
plt.legend();
Explanation: We can plot their x-locations as a function of time as well, and observe their relative motions around Jupiter.
End of explanation
def zeroTo360(val):
while val < 0:
val += 2*np.pi
while val > 2*np.pi:
val -= 2*np.pi
return (val*180/np.pi)
def min180To180(val):
while val < -np.pi:
val += 2*np.pi
while val > np.pi:
val -= 2*np.pi
return (val*180/np.pi)
# We can calculate theta, the resonant argument of the 1:2 Io-Europa orbital resonance,
# which oscillates about 0 degrees:
theta = [min180To180(2.*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)]
# There is also a secular resonance argument, corresponding to the difference in the longitude of perihelions:
# This angle oscillates around 180 degs, with a longer period component.
theta_sec = [zeroTo360(-varpi[1][i] + varpi[0][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times,theta)
ax.plot(times,theta_sec) # secular resonance argument
ax.set_xlim([0,20.*365.25])
ax.set_ylim([-180,360.])
ax.set_xlabel("time (days)")
ax.set_ylabel(r"resonant argument $\theta_{2:1}$")
ax.plot([0,100],[180,180],'k--')
ax.plot([0,100],[0,0],'k--')
Explanation: Resonances are identified by looking at the resonant arguments, which are defined as:
$$ \theta = (p + q)\lambda_{\rm out} - p \lambda_{\rm in} - q \omega_{\rm out/in}$$
where $\lambda_{\rm out}$ and $\lambda_{\rm in}$ are the mean longitudes of the outer and inner bodies, respectively,
and $\omega_{\rm out}$ is the longitude of pericenter of the outer/inner body.
The ratio of periods is defined as: $$P_{\rm in}/P_{\rm out} \approx p / (p + q)$$
If the resonant argument, $\theta$, oscillates but is constrained within some range of angles, then
there is a resonance between the inner and outer bodies. We call this libration of the angle $\theta$.
The trick is to find what the values of q and p are. For our case, we can easily see that
there are two 2:1 resonances between the moons, so their resonant arguments would follow
the function:
$$\theta = 2 \lambda_{\rm out} - \lambda_{\rm in} - \omega_{\rm out}$$
To make the plotting easier, we can borrow this helper function that puts angles into 0 to 360 degrees
from another example (Fourier analysis & resonances), and define a new one that puts angles
into -180 to 180 degrees.
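As a quick sanity check (an added illustration, not part of the original example), the slow variation of the 2:1 resonant argument can be read straight off the mean motions computed earlier: the combination 2 n_Europa - n_Io is tiny compared to either mean motion.
# illustrative check of the 2:1 near-commensurability between Io and Europa
os_check = sim.calculate_orbits()
n_io, n_europa = os_check[0].n, os_check[1].n
print("2*n_Europa - n_Io = %8.5f rad/day (n_Io = %8.5f rad/day)" % (2.*n_europa - n_io, n_io))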
End of explanation
thetaL = [zeroTo360(-longitude[0][i] + 3.*longitude[1][i] - 2.*longitude[2][i]) for i in range(Nout)]
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
ax.plot(times,thetaL)
ax.set_ylim([0,360.])
ax.set_xlabel("time (days)")
ax.set_ylabel(r"libration argument $\theta_{2:1}$")
ax.plot([0,200],[180,180],'k--')
Explanation: Io, Europa and Ganymede are in a Laplace 1:2:4 resonance,
which additionally has a longer period libration argument that depends on all three of
their mean longitudes and also appears weakly in the other resonant arguments:
End of explanation
from scipy import signal
Npts = 3000
# look for periodicities with periods logarithmically spaced between 0.001 yrs and 10 yrs
logPmin = np.log10(0.001*365.25)
logPmax = np.log10(10.*365.25)
# set up a logspaced array of periods from 0.001 to 10 yrs
Ps = np.logspace(logPmin,logPmax,Npts)
# calculate an array of corresponding angular frequencies
ws = np.asarray([2*np.pi/P for P in Ps])
# calculate the periodogram (for Io) (using ws as the values for which to compute it)
periodogram = signal.lombscargle(times,x[0],ws)
fig = plt.figure(figsize=(12,5))
ax = plt.subplot(111)
# Since the computed periodogram is unnormalized, taking the value A**2*N/4,
# we renormalize the results by applying these functions inversely to the output:
ax.set_xscale('log')
ax.set_xlim([10**logPmin,10**logPmax])
ax.set_xlabel("Period (days)")
ax.set_ylabel("Power")
ax.plot(Ps,np.sqrt(4*periodogram/Nout))
Explanation: For completeness, let's take a brief look at the Fourier transforms of the x-positions
of Io, and see if it has oscillations related to the MMR.
We are going to use the scipy Lomb-Scargle periodogram function,
which is good for non-uniform time series analysis. Therefore,
if we used the IAS15 integrator, which has adaptive timesteps,
this function would still work.
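As a small follow-up (an added illustration, not in the original example), the period of the strongest peak can be pulled out of the periodogram and compared with Io's orbital period of roughly 1.77 days:
# report the period of the strongest peak in Io's x-position periodogram
best = np.argmax(periodogram)
print("strongest periodicity at P = %.3f days" % Ps[best])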
End of explanation |
12,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Annotation
Consider a binary classification problem. We will fit a predictor and use it to assign a weight score to each node in each instance; this operation is referred to as "annotation". For illustration purposes we will display a few annotated graphs. We will see that building a predictor on the annotated instances can increase the predictive performance.
load data and convert it to graphs
Step1: setup the vectorizer
Step2: annotate instances and list all resulting graphs
display one graph as an example. Color the vertices using the annotated 'importance' attribute.
Step3: Create a data matrix this time using the annotated graphs. Note that now graphs are weighted.
Evaluate the predictive performance on the weighted graphs. | Python Code:
pos = 'bursi.pos.gspan'
neg = 'bursi.neg.gspan'
from eden.converter.graph.gspan import gspan_to_eden
iterable_pos = gspan_to_eden( pos )
iterable_neg = gspan_to_eden( neg )
#split train/test
train_test_split=0.9
from eden.util import random_bipartition_iter
iterable_pos_train, iterable_pos_test = random_bipartition_iter(iterable_pos, relative_size=train_test_split)
iterable_neg_train, iterable_neg_test = random_bipartition_iter(iterable_neg, relative_size=train_test_split)
Explanation: Annotation
Consider a binary classification problem. We will fit a predictor and use it to assign a weight score to each node in each instance; this operation is referred to as "annotation". For illustration purposes we will display a few annotated graphs. We will see that building a predictor on the annotated instances can increase the predictive performance.
load data and convert it to graphs
End of explanation
from eden.graph import Vectorizer
vectorizer = Vectorizer( complexity=2 )
%%time
from itertools import tee
iterable_pos_train,iterable_pos_train_=tee(iterable_pos_train)
iterable_neg_train,iterable_neg_train_=tee(iterable_neg_train)
iterable_pos_test,iterable_pos_test_=tee(iterable_pos_test)
iterable_neg_test,iterable_neg_test_=tee(iterable_neg_test)
from eden.util import fit,estimate
estimator = fit(iterable_pos_train_, iterable_neg_train_, vectorizer, n_iter_search=5)
estimate(iterable_pos_test_, iterable_neg_test_, estimator, vectorizer)
Explanation: setup the vectorizer
End of explanation
help(vectorizer.annotate)
%matplotlib inline
from itertools import tee
iterable_pos_train,iterable_pos_train_=tee(iterable_pos_train)
graphs = vectorizer.annotate( iterable_pos_train_, estimator=estimator )
import itertools
graphs = itertools.islice( graphs, 3 )
from eden.util.display import draw_graph
for graph in graphs: draw_graph( graph, vertex_color='importance', size=10 )
%matplotlib inline
from itertools import tee
iterable_pos_train,iterable_pos_train_=tee(iterable_pos_train)
graphs = vectorizer.annotate( iterable_pos_train_, estimator=estimator )
from eden.modifier.graph.vertex_attributes import colorize_binary
graphs = colorize_binary(graph_list = graphs, output_attribute = 'color_value', input_attribute='importance', level=0)
import itertools
graphs = itertools.islice( graphs, 3 )
from eden.util.display import draw_graph
for graph in graphs: draw_graph( graph, vertex_color='color_value', size=10 )
Explanation: annotate instances and list all resulting graphs
display one graph as an example. Color the vertices using the annotated 'importance' attribute.
End of explanation
%%time
a_estimator=estimator
num_iterations = 3
reweight = 0.6
for i in range(num_iterations):
print 'Iteration %d'%i
from itertools import tee
iterable_pos_train_=vectorizer.annotate( iterable_pos_train, estimator=a_estimator, reweight=reweight )
iterable_neg_train_=vectorizer.annotate( iterable_neg_train, estimator=a_estimator, reweight=reweight )
iterable_pos_test_=vectorizer.annotate( iterable_pos_test, estimator=a_estimator, reweight=reweight )
iterable_neg_test_=vectorizer.annotate( iterable_neg_test, estimator=a_estimator, reweight=reweight )
iterable_pos_train,iterable_pos_train_=tee(iterable_pos_train_)
iterable_neg_train,iterable_neg_train_=tee(iterable_neg_train_)
iterable_pos_test,iterable_pos_test_=tee(iterable_pos_test_)
iterable_neg_test,iterable_neg_test_=tee(iterable_neg_test_)
from eden.util import fit,estimate
a_estimator = fit(iterable_pos_train_, iterable_neg_train_, vectorizer)
estimate(iterable_pos_test_, iterable_neg_test_, a_estimator, vectorizer)
Explanation: Create a data matrix this time using the annotated graphs. Note that now graphs are weighted.
Evaluate the predictive performance on the weighted graphs.
End of explanation |
12,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Booleans" data-toc-modified-id="Booleans-1"><span class="toc-item-num">1 </span>Booleans</a></div><div class="lev2 toc-item"><a href="#Not-True-/-Not-False?" data-toc-modified-id="Not-True-/-Not-False?-11"><span class="toc-item-num">1.1 </span>Not True / Not False?</a></div><div class="lev2 toc-item"><a href="#and-/-or-?" data-toc-modified-id="and-/-or-?-12"><span class="toc-item-num">1.2 </span>and / or ?</a></div><div class="lev1 toc-item"><a href="#Boolean-Operations" data-toc-modified-id="Boolean-Operations-2"><span class="toc-item-num">2 </span>Boolean Operations</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-21"><span class="toc-item-num">2.1 </span>Exercise</a></div>
# Booleans
Booleans are a separate data type. The origins of it lie in the work of [George Boole](https
Step1: Not True / Not False?
What's not True?
* False
What's not False?
* True
Step2: and / or ?
a and b will return True if both a and b are True
a or b will return True if either a or b are True
Step3: Boolean Operations
Step4: Exercise
Predict the outcome of the cells below | Python Code:
mybool_1 = True
print(mybool_1)
mybool_2 = False
print(mybool_2)
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Booleans" data-toc-modified-id="Booleans-1"><span class="toc-item-num">1 </span>Booleans</a></div><div class="lev2 toc-item"><a href="#Not-True-/-Not-False?" data-toc-modified-id="Not-True-/-Not-False?-11"><span class="toc-item-num">1.1 </span>Not True / Not False?</a></div><div class="lev2 toc-item"><a href="#and-/-or-?" data-toc-modified-id="and-/-or-?-12"><span class="toc-item-num">1.2 </span>and / or ?</a></div><div class="lev1 toc-item"><a href="#Boolean-Operations" data-toc-modified-id="Boolean-Operations-2"><span class="toc-item-num">2 </span>Boolean Operations</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-21"><span class="toc-item-num">2.1 </span>Exercise</a></div>
# Booleans
Booleans are a separate data type. The origins of it lie in the work of [George Boole](https://en.wikipedia.org/wiki/George_Boole), and it has its own branch of algebra called [Boolean Algebra](https://en.wikipedia.org/wiki/Boolean_algebra).
Booleans have two values - True or False. That's it. End of lesson. Go home!
Ok, maybe not, let's show you how easy this is.
End of explanation
not True
not False
Explanation: Not True / Not False?
What's not True?
* False
What's not False?
* True
End of explanation
a = True
b = True
print(a and b)
a = True
b = False
a or b
a = False
b = False
a or b
a and b
Explanation: and / or ?
a and b will return True if both a and b are True
a or b will return True if either a or b are True
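A compact way to see every combination at once (an extra illustration, not part of the original lesson):
for a in (True, False):
    for b in (True, False):
        print(a, b, '| and:', a and b, '| or:', a or b)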
End of explanation
var1 = 10
var2 = 20
var3 = 30
print((var1+var2) == var3)
print((var1+var3) == 40 and var2*2 ==40)
print((var1-var2)==100 or var3-var1 == var2)
print(not(var1 - 100)==var2 or var3-var1 == 900)
Explanation: Boolean Operations
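The last expression above is worth unpacking, because not binds more loosely than == (an added walkthrough, not part of the original lesson):
# step-by-step evaluation of print(not(var1 - 100)==var2 or var3-var1 == 900)
step1 = (var1 - 100) == var2   # -90 == 20 -> False (the comparison happens before 'not')
step2 = not step1              # not False -> True
step3 = var3 - var1 == 900     # 20 == 900 -> False
print(step2 or step3)          # True or False -> True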
End of explanation
True and True
True or False
not(True) or False
not(not(False)) or not(True or False)
True and 100 == 10**2
"Hello" == "hello" and "Howdy" == "Howdy"
not(not(1==2)) and (not(False) or (not(2==2)))
Explanation: Exercise
Predict the outcome of the cells below:
End of explanation |
12,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recently, I've become quite interested in fandoms. In particular, I'm curious about the process through which characters become the collective property of amateur writers.
With multiple authors involved, do these "shared characters" maintain a consistent storyline or mythos? What topics return again and again, and what new elements are introduced? In short, is an e pluribus unum for characters written by multiple people even possible?
My hypothesis is that such a thing is possible, due to recurring topics that hold together the core of a character even as its edges expand, sometimes in radical ways.
Source texts
All characters are, in some regard, the product of multiple authors. (Just as we're rewriting the same seven plots over and over again, arguably we're recycling the same characters, just in slightly different garb. Merlin, Gandalf, and Dumbledore come immediately to mind.) But there are particular places, especially with the advent of online writing communities, where the collaborative process of writing characters is on full display.
One of these places is creepypasta, where amateur writers submit and comment on "pastas" or small, easily sharable stories. On creepypasta, these pastas are based in the horror genre, and often involve figures familiar to the genre: devils and ghosts and haunted houses. Among their ranks is a character by the name of Slender Man, who, while not as recognizable, perhaps, as Satan, nevertheless carries his own bit of infamy.
Slender Man is an internet creation (he was born on an internet forum in 2009). He also, tragically, managed to cross into the real world when, in 2014, two young girls from Wisconsin stabbed a classmate 19 times in order to impress him.
I wanted to learn more about a character whose unrealness is so profound, whose seams show so intensely for anyone with the slightest bit of skill with a web browser, but who still manages to occupy a solid enough role in the cultural imagination that people continue to write him. And write him they do. He has his own tag on creepypasta, as well as one of the most viewed pastas: the pasta entitled simply "Slenderman" has been viewed over 1.2 million times.
To get these stories, I scraped the six pages of entries for the tag "slenderman" into separate .txt files, each labeled with cleaned versions of their titles. I parsed the .htm pages with BeautifulSoup, and ran loops over the titles and entries separately to clean them, and then rejoined them when writing to the separate .txt files. To compare the effects of cleaning (case-folding, removing stop words, and removing non-nouns) the data vs. leaving it natural, I created two separate versions and handed them over to MALLET.
Step1: Goal
The goal of this project was to see what topics suture the fragmented, yet remarkably strong pieces of a modern urban legend together. Topic modeling helped me to accomplish this goal by turning the nebulous concept of "feel" (as in, this piece "feels like" something that belongs in the Slender Man universe) into actual categories of words that create this feel.
Data
Cleaned
Case folding, removing non-nouns, stopwords (both the nltk stopwords list and Matt Jockers' expanded stopwords list), and double quotation marks (pastas have a lot of dialogue) left me with, I think, the essence of each text.
I created twenty categories, optimized, in MALLET. I tried my hand at providing them with topics. Because they share so many words between them (knives, woods, and eyes are mentioned a great deal), distinct categories were hard to create. While it seems that these topics share approximately the same essence (there are knives, and woods, and eyes involved in some way, shape, or form throughout most of them), they repeat these elements in slightly different ways.
Step2: Body parts, for instance, come up in several topics (GIRL, CAMP), but not with the same insistence as in the BODY category. In the BODY category, bodies are also linked to trees, which makes sense as Slender Man's body is known to be tree-like.
Some categories, like TRAVEL and CAR, are very strong. Others, like OUTSIDE, are really just a loosely connected grouping of words that reminded me of someone looking at something (like a house) or experiencing something from the outside of it. While OUTSIDE is garbled, it contrasts pretty strongly with INSIDE, for instance, which seems to be connected by concern with the insides of a house, as well as what terrors might be contained within.
Perhaps most surprising is the GIRL category, which could have easily been categorized as the TECHNOLOGY category. In Slender Man pastas, there is apparently a connection between girls and computers, while there also appears to be a connection between boys and school.
Uncleaned
Not cleaning the data before running them through a topic modeler produced mostly garbage. | Python Code:
#1. import
from bs4 import BeautifulSoup
import nltk
with open ('stopwords_names.txt') as f:
stopwords_string = f.read()
names_tokenizer = nltk.word_tokenize(stopwords_string)
names_tokens = [word.lower() for word in names_tokenizer if word[0].isalpha()]
stop_words = nltk.corpus.stopwords.words("english") + names_tokens
#2. get tags
tagged_titles = []
tagged_entries = []
for file in files:
soup = BeautifulSoup(open(path + file), 'html.parser')
tagged_entries.append(soup.find_all('div', class_='entry'))
tagged_titles.append(soup.find_all('h2'))
#3. detag
detagged_titles = []
for titles_per_page in tagged_titles:
for title in titles_per_page:
detagged_titles.append(title.get_text())
detagged_entries = []
for entries_per_page in tagged_entries:
for entry in entries_per_page:
detagged_entries.append(entry.get_text())
#4. clean (include removing non-nouns)
final_title = []
for title in detagged_titles:
title_tokenize = title.split()
lower_tokens = [word.lower() for word in title_tokenize if word[0].isalpha()]
    deapost_tokens = [word.replace("'", '') for word in lower_tokens if word.find("'")]
title_string = ' '.join(deapost_tokens).replace(' ','')
final_title.append(title_string)
final_entry = []
for entry in detagged_entries:
entry_tokenize = entry.split()
tagged_tuple = nltk.pos_tag(entry_tokenize)
    lower_tokens = [word.lower() for word, tag in tagged_tuple if word[0].isalpha() and tag in ('NN', 'NNS')]
    stopped_cleared_tokens = [word.replace('\n', ' ').replace('"', ' ').replace('"', ' ').replace('.', ' ').replace('
', ' ').replace(';', ' ') for word in lower_tokens if word not in stop_words]
entry_string = ' '.join(stopped_cleared_tokens)
final_entry.append(entry_string)
#5. print to separate .txt files
for tm_title, tm_entry in zip(final_title, final_entry):
    print_file(tm_title, tm_entry)
Explanation: Recently, I've become quite interested in fandoms. In particular, I'm curious about the process through which characters become the collective property of amateur writers.
With multiple authors involved, do these "shared characters" maintain a consistent storyline or mythos? What topics return again and again, and what new elements are introduced? In short, is an e pluribus unum for characters written by multiple people even possible?
My hypothesis is that such a thing is possible, due to recurring topics that hold together the core of a character even as its edges expand, sometimes in radical ways.
Source texts
All characters are, in some regard, the product of multiple authors. (Just as we're rewriting the same seven plots over and over again, arguably we're recycling the same characters, just in slightly different garb. Merlin, Gandalf, and Dumbledore come immediately to mind.) But there are particular places, especially with the advent of online writing communities, where the collaborative process of writing characters is on full display.
One of these places is creepypasta, where amateur writers submit and comment on "pastas" or small, easily sharable stories. On creepypasta, these pastas are based in the horror genre, and often involve figures familiar to the genre: devils and ghosts and haunted houses. Among their ranks is a character by the name of Slender Man, who, while not as recognizable, perhaps, as Satan, nevertheless carries his own bit of infamy.
Slender Man is an internet creation (he was born on an internet forum in 2009). He also, tragically, managed to cross into the real world when, in 2014, two young girls from Wisconsin stabbed a classmate 19 times in order to impress him.
I wanted to learn more about a character whose unrealness is so profound, whose seams show so intensely for anyone with the slightest bit of skill with a web browser, but who still manages to occupy a solid enough role in the cultural imagination that people continue to write him. And write him they do. He has his own tag on creepypasta, as well as one of the most viewed pastas: the pasta entitled simply "Slenderman" has been viewed over 1.2 million times.
To get these stories, I scraped the six pages of entries for the tag "slenderman" into separate .txt files, each labeled with cleaned versions of their titles. I parsed the .htm pages with BeautifulSoup, and ran loops over the titles and entries separately to clean them, and then rejoined them when writing to the separate .txt files. To compare the effects of cleaning (case-folding, removing stop words, and removing non-nouns) the data vs. leaving it natural, I created two separate versions and handed them over to MALLET.
End of explanation
Image(filename = Path + "cleaned_keys.png", width=1100, height=1100)
Explanation: Goal
The goal of this project was to see what topics suture the fragmented, yet remarkably strong pieces of a modern urban legend together. Topic modeling helped me to accomplish this goal by turning the nebulous concept of "feel" (as in, this piece "feels like" something that belongs in the Slender Man universe) into actual categories of words that create this feel.
Data
Cleaned
Case folding, removing non-nouns, stopwords (both the nltk stopwords list and Matt Jockers' expanded stopwords list), and double quotation marks (pastas have a lot of dialogue) left me with, I think, the essence of each text.
I created twenty categories, optimized, in MALLET. I tried my hand at providing them with topics. Because they share so many words between them (knives, woods, and eyes are mentioned a great deal), distinct categories were hard to create. While it seems that these topics share approximately the same essence (there are knives, and woods, and eyes involved in some way, shape, or form throughout most of them), they repeat these elements in slightly different ways.
End of explanation
Image(filename = Path + "unclean_keys.png", width=1100, height=1100)
Explanation: Body parts, for instance, come up in several topics (GIRL, CAMP), but not with the same insistence as in the BODY category. In the BODY category, bodies are also linked to trees, which makes sense as Slender Man's body is known to be tree-like.
Some categories, like TRAVEL and CAR, are very strong. Others, like OUTSIDE, are really just a loosely connected grouping of words that reminded me of someone looking at something (like a house) or experiencing something from the outside of it. While OUTSIDE is garbled, it contrasts pretty strongly with INSIDE, for instance, which seems to be connected by concern with the insides of a house, as well as what terrors might be contained within.
Perhaps most surprising is the GIRL category, which could have easily been categorized as the TECHNOLOGY category. In Slender Man pastas, there is apparently a connection between girls and computers, while there also appears to be a connection between boys and school.
Uncleaned
Not cleaning the data before running them through a topic modeler produced mostly garbage.
End of explanation |
12,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Domestic Load Research Programme Social Survey Exploration
This notebook requires access to a data directory with DLR survey data saved as feather objects. The data files must be saved in /data/tables/ .
Step1: List of Questionaires
Step2: Search Questions
Step3: Search Answers
Step4: List of Site Locations and Corresponding RecorderIDs by Year | Python Code:
import processing.procore as pcore
import features.socios as s
tbls = pcore.loadTables()
print("Stored Data Tables\n")
for k in sorted(list(tbls.keys())):
print(k)
Explanation: Domestic Load Research Programme Social Survey Exploration
This notebook requires access to a data directory with DLR survey data saved as feather objects. The data files must be saved in /data/tables/ .
End of explanation
tbls['questionaires'][tbls['questionaires'].QuestionaireID.isin([3, 4, 6, 7, 1000000, 1000001, 1000002])]
Explanation: List of Questionaires
End of explanation
searchterm = ['earn per month', 'watersource', 'GeyserNumber', 'GeyserBroken', 'roof', 'wall', 'main switch', 'floor area']
questionaire_id = 3
s.searchQuestions(searchterm, questionaire_id)
Explanation: Search Questions
End of explanation
searchterm = ['earn per month', 'watersource', 'GeyserNumber', 'GeyserBroken', 'roof', 'wall', 'main switch', 'floor area']
questionaire_id = 3
answers = s.searchAnswers(searchterm, questionaire_id)
print(answers[1])
answers[0].head()
Explanation: Search Answers
End of explanation
s.recorderLocations(year = 2011)
Explanation: List of Site Locations and Corresponding RecorderIDs by Year
End of explanation |
12,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CMEMS Visualization
Import packages
For this visualization of a sample <i>index_latest.txt</i> dataset of the Copernicus Marine Environment Monitoring Service, we use the two packages
Step1: Selection criteria
Provider
The list dataprovider contains the name of the providers we want to keep for the plot.
Step2: Here we could also add something for the time or space domain.<br>
This would be easy to add, but there is no time to do it now.
Load and prepare data
Since the <i>index_latest.txt</i> is a formatted file, we use the numpy function <a href="http
Step3: To define the position shown on the map, we use the mean of the stored <i>geospatial_lat/lon_min/max</i> for each dataset.
Step4: Select by data provider
We create a list of indices corresponding to the entries with a provider belonging to the list specified at the beginning.
Step5: We could also take the intersection of the lists, but for that we would need to specify the provider names exactly as they appear in the index file.
netCDF file name conventions
The data specifications are coded within the netCDF file name following the conventions
Step7: Visualization
Finally, we create the map object.
Step8: Add some tiles to the dataset. | Python Code:
import numpy as np
import folium
Explanation: CMEMS Visualization
Import packages
For this visualization of a sample <i>index_latest.txt</i> dataset of the Copernicus Marine Environment Monitoring Service, we use the two packages:
* <a href="https://github.com/python-visualization/folium">folium</a> for the visualization and
* <a href="http://www.numpy.org/">numpy</a> for the data reading / processing.
End of explanation
dataproviderlist = ['IEO', 'INSTITUTO ESPANOL DE OCEANOGRAFIA', 'SOCIB']
Explanation: Selection criteria
Provider
The list dataprovider contains the name of the providers we want to keep for the plot.
End of explanation
indexfile = "./index_latest.txt"
dataindex = np.genfromtxt(indexfile, skip_header=6, unpack=True, delimiter=',', dtype=None, \
names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters'])
Explanation: Here we could also add something for the time or space domain.<br>
This would be easy to add, but there is no time to do it now.
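For instance (an illustrative sketch, not part of the original notebook; the 35-45°N / 5°W-10°E window is just an example), a simple spatial selection on the loaded index could look like this:
# keep only entries whose bounding box falls inside an example lat/lon window
lat_ok = (dataindex['geospatial_lat_min'] >= 35.) & (dataindex['geospatial_lat_max'] <= 45.)
lon_ok = (dataindex['geospatial_lon_min'] >= -5.) & (dataindex['geospatial_lon_max'] <= 10.)
spatial_selection = np.where(lat_ok & lon_ok)[0]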
Load and prepare data
Since the <i>index_latest.txt</i> is a formatted file, we use the numpy function <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html">genfromtxt</a> to extract the data from the document.
End of explanation
lon_min = dataindex['geospatial_lon_min']
lon_max = dataindex['geospatial_lon_max']
lat_min = dataindex['geospatial_lat_min']
lat_max = dataindex['geospatial_lat_max']
lonmean, latmean = 0.5*(lon_min + lon_max), 0.5*(lat_min + lat_max)
Explanation: To define the position shown on the map, we use the mean of the stored <i>geospatial_lat/lon_min/max</i> for each dataset.
End of explanation
indexlist = []
for idx, provider in enumerate(dataindex['provider']):
    matching = [s for s in dataproviderlist if s in provider]
    if matching:
        indexlist.append(idx)
Explanation: Select by data provider
We create a list of indices corresponding to the entries with a provider belonging to the list specified at the beginning.
End of explanation
regions_lut = dict()
regions_lut['GL'] = 'Global'
regions_lut['AR'] = 'Arctic'
regions_lut['BO'] = 'Baltic'
regions_lut['NO'] = 'North West Shelf'
regions_lut['IR'] = 'IBI (Iberia-Biscay-Ireland)'
regions_lut['MO'] = 'Mediterranean'
regions_lut['BS'] = 'Black Sea'
data_types_lut = dict()
data_types_lut['BA'] = 'data from Bathy messages on GTS'
data_types_lut['CT'] = 'CTD profiles'
data_types_lut['DB'] = 'Drifting buoys'
data_types_lut['FB'] = 'FerryBox'
data_types_lut['GL'] = 'Gliders'
data_types_lut['MO'] = 'Fixed buoys or mooring time series'
data_types_lut['PF'] = 'Profiling floats vertical profiles'
data_types_lut['RE'] = 'Recopesca'
data_types_lut['RF'] = 'River flows'
data_types_lut['TE'] = 'data from TESAC messages on GTS'
data_types_lut['TS'] = 'Thermosalinographs'
data_types_lut['XB'] = 'XBT or XCTD profiles'
data_specs_lut = dict()
data_specs_lut['TS'] = 'Timeseries'
data_specs_lut['PR'] = 'Profile'
Explanation: We could also take the intersection of the lists, but for that we would need to specify the provider names exactly as they appear in the index file.
netCDF file name conventions
The data specifications are coded within the netCDF file name following the conventions:
<p><b>File naming convention in the latest directory:</b></p>
<ul>
<li>RR_LATEST_XX_YY_CODE_YYYYMMDD.nc</li>
<li>RR: region bigram</li>
<li>LATEST: fixed name</li>
<li>XX: TS (timeserie) or PR (profile)</li>
<li>YY: data type</li>
<li>CODE: platform code</li>
<li>YYYYMMDD: year month day of observations</li>
<li>.nc: NetCDF file name suffix</li>
Example: GL_LATEST_PR_GL_58970_20151112.nc
</ul>
<p><b>Data types</b></p>
<ul>
<li>BA: data from Bathy messages on GTS</li>
<li>CT: CTD profiles</li>
<li>DB: Drifting buoys</li>
<li>FB: FerryBox</li>
<li>GL: Gliders</li>
<li>MO: Fixed buoys or mooring time series</li>
<li>PF: Profiling floats vertical profiles</li>
<li>RE: Recopesca</li>
<li>RF: River flows</li>
<li>TE: data from TESAC messages on GTS</li>
<li>TS: Thermosalinographs</li>
<li>XB: XBT or XCTD profiles</li>
</ul>
<p><b>Region bigram</b></p>
<ul>
<li>GL: Global</li>
<li>AR: Arctic</li>
<li>BO: Baltic</li>
<li>NO: North West Shelf</li>
<li>IR: IBI (Iberia-Biscay-Ireland)</li>
<li>MO: Mediterranean</li>
<li>BS: Black Sea</li>
</ul>
We convert this information to Python dictionaries:
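As a quick illustration of the convention (a hypothetical helper, not part of the original notebook), a file name can be split into its parts and looked up in these dictionaries:
# parse a CMEMS "latest" file name of the form RR_LATEST_XX_YY_CODE_YYYYMMDD.nc
def parse_latest_filename(name):
    parts = name[:-3].split('_')           # drop the ".nc" suffix, split on "_"
    region, spec, dtype = parts[0], parts[2], parts[3]
    platform_code = '_'.join(parts[4:-1])  # platform codes may themselves contain "_"
    date = parts[-1]                       # YYYYMMDD of the observations
    return {'region': regions_lut.get(region, region),
            'spec': data_specs_lut.get(spec, spec),
            'type': data_types_lut.get(dtype, dtype),
            'platform_code': platform_code,
            'date': date}

parse_latest_filename('GL_LATEST_PR_GL_58970_20151112.nc')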
End of explanation
map = folium.Map(location=[39.5, 2], zoom_start=8)
cntr = 0
for i in indexlist:
curr_data = dataindex[i]
link = curr_data[1]
last_idx_slash = link.rfind('/')
ncdf_file_name = link[last_idx_slash+1::]
if ncdf_file_name[10:12] in data_specs_lut:
data_spec = data_specs_lut[ncdf_file_name[10:12]]
else:
data_spec = ncdf_file_name[10:12]
if ncdf_file_name[13:15] in data_types_lut:
data_type = data_types_lut[ncdf_file_name[13:15]]
else:
data_type = ncdf_file_name[13:15]
if ncdf_file_name[0:2] in regions_lut:
region = regions_lut[ncdf_file_name[0:2]]
else:
region = ncdf_file_name[0:2]
platform_code = ncdf_file_name[16:-12]
#observation_date = ncdf_file_name[-11:-3]
provider = curr_data['provider']
data_parameters = curr_data['parameters']
time_start = curr_data['time_coverage_start']
time_end = curr_data['time_coverage_end']
    popup_html = """
<table border=0 width=300px>
<tr>
<td width="40%">Platform Code</td>
<td width="60%">{platform_code}</td>
</tr>
<tr>
<td>Provider</td>
<td>{provider}</td>
</tr>
<tr>
<td>Type of Data</td>
<td>{data_spec}</td>
</tr>
<tr>
<td>Region</td>
<td>{region}</td>
</tr>
<tr>
<td>Data Information</td>
<td>{data_type}</td>
</tr>
<tr>
<td>Provided Data</td>
<td>{data_parameters}</td>
</tr>
<tr>
<td>Time Coverage Start</td>
<td>{time_start}</td>
</tr>
<tr>
<td>Time Coverage End</td>
<td>{time_end}</td>
</tr>
<tr>
<td>NetCDF File</td>
<td><a href="{link}">FTP Server Link</a></td>
</tr>
    </table>
    """.format(platform_code=platform_code, provider=provider, data_spec=data_spec, region=region,
data_type=data_type, data_parameters=data_parameters, time_start=time_start,
time_end=time_end, link=link)
map.simple_marker( location = [latmean[i], lonmean[i]], clustered_marker = True, popup=popup_html)
Explanation: Visualization
Finally, we create the map object.
End of explanation
map.add_tile_layer(tile_name='World Ocean Base', tile_url='http://services.arcgisonline.com/arcgis/rest/services/Ocean/World_Ocean_Base/MapServer/tile/{z}/{y}/{x}')
map.add_tile_layer(tile_name='World Topo Map', tile_url='http://services.arcgisonline.com/arcgis/rest/services/World_Topo_Map/MapServer/MapServer/tile/{z}/{y}/{x}')
map.add_tile_layer(tile_name='World Ocean Reference', tile_url='http://services.arcgisonline.com/arcgis/rest/services/Ocean/World_Ocean_Reference/MapServer/tile/{z}/{y}/{x}')
map.add_layers_to_map()
map
map.create_map(path='CMEMS_latest_index.html')
Explanation: Add some tiles to the dataset.
End of explanation |
12,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
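For orientation, a minimal sketch of what such a function could look like, assembled from the layer calls described above (the hidden-layer sizes and learning rate here are illustrative choices, not the notebook's official solution):
# illustrative build_model() sketch; hidden layer sizes and learning rate are arbitrary
def build_model():
    tf.reset_default_graph()
    net = tflearn.input_data([None, 10000])                       # one unit per vocabulary word
    net = tflearn.fully_connected(net, 200, activation='ReLU')    # hidden layer
    net = tflearn.fully_connected(net, 25, activation='ReLU')     # hidden layer
    net = tflearn.fully_connected(net, 2, activation='softmax')   # positive / negative
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
                             loss='categorical_crossentropy')
    return tflearn.DNN(net)

model = build_model()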
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
from itertools import chain
total_counts = Counter(list(
chain.from_iterable([ row[0].split(' ') for idx, row in reviews.iterrows()])))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {w: i for i, w in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
words = np.zeros(len(word2idx))
for word in text.split(' '):
i = word2idx.get(word, None)
if i:
words[i] += 1
return words
Explanation: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000])
net = tflearn.fully_connected(net, 1000, activation='ReLU')
net = tflearn.fully_connected(net, 500, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. There for, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure it's performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
sentence = "Terrible example for this genre"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
12,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Start with our simple example
Let's start with $f(x) = x^2$
Step1: Let's assume we start at the top of the curve, at x = -4, and want to get down to x=0.
Step2: In this algorithm, alpha is known as the "learning rate", and all it does is keep us from taking steps that are too aggressive, where we could shoot past the minimum.
Step3: Here we can see that we took a pretty big step towards the minimum, just as we'd like.
Let's take another step
Step4: At this point, we've taken two steps in our Gradient Descent, and we've gone from x=-4 all the way to x=-1.44. Every additional step we take is going to give us smaller and smaller returns, so instead of writing out each additional step, let's do this programatically. | Python Code:
# make our x array
x = np.linspace(-4, 4, 801)
# f(x) = x^2
def f(x):
return x**2
# derivative of x^2 is 2x
def f_prime(x):
return 2*x
# take a look at the curve
plt.plot(x, f(x), c='black')
sns.despine();
Explanation: Start with our simple example
Let's start with $f(x) = x^2$:
End of explanation
# starting position on the curve
x_start = -4.0
# looking at the values of the derivative, for each value of x.
# we see the greatest change at the tops of the curve, namely 4 and -4
plt.plot(x, f_prime(x), c='black');
# learning rate
alpha = 0.2
Explanation: Let's assume we start at the top of the curve, at x = -4, and want to get down to x=0.
End of explanation
# let's take our first step!
step1 = alpha*f_prime(x_start)
# our new value of x is just the previous value, minus the step
next_x = x_start - step1
# take a look at the step we took, with respect to the curve
plt.plot(x, f(x), c='black')
plt.scatter([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.plot([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.xlim((-4, 4))
plt.ylim((0, 16))
sns.despine();
Explanation: In this algorithm, alpha is known as the "learning rate", and all it does is keep us from taking steps that are too aggressive, where we could shoot past the minimum.
End of explanation
another_x = next_x - alpha*f_prime(next_x)
# take a look at the combination of the two steps we've taken
plt.plot(x, f(x), c='black')
plt.plot([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.scatter([x_start, next_x], [f(x_start), f(next_x)], c='red')
plt.plot([x_start, next_x, another_x], [f(x_start), f(next_x), f(another_x)], c='red')
plt.scatter([x_start, next_x, another_x], [f(x_start), f(next_x), f(another_x)], c='red')
plt.xlim((-4, 4))
plt.ylim((0, 16))
sns.despine();
Explanation: Here we can see that we took a pretty big step towards the minimum, just as we'd like.
Let's take another step:
End of explanation
# how many steps we're going to take in our Descent
num_steps = 101
# hold our steps, including our initial starting position
x_steps = [x_start]
# do num_steps iterations
for i in xrange(num_steps):
prev_x = x_steps[i]
new_x = prev_x - alpha*f_prime(prev_x)
x_steps.append(new_x)
# plot the gradient descent as we go down the curve
plt.plot(x, f(x), c='black')
plt.plot(x_steps, [f(xi) for xi in x_steps], c='red')
plt.scatter(x_steps, [f(xi) for xi in x_steps], c='red')
plt.xlim((-4, 4))
plt.ylim((0, 16))
sns.despine();
# check the size of the derivative when we finished the iteration
print 'gradient at the end of our interations:', x_steps[-1]
# it's zero, for all intensive purposes
print 'Is the derivative effectively equal to zero at the bottom?', np.isclose(x_steps[-1], 0.0)
Explanation: At this point, we've taken two steps in our Gradient Descent, and we've gone from x=-4 all the way to x=-1.44. Every additional step we take is going to give us smaller and smaller returns, so instead of writing out each additional step, let's do this programatically.
End of explanation |
12,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: A Primer
Finding Hyperparameters
Grid Search?
python
param_grid = [
{'C'
Step4: Setup
Assumptions
search space for each parameter is random uniform between 0 and 1
200 iterations for each algorithm
utils
Step5: score functions
Step6: generator functions
Step8: evaluation functions
Step10: discriminative model functions
Step14: model score functions
NOTE
Step15: plotting
Step16: baselines
Step17: Play time! | Python Code:
def smbo(generator_fn,
score_fn,
evaluation_fn,
num_dims,
num_initial_points=10,
num_iter=200,
num_generated=10000):
general sequential model based optimization to minimize the result
of some presumably expensive function
generator_fn:
function that takes in the algorithm's history (X and y) and generates a
number of candidates
score_fn:
function that scores candidates given the algorithm's history
evaluation_fn:
function that computes the "true" score for a chosen candidate
# evaluated candidates
X = []
# actual scores for the evaluated candidates
y = []
for i in range(num_iter):
if i < num_initial_points:
best_candidate = generator_fn(X, y, 1, num_dims)[0]
else:
candidates = generator_fn(X, y, num_generated, num_dims)
scores = score_fn(X, y, candidates)
argmin = min(range(num_generated), key=lambda x: scores[x])
best_candidate = candidates[argmin]
actual_result = evaluation_fn(best_candidate)
X.append(best_candidate)
y.append(actual_result)
return dict(
X=X,
y=y,
best=min(y),
)
def discriminative_smbo(discriminative_model_fn, model_score_fn, **kwargs):
sequential model based optimization using a discriminative model to
directly predict how good each candidate is
discriminative_model_fn:
function that given X, y, and candidates, predicts mean expected value for
each candidate, as well as the std of the prediction
model_score_fn:
function that takes in the best score so far, candidate prediction means
and variances, and scores each candidate
def score_fn(X, y, candidates):
means, stds = discriminative_model_fn(X, y, candidates)
return model_score_fn(min(y), means, stds)
return smbo(score_fn=score_fn, **kwargs)
def generative_smbo(generative_model_fn, percentile, epsilon=1e-8, **kwargs):
sequential model based optimization using a generative model to
predict the likelihood that each candidate is in a good or bad region
of the search space
generative_model_fn:
function that given X and candidates, computes the likelihood that each
candidate is from the X distribution
percentile:
scores below this percentile of scores are considered "good"
def score_fn(X, y, candidates):
good_idxs = y < np.percentile(y, percentile)
good_X = X[good_idxs]
bad_X = X[~good_idxs]
# generate probability that each candidate is from good distribution
p_good = generative_model_fn(good_X, candidates)
# generate probability that each candidate is from bad distribution
p_bad = generative_model_fn(bad_X, candidates)
# want to minimize p(bad) / p(good)
return p_bad / (p_good + epsilon)
return smbo(score_fn=score_fn, **kwargs)
Explanation: A Primer
Finding Hyperparameters
Grid Search?
python
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['rbf']},
]
Scales exponentially in number of hyperparameters!
Random Search > Grid search
Random Search for Hyper-Parameter Optimization
Random searching is pretty "stupid" though...
Can we do better?
Sequential Model-Based Optimization for General Algorithm Configuration
Hyperopt: A Python Library for Optimizing the
Hyperparameters of Machine Learning Algorithms
End of explanation
@contextlib.contextmanager
def timer(title):
start_time = time.time()
try:
yield
finally:
duration = time.time() - start_time
print("%s took %fs" % (title, duration))
Explanation: Setup
Assumptions
search space for each parameter is random uniform between 0 and 1
200 iterations for each algorithm
utils
End of explanation
def random_score(X, y, candidates):
return np.random.rand(len(candidates))
Explanation: score functions
End of explanation
def random_generator(X, y, num_candidates, num_dims):
return np.random.rand(num_candidates, num_dims)
Explanation: generator functions
End of explanation
def branin_hoo(params, noisy=False):
http://www.sfu.ca/~ssurjano/branin.html
unscaled_x1, unscaled_x2 = params
x1 = unscaled_x1 * 15 - 5
x2 = unscaled_x2 * 15
a = 1
b = 5.1 / (4 * np.pi ** 2)
c = 5 / np.pi
r = 6
s = 10
t = 1 / (8 * np.pi)
term1 = a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2
term2 = s * (1 - t) * np.cos(x1)
return term1 + term2 + s
branin_hoo([(np.pi + 5) / 15, 2.275 / 15])
def branin_hoo_with_useless_dimensions(total_dims):
assert total_dims > 2
p1 = np.random.randint(total_dims)
p2 = np.random.randint(total_dims)
def inner(params, noisy=False):
return branin_hoo([params[p1], params[p2]], noisy)
return inner
Explanation: evaluation functions
End of explanation
def extra_trees_fn(X, y, candidates):
kwargs = dict(
n_estimators=10,
max_depth=6,
min_samples_split=2,
min_samples_leaf=1,
bootstrap=True,
max_features="sqrt"
)
clf = ensemble.ExtraTreesRegressor(**kwargs)
clf.fit(X, y)
results = [est.predict(candidates) for est in clf.estimators_]
means = np.mean(results, axis=0)
stds = np.std(results, axis=0)
return means, stds
def random_forest_fn(X, y, candidates):
kwargs = dict(
n_estimators=10,
max_depth=6,
min_samples_split=2,
min_samples_leaf=1,
bootstrap=True,
max_features="sqrt"
)
clf = ensemble.RandomForestRegressor(**kwargs)
clf.fit(X, y)
results = [est.predict(candidates) for est in clf.estimators_]
means = np.mean(results, axis=0)
stds = np.std(results, axis=0)
return means, stds
# FIXME
def gaussian_process_fn(X, y, candidates):
http://scikit-learn.org/stable/modules/gaussian_process.html
kwargs = dict(
theta0=1e-2,
thetaL=1e-4,
thetaU=1e-1,
)
clf = GaussianProcess(**kwargs)
clf.fit(X, y)
y_pred, sigma2_pred = clf.predict(candidates, eval_MSE=True)
return y_pred, np.sqrt(sigma2_pred)
Explanation: discriminative model functions
End of explanation
def minimum_mean(f_min, means, stds):
return -means
def maximum_uncertainty(f_min, means, stds):
return stds
def expected_improvement_1(f_min, means, stds):
assumes log scale cost
made for optimizing run time of optimization algorithms
http://www.cs.ubc.ca/~hutter/papers/11-LION5-SMAC.pdf
# v is a scaled version of the best score
v = (f_min - means) / stds
term1 = f_min * scipy.stats.norm.cdf(v)
# NOTE (v - stds) seems super wrong to me!
term2 = np.exp(0.5 * stds ** 2 + means) * scipy.stats.norm.cdf(v - stds)
return term1 - term2
def expected_improvement_2(f_min, means, stds):
older version of equation 1
probably also assumes log scale cost
http://www.cs.ubc.ca/~hutter/papers/11-LION5-SMAC.pdf
# ignore f_min and replace with means + stds
return expected_improvement_1(means + stds, means, stds)
def expected_improvement_3(f_min, means, stds):
http://arxiv.org/abs/1208.3719
u = (f_min - means) / stds
return stds * (u * scipy.stats.norm.cdf(u) + scipy.stats.norm.pdf(u))
Explanation: model score functions
NOTE: higher score = better
End of explanation
def plot_bests(results):
lines = []
for name, res in results:
y = res["y"]
label = "%s=%f" % (name, min(y))
bests = [np.min(y[:l]) for l in range(1, len(y) + 1)]
line, = pylab.plot(bests, label=label)
lines.append(line)
pylab.legend(handles=lines)
Explanation: plotting
End of explanation
def grid_search(evaluation_fn, num_dims, num_iter=200, shuffle=False):
iters_per_dim = int(np.ceil(num_iter ** (1.0 / num_dims)))
X = []
y = []
spaces = [np.linspace(0, 1, iters_per_dim) for _ in range(num_dims)]
if shuffle:
for s in spaces:
np.random.shuffle(s)
for params in itertools.product(*spaces):
y.append(evaluation_fn(params))
X.append(params)
if len(X) == num_iter:
break
return dict(
X=X,
y=y,
best=min(y),
)
def random_search(evaluation_fn, num_dims, num_iter=200):
return smbo(generator_fn=random_generator,
score_fn=random_score,
evaluation_fn=evaluation_fn,
num_dims=num_dims,
num_iter=num_iter,
num_generated=1)
def hyperopt_search(evaluation_fn,
num_dims,
num_iter=200):
space = [hp.uniform(str(idx), 0, 1) for idx in range(num_dims)]
X = []
y = []
def objective(candidate):
result = evaluation_fn(candidate)
y.append(result)
X.append(candidate)
return result
hyperopt.fmin(objective,
space=space,
algo=hyperopt.tpe.suggest,
max_evals=num_iter)
return dict(
X=X,
y=y,
best=min(y),
)
Explanation: baselines
End of explanation
extra_trees_search = partial(
discriminative_smbo,
discriminative_model_fn=extra_trees_fn,
model_score_fn=maximum_uncertainty,
generator_fn=random_generator,
)
gaussian_process_search = partial(
discriminative_smbo,
discriminative_model_fn=gaussian_process_fn,
model_score_fn=expected_improvement_3,
generator_fn=random_generator,
)
names_and_search_fns = [
("grid", grid_search),
("grid_shuffle", partial(grid_search, shuffle=True)),
("rand", random_search),
("hyperopt", hyperopt_search),
("xtrees", extra_trees_search),
# ("gp", gaussian_process_search),
]
num_dims = 2
eval_fn = branin_hoo
results = []
for name, search_fn in names_and_search_fns:
with timer(name):
res = search_fn(evaluation_fn=eval_fn, num_dims=num_dims)
results.append((name, res))
plot_bests(results)
num_dims = 100
eval_fn = branin_hoo_with_useless_dimensions(num_dims)
results = []
for name, search_fn in names_and_search_fns:
with timer(name):
res = search_fn(evaluation_fn=eval_fn, num_dims=num_dims)
results.append((name, res))
plot_bests(results)
Explanation: Play time!
End of explanation |
12,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dual CRISPR Screen Analysis
Step 1
Step1: Automated Set-Up
Step2: Scaffold Trimming Functions
Step3: Gzipped FASTQ Filenames
Step4: FASTQ Gunzip Execution
Step5: FASTQ Filenames
Step6: Scaffold Trim Execution
Step7: Trimmed FASTQ Filenames | Python Code:
g_num_processors = 3
g_fastqs_dir = '~/dual_crispr/test_data/test_set_1'
g_trimmed_fastqs_dir = '~/dual_crispr/test_outputs/test_set_1'
g_full_5p_r1 = 'TATATATCTTGTGGAAAGGACGAAACACCG'
g_full_5p_r2 = 'CCTTATTTTAACTTGCTATTTCTAGCTCTAAAAC'
g_full_3p_r1 = 'GTTTCAGAGCTATGCTGGAAACTGCATAGCAAGTTGAAATAAGGCTAGTCCGTTATCAACTTGAAAAAGTGGCACCGAGTCGGTGCTTTTTTGTACTGAG'
g_full_3p_r2 = 'CAAACAAGGCTTTTCTCCAAGGGATATTTATAGTCTCAAAACACACAATTACTTTACAGTTAGGGTGAGTTTCCTTTTGTGCTGTTTTTTAAAATA'
g_keep_gzs = False # True only works for gzip 1.6+ (apparently not available on AWS linux)
Explanation: Dual CRISPR Screen Analysis
Step 1: Construct Scaffold Trimming
Amanda Birmingham, CCBB, UCSD ([email protected])
Instructions
To run this notebook reproducibly, follow these steps:
1. Click Kernel > Restart & Clear Output
2. When prompted, click the red Restart & clear all outputs button
3. Fill in the values for your analysis for each of the variables in the Input Parameters section
4. Click Cell > Run All
Input Parameters
End of explanation
import inspect
import ccbb_pyutils.analysis_run_prefixes as ns_runs
import ccbb_pyutils.files_and_paths as ns_files
import ccbb_pyutils.notebook_logging as ns_logs
def describe_var_list(input_var_name_list):
description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
return "".join(description_list)
ns_logs.set_stdout_info_logger()
g_fastqs_dir = ns_files.expand_path(g_fastqs_dir)
g_trimmed_fastqs_dir = ns_files.expand_path(ns_runs.check_or_set(g_trimmed_fastqs_dir, g_fastqs_dir))
print(describe_var_list(['g_fastqs_dir','g_trimmed_fastqs_dir']))
ns_files.verify_or_make_dir(g_trimmed_fastqs_dir)
Explanation: Automated Set-Up
End of explanation
import dual_crispr.scaffold_trim as trim
print(inspect.getsource(trim))
def trim_fw_and_rv_reads(output_dir, full_5p_r1, full_3p_r1, full_5p_r2, full_3p_r2, fw_fastq_fp, rv_fastq_fp):
trim.trim_linked_scaffold(output_dir, fw_fastq_fp, full_5p_r1, full_3p_r1)
trim.trim_linked_scaffold(output_dir, rv_fastq_fp, full_5p_r2, full_3p_r2)
Explanation: Scaffold Trimming Functions
End of explanation
g_seq_file_ext_name = ".fastq"
g_gzip_ext_name = ".gz"
print(ns_files.check_file_presence(g_fastqs_dir, "", "{0}{1}".format(g_seq_file_ext_name, g_gzip_ext_name),
all_subdirs=True, check_failure_msg=None, just_warn=True))
Explanation: Gzipped FASTQ Filenames
End of explanation
import ccbb_pyutils.files_and_paths as ns_files
def unzip_and_flatten_seq_files(top_fastqs_dir, ext_name, gzip_ext_name, keep_gzs):
# first, recursively unzip all fastq.gz files anywhere under the input dir
ns_files.gunzip_wildpath(top_fastqs_dir, ext_name + gzip_ext_name, keep_gzs, True) # True = do recursive
# now move all fastqs to top-level directory so don't have to work recursively in future
ns_files.move_to_dir_and_flatten(top_fastqs_dir, top_fastqs_dir, ext_name)
# False = don't keep gzs as well as expanding, True = do keep them (True only works for gzip 1.6+)
unzip_and_flatten_seq_files(g_fastqs_dir, g_seq_file_ext_name, g_gzip_ext_name, g_keep_gzs)
Explanation: FASTQ Gunzip Execution
End of explanation
print(ns_files.check_file_presence(g_fastqs_dir, "", g_seq_file_ext_name,
check_failure_msg="No fastq files to trim were detected."))
Explanation: FASTQ Filenames
End of explanation
import ccbb_pyutils.parallel_process_fastqs as ns_parallel
g_parallel_results = ns_parallel.parallel_process_paired_reads(g_fastqs_dir, g_seq_file_ext_name, g_num_processors,
trim_fw_and_rv_reads, [g_trimmed_fastqs_dir, g_full_5p_r1,
g_full_3p_r1, g_full_5p_r2, g_full_3p_r2])
print(ns_parallel.concatenate_parallel_results(g_parallel_results))
Explanation: Scaffold Trim Execution
End of explanation
print(ns_files.check_file_presence(g_trimmed_fastqs_dir, "", trim.get_trimmed_suffix(trim.TrimType.FIVE_THREE),
check_failure_msg="Scaffold trimming failed to produce trimmed file(s)."))
Explanation: Trimmed FASTQ Filenames
End of explanation |
12,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What are the ten most common movie names of all time?
Step1: Which three years of the 1930s saw the most films released?
Step2: Plot the number of films that have been released each decade over the history of cinema.
Step3: Plot the number of "Hamlet" films made each decade.
Step4: Plot the number of "Rustler" characters in each decade of the history of film.
Step5: Plot the number of "Hamlet" characters each decade.
Step6: What are the 11 most common character names in movie history?
Step7: Who are the 10 people most often credited as "Herself" in film history?
Step8: Who are the 10 people most often credited as "Himself" in film history?
Step9: Which actors or actresses appeared in the most movies in the year 1945?
Step10: Which actors or actresses appeared in the most movies in the year 1985?
Step11: Plot how many roles Mammootty has played in each year of his career.
Step12: What are the 10 most frequent roles that start with the phrase "Patron in"?
Step13: What are the 10 most frequent roles that start with the word "Science"?
Step14: Plot the n-values of the roles that Judi Dench has played over her career.
Step15: Plot the n-values of Cary Grant's roles through his career.
Plot the n-value of the roles that Sidney Poitier has acted over the years.
Step16: How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s?
Step17: How many supporting (n=2) roles were available to actors, and how many to actresses, in the 1950s? | Python Code:
titles['title'].value_counts()[:10]
Explanation: What are the ten most common movie names of all time?
End of explanation
titles[(titles['year']<1940)&(titles['year']>=1930)]['year'].value_counts()
Explanation: Which three years of the 1930s saw the most films released?
End of explanation
dec=((titles['year']//10)*10)
print(dec.max())
print(dec.min())
dec.hist(bins=(dec.max()-dec.min())/10+1)
Explanation: Plot the number of films that have been released each decade over the history of cinema.
End of explanation
hamdec=titles[titles['title']=="Hamlet"]
hamdec['year']=(hamdec['year']//10)*10
hamdec['year'].hist(bins=(hamdec['year'].max()-hamdec['year'].min())/10+1)
hamdec
Explanation: Plot the number of "Hamlet" films made each decade.
End of explanation
hamdec=cast[cast['character']=="Rustler"]
hamdec['year']=(hamdec['year']//10)*10
hamdec['year'].hist(bins=(hamdec['year'].max()-hamdec['year'].min())/10+1)
Explanation: Plot the number of "Rustler" characters in each decade of the history of film.
End of explanation
hamdec=cast[cast['character']=="Hamlet"]
hamdec['year']=(hamdec['year']//10)*10
hamdec['year'].hist(bins=(hamdec['year'].max()-hamdec['year'].min())/10+1)
Explanation: Plot the number of "Hamlet" characters each decade.
End of explanation
cast['character'].value_counts()[:11]
Explanation: What are the 11 most common character names in movie history?
End of explanation
cast[cast['character']=='Herself']['name'].value_counts()[:10]
Explanation: Who are the 10 people most often credited as "Herself" in film history?
End of explanation
cast[cast['character']=='Himself']['name'].value_counts()[:10]
Explanation: Who are the 10 people most often credited as "Himself" in film history?
End of explanation
cast[cast['year']==1945]['name'].value_counts()[:10]
Explanation: Which actors or actresses appeared in the most movies in the year 1945?
End of explanation
cast[cast['year']==1985]['name'].value_counts()[:10]
Explanation: Which actors or actresses appeared in the most movies in the year 1985?
End of explanation
cast[cast['name']=='Mammootty'].hist(column='year')
cast.hist?
Explanation: Plot how many roles Mammootty has played in each year of his career.
End of explanation
cast[cast['character'].str.startswith('Patron in')]['character'].value_counts()[:10]
Explanation: What are the 10 most frequent roles that start with the phrase "Patron in"?
End of explanation
cast[cast['character'].str.startswith('Science')]['character'].value_counts()[:10]
Explanation: What are the 10 most frequent roles that start with the word "Science"?
End of explanation
cast[cast['name']=='Judi Dench'].plot(kind='scatter',x='year',y='n')
Explanation: Plot the n-values of the roles that Judi Dench has played over her career.
End of explanation
cast[cast['name']=='Sidney Poitier'].plot(kind='scatter',x='year',y='n')
Explanation: Plot the n-values of Cary Grant's roles through his career.
Plot the n-value of the roles that Sidney Poitier has acted over the years.
End of explanation
cast[(cast['n']==1)&(cast['type']=='actor')&(cast['year']<1960)&(cast['year']>=1950)].shape[0]
cast[(cast['n']==1)&(cast['type']=='actress')&(cast['year']<1960)&(cast['year']>=1950)].shape[0]
Explanation: How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s?
End of explanation
cast[(cast['n']==2)&(cast['type']=='actor')&(cast['year']<1960)&(cast['year']>=1950)].shape[0]
cast[(cast['n']==2)&(cast['type']=='actress')&(cast['year']<1960)&(cast['year']>=1950)].shape[0]
Explanation: How many supporting (n=2) roles were available to actors, and how many to actresses, in the 1950s?
End of explanation |
12,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Exercises
Recreate the plots below using the titanic dataframe. There are very few hints since most of the plots can be done with just one or two lines of code and a hint would basically give away the solution. Keep careful attention to the x and y labels for hints.
Note! In order to not lose the plot image, make sure you don't code in the cell that is directly above the plot, there is an extra cell above that one which won't overwrite that plot! | Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('whitegrid')
titanic = sns.load_dataset('titanic')
titanic.head()
titanic.shape
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Seaborn Exercises
Time to practice your new seaborn skills! Try to recreate the plots below (don't worry about color schemes, just the plot itself.
The Data
We will be working with a famous titanic data set for these exercises. Later on in the Machine Learning section of the course, we will revisit this data, and use it to predict survival rates of passengers. For now, we'll just focus on the visualization of the data with seaborn:
End of explanation
sns.jointplot(x='fare', y='age', data=titanic)
sns.distplot(titanic['fare'], kde=False, hist=True)
sns.boxplot(x='class', y='age', data=titanic)
sns.swarmplot(x='class', y='age', data=titanic)
sns.countplot(x='sex', data=titanic)
sns.heatmap(titanic.corr())
import matplotlib.pyplot as plt
fg = sns.FacetGrid(data=titanic ,col='sex')
fg.map(plt.hist, 'age')
Explanation: Exercises
Recreate the plots below using the titanic dataframe. There are very few hints since most of the plots can be done with just one or two lines of code and a hint would basically give away the solution. Keep careful attention to the x and y labels for hints.
Note! In order to not lose the plot image, make sure you don't code in the cell that is directly above the plot, there is an extra cell above that one which won't overwrite that plot!
End of explanation |
12,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework Part 1
Step1: that is a cue to use a particular command, in this case, plot. Run the cell to see documentation for that command. (To quickly close the Help window, press q.)
For more documentation, visit the links in the Help menu above.
Goals
In this exercise, you will segment, feature extract, and analyze audio files.
Detect onsets in an audio signal.
Segment the audio signal at each onset.
Compute features for each segment.
Gain intuition into the features by listening to each segment separately.
Step 1
Step2: Make sure the download worked
Step3: Save the audio signal into an array.
Step4: Show the sample rate
Step5: Listen to the audio signal.
Step6: Display the audio signal.
Step7: Compute the short-time Fourier transform
Step8: For display purposes, compute the log amplitude of the STFT
Step9: Display the spectrogram.
Step10: Step 2
Step11: Convert the onset times into sample indices.
Step12: Play a "beep" at each onset.
Step13: Step 3
Step14: Here is a function that adds 300 ms of silence onto the end of each segment and concatenates them into one signal.
Later, we will use this function to listen to each segment, perhaps sorted in a different order.
Step15: Listen to the newly concatenated signal.
Step16: Step 4
Step17: Use argsort to find an index array, ind, such that segments[ind] is sorted by zero crossing rate.
Step18: Sort the segments by zero crossing rate, and concatenate the sorted segments.
Step19: Step 5
Step20: More Exercises
Repeat the steps above for the following audio files | Python Code:
plt.plot?
Explanation: Homework Part 1: Understanding Audio Features through Sonification
There is no written component to be submitted for this part, Part 1. This section is intended to acquaint you with Python, the IPython notebook, and librosa.
When you see a cell that looks like this:
End of explanation
filename = 'simple_loop.wav'
url = 'http://audio.musicinformationretrieval.com/' + filename
urllib.urlretrieve?
Explanation: that is a cue to use a particular command, in this case, plot. Run the cell to see documentation for that command. (To quickly close the Help window, press q.)
For more documentation, visit the links in the Help menu above.
Goals
In this exercise, you will segment, feature extract, and analyze audio files.
Detect onsets in an audio signal.
Segment the audio signal at each onset.
Compute features for each segment.
Gain intuition into the features by listening to each segment separately.
Step 1: Retrieve Audio
Download the file simple_loop.wav onto your local machine.
End of explanation
%ls *.wav
Explanation: Make sure the download worked:
End of explanation
librosa.load?
Explanation: Save the audio signal into an array.
End of explanation
print fs
Explanation: Show the sample rate:
End of explanation
IPython.display.Audio?
Explanation: Listen to the audio signal.
End of explanation
librosa.display.waveplot?
Explanation: Display the audio signal.
End of explanation
librosa.stft?
Explanation: Compute the short-time Fourier transform:
End of explanation
librosa.logamplitude?
Explanation: For display purposes, compute the log amplitude of the STFT:
End of explanation
# Play with the parameters, including x_axis and y_axis
librosa.display.specshow?
Explanation: Display the spectrogram.
End of explanation
librosa.onset.onset_detect?
librosa.frames_to_time?
Explanation: Step 2: Detect Onsets
Find the times, in seconds, when onsets occur in the audio signal.
End of explanation
librosa.frames_to_samples?
Explanation: Convert the onset times into sample indices.
End of explanation
# Use the `length` parameter so the click track is the same length as the original signal
librosa.clicks?
# Play the click track "added to" the original signal
IPython.display.Audio?
Explanation: Play a "beep" at each onset.
End of explanation
# Assuming these variables exist:
# x: array containing the audio signal
# fs: corresponding sampling frequency
# onset_samples: array of onsets in units of samples
frame_sz = int(0.100*fs)
segments = numpy.array([x[i:i+frame_sz] for i in onset_samples])
Explanation: Step 3: Segment the Audio
Save into an array, segments, 100-ms segments beginning at each onset.
End of explanation
def concatenate_segments(segments, fs=44100, pad_time=0.300):
padded_segments = [numpy.concatenate([segment, numpy.zeros(int(pad_time*fs))]) for segment in segments]
return numpy.concatenate(padded_segments)
concatenated_signal = concatenate_segments(segments, fs)
Explanation: Here is a function that adds 300 ms of silence onto the end of each segment and concatenates them into one signal.
Later, we will use this function to listen to each segment, perhaps sorted in a different order.
End of explanation
IPython.display.Audio?
Explanation: Listen to the newly concatenated signal.
End of explanation
# returns a boolean array of zero crossing locations, not a total count
librosa.core.zero_crossings?
# you'll need this to actually count the number of zero crossings per segment
sum?
Explanation: Step 4: Extract Features
For each segment, compute the zero crossing rate.
End of explanation
# zcrs: array, number of zero crossings in each frame
ind = numpy.argsort(zcrs)
print ind
Explanation: Use argsort to find an index array, ind, such that segments[ind] is sorted by zero crossing rate.
End of explanation
concatenated_signal = concatenate_segments(segments[ind], fs)
Explanation: Sort the segments by zero crossing rate, and concatenate the sorted segments.
End of explanation
IPython.display.Audio?
Explanation: Step 5: Listen to Segments
Listen to the sorted segments. What do you hear?
End of explanation
#url = 'http://audio.musicinformationretrieval.com/125_bounce.wav'
#url = 'http://audio.musicinformationretrieval.com/conga_groove.wav'
#url = 'http://audio.musicinformationretrieval.com/58bpm.wav'
Explanation: More Exercises
Repeat the steps above for the following audio files:
End of explanation |
12,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature engineering on NCAA data
Domain knowledge is critical to getting the best out of data analysis and machine learning.
In the case of basketball, Dean Oliver identified four factors that are critical to success
Step1: Let's remove the entries corresponding to teams that played fewer than 100 games, and then plot it.
Step2: Does the relationship make sense? Do you think offensive and defensive efficiency are good predictors of a team's performance?
Turnover Percentage
Turnover percentage is measured as
Step3: Rebounding
Again, we'd have to measure both sides, but for simplicity, we'll do only the offensive rebounds.
$ORB / (ORB + Opp DRB)$
Step4: The relationship doesn't seem all that strong here. One way to measure the strength of the relationship is through the correlation. Numbers near 0 mean not correlated and numbers near +/- 1 indicate high correlation
Step5: The correlation between rebounding and win_rate is 0.38. Compare that to the first data frame
Step6: Notice that the offensive and opponents efficiency have correlation of 0.67 and -0.66, which are higher.
Step7: Free throw factor
This is a measure of both how often a team gets to the line and how often they make them
Step8: Machine Learning
Let's use these factors to create a simple ML model
Step9: 87% isn't bad, but ... there is a huge problem with the above approach.
How are we supposed to know Team A's free throw shooting percentage against Team B before the game is played?
What we could do is to get the free throw shooting percentage of Team A in the 3 games prior to this one and use that. This requires analytic functions in SQL. If you are not familar with these, make a copy of the select statement and modify it in stages until you grasp what is happening.
Step10: Based on just the teams' performance coming in, we can predict the outcome of games with a 69.4% accuracy.
More complex ML model
We can write a more complex ML model using Keras and a deep neural network.
The code is not that hard but you'll have to do a lot more work (scaling, hyperparameter tuning)
to get better performance than you did with the BigQuery ML model.
Step11: With a deep neural network, we are able to get 71.5% accuracy using the four factors model. | Python Code:
%%bigquery df1
SELECT
team_code,
AVG(SAFE_DIVIDE(fgm + 0.5 * fgm3,fga)) AS offensive_shooting_efficiency,
AVG(SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga)) AS opponents_shooting_efficiency,
AVG(win) AS win_rate,
COUNT(win) AS num_games
FROM lab_dev.team_box
WHERE fga IS NOT NULL
GROUP BY team_code
Explanation: Feature engineering on NCAA data
Domain knowledge is critical to getting the best out of data analysis and machine learning.
In the case of basketball, Dean Oliver identified four factors that are critical to success:
* Shooting
* Turnovers
* Rebounding
* Free Throws
Of course, it is not enough to identify factors, you need a way to measure them.
Read this article about the four factors and how they are measured. In this notebook, we will compute them from the box score data. The numbers are slightly different from that of the article because the article is about the NBA, but these numbers are Dean Oliver's variants for NCAA games.
Shooting efficiency
Shooting is measured as the fraction of field goal attempts made, weighting 3 points higher:
$(FG + 0.5 * 3P) / FGA$
Let's compute the offensive and defensive shooting efficiency and see how correlated they are to winning teams.
See %%bigquery documentation for how to use it.
End of explanation
df1 = df1[df1['num_games'] > 100]
df1.plot(x='offensive_shooting_efficiency', y='win_rate', style='o');
df1.plot(x='opponents_shooting_efficiency', y='win_rate', style='o');
Explanation: Let's remove the entries corresponding to teams that played fewer than 100 games, and then plot it.
End of explanation
%%bigquery df2
SELECT
team_code,
AVG(SAFE_DIVIDE(tov,fga+0.475*fta+tov-oreb)) AS turnover_percent,
AVG(win) AS win_rate,
COUNT(win) AS num_games
FROM lab_dev.team_box
WHERE fga IS NOT NULL
GROUP BY team_code
HAVING num_games > 100
df2.plot(x='turnover_percent', y='win_rate', style='o');
Explanation: Does the relationship make sense? Do you think offensive and defensive efficiency are good predictors of a team's performance?
Turnover Percentage
Turnover percentage is measured as:
$TOV / (FGA + 0.475 * FTA + TOV - OREB)$
As before, let's compute this, and see whether it is a good predictor. For simplicity, we will compute only offensive turnover percentage, although we should really compute both sides as we did for scoring efficiency.
End of explanation
%%bigquery df3
SELECT
team_code,
AVG(SAFE_DIVIDE(oreb,oreb + opp_dreb)) AS rebounding,
AVG(win) AS win_rate,
COUNT(win) AS num_games
FROM lab_dev.team_box
WHERE fga IS NOT NULL
GROUP BY team_code
HAVING num_games > 100
df3.plot(x='rebounding', y='win_rate', style='o');
Explanation: Rebounding
Again, we'd have to measure both sides, but for simplicity, we'll do only the offensive rebounds.
$ORB / (ORB + Opp DRB)$
End of explanation
df3.corr()['win_rate']
Explanation: The relationship doesn't seem all that strong here. One way to measure the strength of the relationship is through the correlation. Numbers near 0 mean not correlated and numbers near +/- 1 indicate high correlation:
End of explanation
df1.corr()['win_rate']
Explanation: The correlation between rebounding and win_rate is 0.38. Compare that to the first data frame:
End of explanation
df2.corr()['win_rate']
Explanation: Notice that the offensive and opponents efficiency have correlation of 0.67 and -0.66, which are higher.
End of explanation
%%bigquery df3
SELECT
team_code,
AVG(SAFE_DIVIDE(ftm,fga)) AS freethrows,
AVG(win) AS win_rate,
COUNT(win) AS num_games
FROM lab_dev.team_box
WHERE fga IS NOT NULL
GROUP BY team_code
HAVING num_games > 100
df3.plot(x='freethrows', y='win_rate', style='o');
df3.corr()['win_rate']
Explanation: Free throw factor
This is a measure of both how often a team gets to the line and how often they make them:
$FT / FGA$
End of explanation
%%bigquery
SELECT
team_code,
is_home,
SAFE_DIVIDE(fgm + 0.5 * fgm3,fga) AS offensive_shooting_efficiency,
SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga) AS opponents_shooting_efficiency,
SAFE_DIVIDE(tov,fga+0.475*fta+tov-oreb) AS turnover_percent,
SAFE_DIVIDE(opp_tov,opp_fga+0.475*opp_fta+opp_tov-opp_oreb) AS opponents_turnover_percent,
SAFE_DIVIDE(oreb,oreb + opp_dreb) AS rebounding,
SAFE_DIVIDE(opp_oreb,opp_oreb + dreb) AS opponents_rebounding,
SAFE_DIVIDE(ftm,fga) AS freethrows,
SAFE_DIVIDE(opp_ftm,opp_fga) AS opponents_freethrows,
win
FROM lab_dev.team_box
WHERE fga IS NOT NULL and win IS NOT NULL
LIMIT 10
%%bigquery
CREATE OR REPLACE MODEL lab_dev.four_factors_model
OPTIONS(model_type='logistic_reg', input_label_cols=['win'])
AS
SELECT
team_code,
is_home,
SAFE_DIVIDE(fgm + 0.5 * fgm3,fga) AS offensive_shooting_efficiency,
SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga) AS opponents_shooting_efficiency,
SAFE_DIVIDE(tov,fga+0.475*fta+tov-oreb) AS turnover_percent,
SAFE_DIVIDE(opp_tov,opp_fga+0.475*opp_fta+opp_tov-opp_oreb) AS opponents_turnover_percent,
SAFE_DIVIDE(oreb,oreb + opp_dreb) AS rebounding,
SAFE_DIVIDE(opp_oreb,opp_oreb + dreb) AS opponents_rebounding,
SAFE_DIVIDE(ftm,fga) AS freethrows,
SAFE_DIVIDE(opp_ftm,opp_fga) AS opponents_freethrows,
win
FROM lab_dev.team_box
WHERE fga IS NOT NULL and win IS NOT NULL
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL lab_dev.four_factors_model)
Explanation: Machine Learning
Let's use these factors to create a simple ML model
End of explanation
%%bigquery
CREATE OR REPLACE MODEL lab_dev.four_factors_model
OPTIONS(model_type='logistic_reg', input_label_cols=['win'])
AS
WITH all_games AS (
SELECT
game_date,
team_code,
is_home,
SAFE_DIVIDE(fgm + 0.5 * fgm3,fga) AS offensive_shooting_efficiency,
SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga) AS opponents_shooting_efficiency,
SAFE_DIVIDE(tov,fga+0.475*fta+tov-oreb) AS turnover_percent,
SAFE_DIVIDE(opp_tov,opp_fga+0.475*opp_fta+opp_tov-opp_oreb) AS opponents_turnover_percent,
SAFE_DIVIDE(oreb,oreb + opp_dreb) AS rebounding,
SAFE_DIVIDE(opp_oreb,opp_oreb + dreb) AS opponents_rebounding,
SAFE_DIVIDE(ftm,fga) AS freethrows,
SAFE_DIVIDE(opp_ftm,opp_fga) AS opponents_freethrows,
win
FROM lab_dev.team_box
WHERE fga IS NOT NULL and win IS NOT NULL
)
, prevgames AS (
SELECT
is_home,
AVG(offensive_shooting_efficiency)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS offensive_shooting_efficiency,
AVG(opponents_shooting_efficiency)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING)AS opponents_shooting_efficiency,
AVG(turnover_percent)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS turnover_percent,
AVG(opponents_turnover_percent)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS opponents_turnover_percent,
AVG(rebounding)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS rebounding,
AVG(opponents_rebounding)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS opponents_rebounding,
AVG(freethrows)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS freethrows,
AVG(opponents_freethrows)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS oppponents_freethrows,
win
FROM all_games
)
SELECT * FROM prevgames
WHERE offensive_shooting_efficiency IS NOT NULL
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL lab_dev.four_factors_model)
Explanation: 87% isn't bad, but ... there is a huge problem with the above approach.
How are we supposed to know Team A's free throw shooting percentage against Team B before the game is played?
What we could do is to get the free throw shooting percentage of Team A in the 3 games prior to this one and use that. This requires analytic functions in SQL. If you are not familar with these, make a copy of the select statement and modify it in stages until you grasp what is happening.
End of explanation
%%bigquery games
WITH all_games AS (
SELECT
game_date,
team_code,
is_home,
SAFE_DIVIDE(fgm + 0.5 * fgm3,fga) AS offensive_shooting_efficiency,
SAFE_DIVIDE(opp_fgm + 0.5 * opp_fgm3,opp_fga) AS opponents_shooting_efficiency,
SAFE_DIVIDE(tov,fga+0.475*fta+tov-oreb) AS turnover_percent,
SAFE_DIVIDE(opp_tov,opp_fga+0.475*opp_fta+opp_tov-opp_oreb) AS opponents_turnover_percent,
SAFE_DIVIDE(oreb,oreb + opp_dreb) AS rebounding,
SAFE_DIVIDE(opp_oreb,opp_oreb + dreb) AS opponents_rebounding,
SAFE_DIVIDE(ftm,fga) AS freethrows,
SAFE_DIVIDE(opp_ftm,opp_fga) AS opponents_freethrows,
win
FROM lab_dev.team_box
WHERE fga IS NOT NULL and win IS NOT NULL
)
, prevgames AS (
SELECT
is_home,
AVG(offensive_shooting_efficiency)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS offensive_shooting_efficiency,
AVG(opponents_shooting_efficiency)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING)AS opponents_shooting_efficiency,
AVG(turnover_percent)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS turnover_percent,
AVG(opponents_turnover_percent)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS opponents_turnover_percent,
AVG(rebounding)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS rebounding,
AVG(opponents_rebounding)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS opponents_rebounding,
AVG(freethrows)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS freethrows,
AVG(opponents_freethrows)
OVER(PARTITION BY team_code ORDER BY game_date ASC ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING) AS oppponents_freethrows,
win
FROM all_games
)
SELECT * FROM prevgames
WHERE offensive_shooting_efficiency IS NOT NULL
import tensorflow as tf
import tensorflow.keras as keras
nrows = len(games)
ncols = len(games.iloc[0])
ntrain = (nrows * 7) // 10
print(nrows, ncols, ntrain)
# 0:ntrain are the training data; remaining rows are testing
# last col is the label
train_x = games.iloc[:ntrain, 0:(ncols-1)]
train_y = games.iloc[:ntrain, ncols-1]
test_x = games.iloc[ntrain:, 0:(ncols-1)]
test_y = games.iloc[ntrain:, ncols-1]
model = keras.models.Sequential()
model.add(keras.layers.Dense(5, input_dim=ncols-1, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(train_x, train_y, epochs=5, batch_size=32)
score = model.evaluate(test_x, test_y, batch_size=512)
print(score)
Explanation: Based on just the teams' performance coming in, we can predict the outcome of games with a 69.4% accuracy.
More complex ML model
We can write a more complex ML model using Keras and a deep neural network.
The code is not that hard but you'll have to do a lot more work (scaling, hyperparameter tuning)
to get better performance than you did with the BigQuery ML model.
End of explanation
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: With a deep neural network, we are able to get 71.5% accuracy using the four factors model.
End of explanation |
12,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1):
Compute the 2d quantum well wave function.
return (2.0/L*np.sin((nx*np.pi*x)/L)*np.sin((ny*np.pi*y)/L))
?np.zeros
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
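As a quick sanity check of the formula (a sketch, not part of the original exercise), the squared wavefunction should integrate to 1 over the well:
# Hedged check: numerically integrate |psi|^2 over the unit well; the result should be close to 1.
xs = np.linspace(0, 1, 201)
ys = np.linspace(0, 1, 201)
X, Y = np.meshgrid(xs, ys)
psi2 = well2d(X, Y, 3, 2, 1) ** 2
norm = np.trapz(np.trapz(psi2, ys, axis=0), xs)   # integrate over y, then over x
print(norm)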
nx,ny,L=(3,2,1)
xlist = np.linspace(0,1,1000)
ylist = np.linspace(0,1,1000)
x,y = np.meshgrid(xlist,ylist)
z = well2d(x,y,nx,ny,L)
my_cmap = matplotlib.cm.get_cmap('BuPu')
plt.contour(x, y, z, cmap=my_cmap)
plt.colorbar()
plt.show()
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to your visualization.
First make a plot using one of the contour functions:
End of explanation
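A filled-contour variant of the same figure (a sketch; contourf is one of the alternatives listed above, reusing the x, y, z and my_cmap defined earlier):
# Hedged alternative: filled contours with axis labels on the same grid.
plt.contourf(x, y, z, 20, cmap=my_cmap)
plt.colorbar()
plt.xlabel('x')
plt.ylabel('y')
plt.show()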
plt.pcolormesh(x,y,z)
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
12,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Big Data Applications and Analytics
Step1: Explore data frames to check headers
Look at columns headers, variable information, type, etc.
Step2: Explore data frames to check headers and data types
AGECAT 57146 non-null int64
SEX 57146 non-null int64
MARRIED 57146 non-null float64
EDUCAT 57146 non-null int64
EMPLOY18 57146 non-null float64
CTYMETRO 57146 non-null int64
HEALTH 57146 non-null float64
MENTHLTH 57146 non-null float64
SUICATT 57146 non-null float64
PRLMISEVR 57146 non-null int64
PRLMISAB 57146 non-null float64
PRLANY 57146 non-null int64
HEROINEVR 57146 non-null int64
HEROINUSE 57146 non-null int64
HEROINFQY 57146 non-null float64
TRQLZRS 57146 non-null int64
SEDATVS 57146 non-null int64
COCAINE 57146 non-null int64
AMPHETMN 57146 non-null int64
TRTMENT 57146 non-null float64
MHTRTMT 57146 non-null float64
First plot
Step3: Check how PRLMISAB affects HEROINUSE, controlling for CTYMETRO.
No real hypothesis here, just to show how we can do this.
Code for race
Step4: Third Plot
Step5: Fourth Plot | Python Code:
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('~/project-data.csv')
df.drop(df.columns[[0,1]], axis=1, inplace=True)
df.shape
Explanation: Big Data Applications and Analytics: Term Project
Sean M. Shiverick Fall 2017
Data Visualization
Resources:
'Python for Data Analysis' by Wes McKinney: https://github.com/wesm/pydata-book
'Python Data Science Handbook': https://jakevdp.github.io/PythonDataScienceHandbook/
Dataset: 2015 NSDUH
1. Import modules and Load the data
Import python modules
load data file and save as DataFrame object
Subset dataframe by column
End of explanation
df.columns
df.info()
Explanation: Explore data frames to check headers
Look at columns headers, variable information, type, etc.
End of explanation
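A couple of extra exploration steps (a sketch, not in the original notebook) that often help at this stage:
# Hedged sketch: summary statistics and a missing-value check for the loaded DataFrame.
print(df.describe())
print(df.isnull().sum())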
sns.set(style='ticks')
sns.lmplot(y='PRLMISAB',x='HEROINUSE',data=df)
Explanation: Explore data frames to check headers and data types
AGECAT 57146 non-null int64
SEX 57146 non-null int64
MARRIED 57146 non-null float64
EDUCAT 57146 non-null int64
EMPLOY18 57146 non-null float64
CTYMETRO 57146 non-null int64
HEALTH 57146 non-null float64
MENTHLTH 57146 non-null float64
SUICATT 57146 non-null float64
PRLMISEVR 57146 non-null int64
PRLMISAB 57146 non-null float64
PRLANY 57146 non-null int64
HEROINEVR 57146 non-null int64
HEROINUSE 57146 non-null int64
HEROINFQY 57146 non-null float64
TRQLZRS 57146 non-null int64
SEDATVS 57146 non-null int64
COCAINE 57146 non-null int64
AMPHETMN 57146 non-null int64
TRTMENT 57146 non-null float64
MHTRTMT 57146 non-null float64
First plot: scatterplot with linear correlation
Compare Y == PRLMISAB and X == HEROINUSE.
Pass PRLMISAB as the Y variable and HEROINUSE as the X variable to seaborn's lmplot (linear model plot).
It plots points, axes, and a regression line, and also shades an error band around the fit. Super handy!
End of explanation
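To put a number on the linear relationship shown above (a sketch using the same DataFrame):
# Hedged sketch: Pearson correlation between the two plotted variables.
print(df[['PRLMISAB', 'HEROINUSE']].corr())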
sns.lmplot(y='PRLMISAB',x='HEROINUSE',hue='CTYMETRO',data=df)
p = sns.lmplot(y='PRLMISAB',x='HEROINUSE',hue='CTYMETRO',data=df)
p.savefig('fancy-regression-chart.png')
Explanation: Check how PRLMISAB affects HEROINUSE, controlling for CTYMETRO.
No real hypothesis here, just to show how we can do this.
Code for race: 1=white, 2=black, 3=other
Use command below to save this plot
End of explanation
sns.factorplot(x='HEROINEVR', hue='PRLMISEVR',col='SEX',kind='count',data=df)
Explanation: Third Plot: Factorplot
Compare the interaction of HEROINEVR, PRLMISEVR, and SEX using bar charts.
End of explanation
# Available columns in df:
# 'AGECAT', 'SEX', 'MARRIED', 'EDUCAT', 'EMPLOY18', 'CTYMETRO', 'HEALTH',
# 'MENTHLTH', 'SUICATT', 'PRLMISEVR', 'PRLMISAB', 'PRLANY', 'HEROINEVR',
# 'HEROINUSE', 'HEROINFQY', 'TRQLZRS', 'SEDATVS', 'COCAINE', 'AMPHETMN',
# 'TRTMENT', 'MHTRTMT'
df1 = df[['MENTHLTH','PRLMISAB','HEROINUSE','CTYMETRO']]
sns.pairplot(df1, hue = 'CTYMETRO',size=2.5);
plt.savefig('Figure3.png', bbox_inches='tight')
df1 = df[['AGECAT','SEX','PRLMISAB','HEROINUSE']]
sns.pairplot(df1, hue = 'SEX',size=2.5);
Explanation: Fourth Plot: Pairplots
To understand the distribution of each variable, and
also to plot it against all other variables to understand their relationships.
The grid can be colored by different values of a chosen 'hue' variable.
End of explanation |
12,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Profiling
Step2: timeit module
Step3: cProfile
Step4: Use pstats.Stats to parse and print cProfile output
You can sort the records
Step5: Networking
Standard library provides some modules for network operation
Step6: compare it with using urllib2
https | Python Code:
# EXERCISE:
# Execute the following command:
!python -m timeit '"-".join([str(n) for n in range(100)])'
# Now execute the following:
!python -m timeit '"-".join(map(str, range(100)))'
# Now execute:
!python -m timeit --setup 'func = lambda n: "-".join(map(str, range(n)))' 'func(100)'
# And finally:
!python -m timeit --setup 'func = lambda n: "-".join(map(str, xrange(n)))' 'func(100)'
Explanation: Profiling
End of explanation
import timeit
print timeit.timeit(stmt='func(100)', setup='func = lambda n: "-".join(map(str, xrange(n)))', number=10000)
def fibonacci(n):
    """Return the nth fibonacci number"""
if n < 2:
return n
return fibonacci(n - 1) + fibonacci(n - 2)
def fib_15():
return fibonacci(15)
print timeit.timeit(stmt=fib_15, number=100)
# Actually, a Timer class is provided inside timeit module
t = timeit.Timer(stmt=fib_15)
print t.repeat(repeat=3, number=100)
# EXERCISE:
# Execute the following command:
!python -m cProfile fib_fac.py
# Now execute the following:
!python -m cProfile -s time fib_fac.py
# Now execute:
!python -m cProfile -s cumulative fib_fac.py
# And finally:
!python -m cProfile -s calls fib_fac.py
Explanation: timeit module:
Provides a simple way to time the execution of Python statements.
Provides both command line and programmatic interfaces.
End of explanation
import cProfile
import pstats
filename = "cprofile_fib_fac.log"
max_num_lines = 3
# Note that in normal execution the import is not needed inside the statement string (incompatibility with pydemo)
cProfile.run(statement="from fib_fac import fib_fac; fib_fac()", filename=filename)
stats = pstats.Stats(filename)
stats.strip_dirs().sort_stats('time').print_stats(max_num_lines)
stats.strip_dirs().sort_stats('cumulative').print_stats(max_num_lines)
stats.strip_dirs().sort_stats('calls').print_stats(max_num_lines)
Explanation: cProfile:
Deterministic profiling of Python programs.
C extension with reasonable overhead.
Provides both command line and programmatic interfaces.
There is a pure Python alternative module with the same interface: profile
End of explanation
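The profiler can also be pointed at the fibonacci function defined earlier without a separate script (a sketch; fib_fac.py itself is not shown in this notebook):
# Hedged sketch: profile the in-notebook fibonacci() and sort by cumulative time.
cProfile.run('fibonacci(20)', sort='cumulative')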
# Exercise: which option is better
def opc1():
fruits = tuple(str(i) for i in xrange(100))
out = ''
for fruit in fruits:
out += fruit +':'
return out
def opc2():
format_str = '%s:' * 100
fruits = tuple(str(i) for i in xrange(100))
out = format_str % fruits
return out
def opc3():
format_str = '{}:' * 100
fruits = tuple(str(i) for i in xrange(100))
out = format_str.format(*fruits)
return out
def opc4():
fruits = tuple(str(i) for i in xrange(100))
out = ':'.join(fruits)
return out
Explanation: Use pstats.Stats to parse and print cProfile output
You can sort the records:
time: internal time spent in the function itself (excluding sub-calls)
cumulative: accumulated time of the function including all its sub-calls
calls: number of times a function was called
Others: http://docs.python.org/2/library/profile.html#pstats.Stats.sort_stats
End of explanation
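One way to answer the "which option is better" exercise above (a sketch, using the timeit module imported earlier):
# Hedged sketch: time each of the four string-building options.
for func in (opc1, opc2, opc3, opc4):
    print func.__name__, timeit.timeit(func, number=1000)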
import socket
# In addition to typical socket class, some useful functions are provided
print socket.gethostname()
print socket.getfqdn()
print socket.gethostbyname(socket.getfqdn())
#Let's see how to perform HTTP requests
import requests # Requests is much better than any other standard library alternative
location = "41.41,2.22"
key = "5nrhptjvus6gdnf9e6x75as9"
num_days = 3
url_pattern = "http://api.worldweatheronline.com/free/v1/weather.ashx?q={loc}&format=json&num_of_days={days}&key={key}"
r = requests.get(url=url_pattern.format(loc=location, days=num_days, key=key),
headers={'content-type': 'application/json'}) # It supports all HTTP methods, auth, proxies, post multipart...
# Let's check the response
print r.status_code
print r.encoding
print r.text
# And of course it parses the JSON
print type(r.json()) # Uses simplejson or std lib json
from pprint import pprint
pprint(r.json()["data"]["current_condition"][0])
Explanation: Networking
Standard library provides some modules for network operation:
socket: provides access to the low-level C BSD socket interface, includes
a 'socket' class and some useful functions
urllib2: a library to perform HTTP requests (get, post, multipart...)
httplib: client side libraries of HTTP and HTTPS protocols, used by urllib2
urlparse: library with functions to parse URLs
Note that in Py3k urlparse, urllib and urllib2 have been merged in package urllib
End of explanation
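For comparison with requests (a sketch; urllib2 is the standard-library alternative mentioned above, reusing url_pattern, location and num_days from the earlier cell):
# Hedged sketch: the same GET performed with urllib2.
import urllib2
response = urllib2.urlopen(url_pattern.format(loc=location, days=num_days, key=key))
print response.getcode()
print response.read()[:200]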
# Implement a connection pool with requests
requestsSession = requests.session()
httpAdapter = requests.adapters.HTTPAdapter(pool_connections=10,
pool_maxsize=15)
requestsSession.mount('http://', httpAdapter)
requestsSession.get(url=url_pattern.format(loc=location, days=num_days, key=key),
headers={'content-type': 'application/json'})
Explanation: compare it with using urllib2
https://gist.github.com/kennethreitz/973705
For low level socket operations use 'socket'
Use 'requests' always if possible for HTTP operation
Use 'urllib2' or 'httplib' as a fallback for special behaviour
End of explanation |
12,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CNN transfer learning - Keras+TensorFlow
This notebook builds CNN models transferred from a pretrained model, using Keras with the TensorFlow backend. First, some preparation work.
Step1: Read the MNIST data. Notice that we assume that it's 'kaggle-DigitRecognizer/data/train.csv', and we use helper function to read into a dictionary.
Step2: Freeze-weights transfer
We will use the ResNet50 provided in Keras. In this section, the pretrained model is entirely frozen, a new output layer is attached to it, and only this output layer is trained.
Step3: Fine-tune transfer
In this section, the model is the same as before, but all weights are trained along with the final layer using a smaller learning rate.
Step4: Fine-tune transfer with early stopping
Based on the previous section, the test set is used as the validation set, so as to monitor for early stopping.
Step5: Create submissions
Load the saved trained models and produce predictions for submission on Kaggle. | Python Code:
from keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten, Activation, add, Lambda
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import GlobalAveragePooling2D
from keras.optimizers import RMSprop
from keras.backend import tf as ktf
from keras.models import Model, Sequential, load_model
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.applications.resnet50 import ResNet50
from lib.data_utils import get_MNIST_data
Explanation: CNN transfer learning - Keras+TensorFlow
This notebook builds CNN models transferred from a pretrained model, using Keras with the TensorFlow backend. First, some preparation work.
End of explanation
data = get_MNIST_data(num_validation=0, fit=True)
# see if we get the data correctly
print('image size: ', data['X_train'].shape)
Explanation: Read the MNIST data. Notice that we assume that it's 'kaggle-DigitRecognizer/data/train.csv', and we use helper function to read into a dictionary.
End of explanation
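A quick sanity check of the loaded split (a sketch; the exact value range depends on how get_MNIST_data preprocesses with fit=True):
# Hedged sketch: shapes, pixel range, and label values of the prepared data.
import numpy as np
print('train:', data['X_train'].shape, data['y_train'].shape)
print('test: ', data['X_test'].shape, data['y_test'].shape)
print('pixel range:', data['X_train'].min(), data['X_train'].max())
print('labels:', np.unique(data['y_train']))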
# build the model
# preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input
inputs = Input(shape=(28,28,3))
inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer
resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg')
x = resnet50.output
#x = Dense(units=1024, activation='relu')(x)
predictions = Dense(units=10, activation='softmax')(x)
# connect the model
freezemodel = Model(inputs=inputs, outputs=predictions)
#freezemodel.summary()
# freeze all ResNet50 layers
for layer in resnet50.layers:
layer.trainable = False
# set the loss and optimizer
freezemodel.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/freezeResNet_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
freezemodel.fit(data['X_train'], data['y_train'].reshape(-1,1),
batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=1)
# test the model and see accuracy
score = freezemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=32)
print(score)
# save the model: 0.96
freezemodel.save('ResNet50_freeze.h5')
# continue the model training
freezemodel = load_model('../models/ResNet50_freeze.h5', custom_objects={'ktf': ktf})
# set the loss and optimizer
rmsprop = RMSprop(lr=0.0001)
freezemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/freezeResNet_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
freezemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=4)
Explanation: Freeze-weights transfer
We will use the ResNet50 provided in Keras. In this section, the pretrained model is entirely frozen, a new output layer is attached to it, and only this output layer is trained.
End of explanation
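To confirm that only the new classification head is trainable (a sketch operating on the freezemodel built above):
# Hedged check: count trainable vs. frozen parameters after freezing the ResNet50 layers.
import keras.backend as K
trainable = sum(K.count_params(w) for w in freezemodel.trainable_weights)
frozen = sum(K.count_params(w) for w in freezemodel.non_trainable_weights)
print('trainable params:', trainable)
print('frozen params:   ', frozen)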
# build the model
# preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input
inputs = Input(shape=(28,28,3))
inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer
resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg')
x = resnet50.output
#x = Dense(units=1024, activation='relu')(x)
predictions = Dense(units=10, activation='softmax')(x)
# connect the model
tunemodel = Model(inputs=inputs, outputs=predictions)
#freezemodel.summary()
# set the loss and optimizer
rmsprop = RMSprop(lr=0.0001)
tunemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/tuneResNet_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
tunemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
batch_size=16, epochs=10, callbacks=[checkpoint], initial_epoch=0)
# test the model and see accuracy
score = tunemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=32)
print(score)
Explanation: Fine-tune transfer
In this section, the model is the same as before, but all weights are trained along with the final layer using a smaller learning rate.
End of explanation
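A common variation worth knowing (a sketch of an alternative, not what the cell above does) is to unfreeze only the last residual block and keep earlier layers frozen:
# Hedged alternative sketch: partial unfreezing; the 20-layer cutoff is illustrative only.
for layer in resnet50.layers:
    layer.trainable = False
for layer in resnet50.layers[-20:]:
    layer.trainable = True
tunemodel.compile(optimizer=RMSprop(lr=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])  # recompile so the flags take effect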
# build the model
# preprocess to (28,28,3), then build a resize layer using tf.resize_images() to (224,224,3) as input
inputs = Input(shape=(28,28,3))
inputs_resize = Lambda(lambda img: ktf.image.resize_images(img, (224,224)))(inputs) # resize layer
resnet50 = ResNet50(include_top=False, input_tensor=inputs_resize, input_shape=(224,224,3), pooling='avg')
x = resnet50.output
predictions = Dense(units=10, activation='softmax')(x)
# connect the model
tunemodel = Model(inputs=inputs, outputs=predictions)
# set the loss and optimizer
rmsprop = RMSprop(lr=0.0001)
tunemodel.compile(optimizer=rmsprop, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit the model
checkpoint = ModelCheckpoint('../models/tuneResNet_early_{epoch:02d}-{loss:.2f}.h5',
monitor='loss',
save_best_only=True)
earlystop = EarlyStopping(min_delta=0.0001, patience=1)
tunemodel.fit(data['X_train'], data['y_train'].reshape(-1, 1),
              batch_size=16, epochs=10, callbacks=[checkpoint, earlystop],
              validation_data=(data['X_test'], data['y_test'].reshape(-1, 1)),  # needed for EarlyStopping's default val_loss monitor
              initial_epoch=0)
# test the model and see accuracy
score = tunemodel.evaluate(data['X_test'], data['y_test'].reshape(-1, 1), batch_size=16)
print(score)
Explanation: Fine-tune transfer with early stopping
Based on the previous section, the test set is used as the validation set, so as to monitor for early stopping.
End of explanation
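To see where early stopping kicked in, the recorded losses can be plotted (a sketch; it assumes the fit call above is captured, e.g. history = tunemodel.fit(...)):
# Hedged sketch: compare training and validation loss across epochs.
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.legend()
plt.show()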
from lib.data_utils import create_submission
from keras.models import load_model
# for freeze ResNet50 model (3 epochs)
simple_CNN = load_model('../models/freezeResNet_03-0.09.h5', custom_objects={'ktf': ktf})
print('Load model successfully.')
create_submission(simple_CNN, '../data/test.csv', '../submission/submission_freezeResNet_03.csv', 16, fit=True)
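For reference, a minimal sketch of what a submission writer might look like (an assumption about the behavior of the project's create_submission helper, not its actual implementation; column names follow the standard Digit Recognizer format):
# Hedged, hypothetical sketch of a minimal Kaggle submission writer.
import numpy as np
import pandas as pd

def write_submission_sketch(model, test_images, out_path, batch_size=16):
    probs = model.predict(test_images, batch_size=batch_size)      # (n, 10) class probabilities
    labels = np.argmax(probs, axis=1)
    sub = pd.DataFrame({'ImageId': np.arange(1, len(labels) + 1), 'Label': labels})
    sub.to_csv(out_path, index=False)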
Explanation: Create submissions
Load the saved trained models and produce predictions for submission on Kaggle.
End of explanation |
12,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: INPE
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:07
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
12,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data pre-processing in python, using Pima Indians diabetes dataset from National Institute of Diabetes and Digestive and Kidney Diseases
Citation
Step1: 2.0 Split data into feature (input) and target (output) set
Step2: 3.0 Rescale
Homogenise data of varying scales to take values between 0 and 1
Step3: <!--more-->
4.0 Standardisation
Standardise normally distributed data to have a mean of 0 and standard deviation of 1
Step4: 5.0 Normalisation
Normalise data such that each row has a vector length of 1 | Python Code:
import pandas as pd
from pandas import read_csv
pd.set_option('precision', 3) # set display precision to 3 significant figures
filename = 'C:/Users/craigrshenton/Desktop/Dropbox/python/python_pro/machine_learning_mastery_with_python/machine_learning_mastery_with_python_code/chapter_07/pima-indians-diabetes.data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
df = read_csv(filename, names=names)
df.head()
Explanation: Data pre-processing in python, using Pima Indians diabetes dataset from National Institute of Diabetes and Digestive and Kidney Diseases
Citation: Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
Feature Information:
Number of times pregnant
Plasma glucose concentration at 2 hours in an oral glucose tolerance test
Diastolic blood pressure (mm Hg)
Triceps skin fold thickness (mm)
2-Hour serum insulin (mu U/ml)
Body mass index (weight in kg/(height in m)^2)
Diabetes pedigree function
Age (years)
Class variable (0 or 1) i.e., Diabetes found? (no/yes)
1.0 Load data from CSV
End of explanation
feature_cols = df.columns[0:8]
X = df[feature_cols] # first 8 cols are features
y = df['class'] # last col is target data
Explanation: 2.0 Split data into feature (input) and target (output) set
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
df_scaled = pd.DataFrame(data=rescaledX, columns=feature_cols)
df_scaled.head()
Explanation: 3.0 Rescale
Homogenise data of varying scales to take values between 0 and 1
End of explanation
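# Optional sanity check (an addition to the original walkthrough, not part of it):
# after MinMax scaling every feature should span the [0, 1] range
df_scaled.describe().loc[['min', 'max']]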
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X)
standardX = scaler.transform(X)
df_standard = pd.DataFrame(data=standardX, columns=feature_cols)
df_standard.head()
Explanation: <!--more-->
4.0 Standardisation
Standardise normally distributed data to have a mean of 0 and standard deviation of 1
End of explanation
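# Optional sanity check (an addition to the original walkthrough): after standardisation the
# column means should be approximately 0 and the standard deviations approximately 1
df_standard.describe().loc[['mean', 'std']]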
from sklearn.preprocessing import Normalizer
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
df_norm = pd.DataFrame(data=normalizedX, columns=feature_cols)
df_norm.head()
Explanation: 5.0 Normalisation
Normalise data such that each row has a vector length of 1
End of explanation |
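# Optional sanity check (an addition to the original walkthrough): every row of the
# normalised data should have (approximately) unit L2 norm
import numpy as np
np.linalg.norm(normalizedX, axis=1)[:5]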
12,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. You will
Step1: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint
Step9: Quiz Question. How many reviews contain the word perfect?
Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
Step10: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step11: Let us convert the data into NumPy arrays.
Step12: Quiz Question
Step13: Estimating conditional probability with link function
Recall from lecture that the link function is given by
Step14: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$
Step15: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture
Step16: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation)
Step17: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
Step18: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent
Step19: Now, let us run the logistic regression solver.
Step20: Quiz question
Step21: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step22: Quiz question
Step23: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows
Step24: Quiz question
Step25: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
Step26: Quiz question | Python Code:
import graphlab
Explanation: Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Implement the link function for logistic regression.
Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.
Implement gradient ascent.
Given a set of coefficients, predict sentiments.
Compute classification accuracy for the logistic regression model.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
products['sentiment']
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
Explanation: Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
End of explanation
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
End of explanation
products['perfect']
Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
products['perfect'].apply(lambda i: 1 if i>=1 else 0).sum()
Explanation: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
End of explanation
import numpy as np
Explanation: Quiz Question. How many reviews contain the word perfect?
Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
End of explanation
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
End of explanation
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
Explanation: Let us convert the data into NumPy arrays.
End of explanation
sentiment
Explanation: Quiz Question: How many features are there in the feature_matrix?
Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
Now, let us see what the sentiment column looks like:
End of explanation
def predict_probability(feature_matrix, coefficients):
    '''
    Produces a probabilistic estimate for P(y_i = +1 | x_i, w).
    Each estimate ranges between 0 and 1.
    '''
    # Take dot product of feature_matrix and coefficients
    # YOUR CODE HERE
    score = np.dot(feature_matrix, coefficients)
    # Compute P(y_i = +1 | x_i, w) using the link function (a vectorised sigmoid)
    # YOUR CODE HERE
    predictions = 1. / (1. + np.exp(-score))
    # return predictions
    return predictions
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
Explanation: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block:
End of explanation
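# Optional sanity check (an addition, not part of the original assignment): the derivative is
# just a dot product, so errors = [0.5, -0.25] with feature = [1., 2.] should give
# 0.5*1. + (-0.25)*2. = 0.0
print feature_derivative(np.array([0.5, -0.25]), np.array([1., 2.]))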
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset.
End of explanation
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
Explanation: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
End of explanation
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:, j])
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += derivative * step_size
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
Explanation: Now, let us run the logistic regression solver.
End of explanation
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
Explanation: Quiz question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease?
Predicting sentiments
Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
Step 1 can be implemented as follows:
End of explanation
probs = [1 if x > 0 else 0 for x in scores]    # class predictions encoded as 1/0 (1 = predicted positive)
Explanation: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
End of explanation
np.sum(probs)    # number of reviews predicted to have positive sentiment
print coefficients
np.sum([1 if x > 0 else 0 for x in sentiment])    # number of reviews that actually have positive sentiment
Explanation: Quiz question: How many reviews were predicted to have positive sentiment?
End of explanation
predict_labels = [1 if x > 0 else -1 for x in scores]
num_mistakes = np.sum(sentiment != predict_labels) # YOUR CODE HERE
accuracy = 1 - num_mistakes * 1.0 / len(sentiment)  # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
Explanation: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
End of explanation
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
Explanation: Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
Which words contribute most to positive & negative sentiments?
Recall that in Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
word_coefficient_tuples[:11]
Explanation: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
End of explanation
word_coefficient_tuples[-10:]
Explanation: Quiz question: Which word is not present in the top 10 "most positive" words?
Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
End of explanation |
12,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>03 - Pandas
Step1: <div class="alert alert-info" style="font-size
Step2: Reversing this operation, is reset_index
Step3: Selecting data based on the index
<div class="alert alert-warning" style="font-size
Step4: But the row or column indexer can also be a list, slice, boolean array (see next section), ..
Step5: <div class="alert alert-danger">
<b>NOTE</b>
Step6: The different indexing methods can also be used to assign data
Step7: <div class="alert alert-info" style="font-size
Step8: <div class="alert alert-success">
<b>EXERCISE 2</b>
Step9: <div class="alert alert-success">
<b>EXERCISE 3</b>
Step10: <div class="alert alert-success">
<b>EXERCISE 4</b>
Step11: <div class="alert alert-success">
<b>EXERCISE 5</b>
Step12: The next exercise uses the titanic data set
Step13: <div class="alert alert-success">
<b>EXERCISE 6</b>
Step14: We will later see an easier way to calculate both averages at the same time with groupby.
Alignment on the index
<div class="alert alert-danger">
**WARNING**
Step15: Pitfall
Step16: When updating values in a DataFrame, you can run into the infamous "SettingWithCopyWarning" and issues with chained indexing.
Assume we want to cap the population and replace all values above 50 with 50. We can do this using the basic [] indexing operation twice ("chained indexing")
Step17: However, we get a warning, and we can also see that the original dataframe did not change
Step18: The warning message explains that we should use .loc[row_indexer,col_indexer] = value instead. That is what we just learned in this notebook, so we can do
Step19: And now the dataframe actually changed
Step20: To explain why the original df[df['population'] > 50]['population'] = 50 didn't work, we can do the "chained indexing" in two explicit steps | Python Code:
import pandas as pd
# redefining the example dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
Explanation: <p><font size="6"><b>03 - Pandas: Indexing and selecting data - part II</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
countries = countries.set_index('country')
countries
Explanation: <div class="alert alert-info" style="font-size:120%">
<b>REMEMBER</b>: <br><br>
So as a summary, `[]` provides the following convenience shortcuts:
* **Series**: selecting a **label**: `s[label]`
* **DataFrame**: selecting a single or multiple **columns**:`df['col']` or `df[['col1', 'col2']]`
* **DataFrame**: slicing or filtering the **rows**: `df['row_label1':'row_label2']` or `df[mask]`
</div>
Changing the DataFrame index
We have mostly worked with DataFrames with the default 0, 1, 2, ... N row labels (except for the time series data). But, we can also set one of the columns as the index.
Setting the index to the country names:
End of explanation
countries.reset_index('country')
Explanation: Reversing this operation, is reset_index:
End of explanation
countries.loc['Germany', 'area']
Explanation: Selecting data based on the index
<div class="alert alert-warning" style="font-size:120%">
<b>ATTENTION!</b>: <br><br>
One of pandas' basic features is the labeling of rows and columns, but this also makes indexing a bit more complex compared to numpy. <br><br> We now have to distinguish between:
* selection by **label** (using the row and column names)
* selection by **position** (using integers)
</div>
Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
Both loc and iloc use the following pattern: df.loc[ <selection of the rows> , <selection of the columns> ].
This 'selection of the rows / columns' can be: a single label, a list of labels, a slice or a boolean mask.
Selecting a single element:
End of explanation
countries.loc['France':'Germany', ['area', 'population']]
Explanation: But the row or column indexer can also be a list, slice, boolean array (see next section), ..
End of explanation
countries.iloc[0:2,1:3]
Explanation: <div class="alert alert-danger">
<b>NOTE</b>:
* Unlike slicing in numpy, the end label is **included**!
</div>
Selecting by position with iloc works similar as indexing numpy arrays:
End of explanation
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
Explanation: The different indexing methods can also be used to assign data:
End of explanation
# %load _solutions/pandas_03b_indexing1.py
Explanation: <div class="alert alert-info" style="font-size:120%">
<b>REMEMBER</b>: <br><br>
Advanced indexing with **loc** and **iloc**
* **loc**: select by label: `df.loc[row_indexer, column_indexer]`
* **iloc**: select by position: `df.iloc[row_indexer, column_indexer]`
</div>
<div class="alert alert-success">
<b>EXERCISE 1</b>:
<p>
<ul>
<li>Add the population density as column to the DataFrame.</li>
</ul>
</p>
Note: the population column is expressed in millions.
</div>
End of explanation
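# One possible solution sketch (the course's own answer lives in the %load file above);
# population is in millions and area in km², so this gives people per km²:
countries['density'] = countries['population'] * 1e6 / countries['area']
countries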
# %load _solutions/pandas_03b_indexing2.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 2</b>:
<ul>
<li>Select the capital and the population column of those countries where the density is larger than 300</li>
</ul>
</div>
End of explanation
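# A possible solution sketch, reusing the 'density' column created in Exercise 1:
countries.loc[countries['density'] > 300, ['capital', 'population']]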
# %load _solutions/pandas_03b_indexing3.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 3</b>:
<ul>
<li>Add a column 'density_ratio' with the ratio of the population density to the average population density for all countries.</li>
</ul>
</div>
End of explanation
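# A possible solution sketch, again assuming the 'density' column from Exercise 1:
countries['density_ratio'] = countries['density'] / countries['density'].mean()
countries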
# %load _solutions/pandas_03b_indexing4.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 4</b>:
<ul>
<li>Change the capital of the UK to Cambridge</li>
</ul>
</div>
End of explanation
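# A possible solution sketch:
countries.loc['United Kingdom', 'capital'] = 'Cambridge'
countries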
# %load _solutions/pandas_03b_indexing5.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 5</b>:
<ul>
<li>Select all countries whose population density is between 100 and 300 people/km²</li>
</ul>
</div>
End of explanation
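# A possible solution sketch (again assuming the 'density' column from Exercise 1):
countries[(countries['density'] > 100) & (countries['density'] < 300)]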
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: The next exercise uses the titanic data set:
End of explanation
# %load _solutions/pandas_03b_indexing6.py
# %load _solutions/pandas_03b_indexing7.py
Explanation: <div class="alert alert-success">
<b>EXERCISE 6</b>:
* Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers. Do this now using `.loc`.
</div>
End of explanation
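# A possible solution sketch; the column names 'Sex' and 'Age' are assumed from the standard titanic.csv:
print(df.loc[df['Sex'] == 'male', 'Age'].mean())
print(df.loc[df['Sex'] == 'female', 'Age'].mean())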
population = countries['population']
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
Explanation: We will later see an easier way to calculate both averages at the same time with groupby.
Alignment on the index
<div class="alert alert-danger">
**WARNING**: **Alignment!** (unlike numpy)
* Pay attention to **alignment**: operations between series will align on the index:
</div>
End of explanation
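# A possible follow-up (not in the original notebook): Series.add() accepts a fill_value,
# which avoids the NaNs that plain `+` introduces for non-overlapping labels:
s1.add(s2, fill_value=0)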
df = countries.copy()
Explanation: Pitfall: chained indexing (and the 'SettingWithCopyWarning')
End of explanation
df[df['population'] > 50]['population'] = 50
Explanation: When updating values in a DataFrame, you can run into the infamous "SettingWithCopyWarning" and issues with chained indexing.
Assume we want to cap the population and replace all values above 50 with 50. We can do this using the basic [] indexing operation twice ("chained indexing"):
End of explanation
df
Explanation: However, we get a warning, and we can also see that the original dataframe did not change:
End of explanation
df.loc[df['population'] > 50, 'population'] = 50
Explanation: The warning message explains that we should use .loc[row_indexer,col_indexer] = value instead. That is what we just learned in this notebook, so we can do:
End of explanation
df
Explanation: And now the dataframe actually changed:
End of explanation
temp = df[df['population'] > 50]
temp['population'] = 50
Explanation: To explain why the original df[df['population'] > 50]['population'] = 50 didn't work, we can do the "chained indexing" in two explicit steps:
End of explanation |
12,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
High Schools dataset cleaning and exploration
In this notebook we will clean and explore the 2017 High Schools dataset by the NYC Department of Education.
Let's start by opening and examining it.
Step1: As you can see there are over 400 columns, so let's keep only the columns of interest.
Also notice that the boys column is a flag for boys-only schools. Since we are trying to help solve the problem of women in tech, it wouldn't make sense to keep them - let's filter them out.
Step2: Let's now make a quick comparison of college_career_rate in girls-only schools vs mixed ones.
Step5: Now notice that there are 5 columns on "academic opportunities", 10 columns on "programs", and 10 more columns on "interests", and that in each of these areas some schools might have something that is tech-related. For each school, let's try to find whether we can find some tech related words in any of those areas and let's call it "tech inclination".
Step6: Since 46% of schools are tech inclined and our assumption here was that 200 high schools were enough let's use only tech-inclined schools going forward. It could help the canvassing team if they were talking to female students from schools that have some tech-inclination.
However, let's first see how schools compare with each other taking that into consideration.
Step7: We can see from the violin plots above that even though tech inclined high schools have a sligtly higher college career rate median, they have slightly lower 25% and 75% quartiles. On the other hand, most high schools with 1500 or more students seem to have some kind of tech inclination.
Step8: Let's now shift our focus to the graduation_rate and college_career_rate columns. In particular, college_career_rate's definition is "at the end of the 2014-15 school year, the percent of students who graduated 'on time' by earning a diploma four years after they entered 9th grade".
We could multiply that by the total number of students in each school and calculate the potential number of college students each school has.
Step9: We can see that graduation_rate and college_career_rate have a strong correlation. That means if we have too many college_career_rate null values we can use graduation_rate as a proxy.
Step10: It seems that 14% of schools don't have figures on the graduation rate. Let's plot its distribution to help decide if we should either ignore the column or the schools without that data.
Step11: Since some schools have a really low college career rate let's use that data and filter schools that don't have that data point.
Let's do that and also plot the distribution of schools by their number of potential college students.
Step12: There seems to be a big gap in the number of schools with more than 1000 potential college students as compared to the number of schools with fewer potential college students.
Since we want to reduce the number of recommended stations by at least 90% and there are 24 schools with at least 1000 potential college students let's filter those and ignore the other ones.
Next, let's examine the subway and bus columns, which tell us which subway and bus lines are near each school.
Step13: Notice how the 75% percentile of college career rate in high schools with a subway nearby is much higher. Also notice that the schools with the highest number of students all seem to have a subway nearby.
Going forward we will filter schools without a subway station nearby.
Step14: Let's turn our attention to the location column. We have to extract latitude and longitude in order to be able to match this dataset with the subway stations location coordinates. Let's use add_coord_columns() which is defined in coordinates.py.
Step15: Let's plot the the schools coordinates to see their geographical distribution
Step16: The interactive map is available here.
It seems like we have all school data we need to perform the recommendations. Let's just clean the DataFrame columns and save it as a pickle binary file for later use in another Jupyter notebook. | Python Code:
import pandas as pd
all_high_schools = pd.read_csv('data/DOE_High_School_Directory_2017.csv')
all_high_schools.shape
pd.set_option('display.max_columns', 453)
all_high_schools.head(3)
Explanation: High Schools dataset cleaning and exploration
In this notebook we will clean and explore the 2017 High Schools dataset by the NYC Department of Education.
Let's start by opening and examining it.
End of explanation
boys_only = all_high_schools['boys'] == 1
columns_of_interest = ['dbn', 'school_name', 'boro', 'academicopportunities1',
'academicopportunities2', 'academicopportunities3',
'academicopportunities4', 'academicopportunities5', 'neighborhood',
'location', 'subway', 'bus', 'total_students', 'start_time', 'end_time',
'graduation_rate', 'attendance_rate', 'pct_stu_enough_variety',
'college_career_rate', 'girls', 'specialized', 'earlycollege',
'program1', 'program2', 'program3', 'program4', 'program5', 'program6',
'program7', 'program8', 'program9', 'program10', 'interest1',
'interest2', 'interest3', 'interest4', 'interest5', 'interest6',
'interest7', 'interest8', 'interest9', 'interest10', 'city', 'zip']
df = all_high_schools[~boys_only][columns_of_interest]
df.set_index('dbn', inplace=True)
df.shape
df.head(3)
Explanation: As you can see there are over 400 columns, so let's keep only the columns of interest.
Also notice that the boys column is a flag for boys-only schools. Since we are trying to help solve the problem of women in tech, it wouldn't make sense to keep them - let's filter them out.
End of explanation
df['all'] = ""
df['girls'] = df['girls'].map({1: 'Girls-only'})
df['girls'].fillna('Mixed', inplace=True)
%matplotlib inline
import pylab as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
ax = sns.violinplot(data=df, x='all', y="college_career_rate", hue="girls", split=True)
sns.despine(left=True)
ax.set_xlabel("")
plt.suptitle('College Career Rate by Type of School (Mixed or Girls-only)')
plt.savefig('figures/girls-only.png', bbox_inches='tight')
Explanation: Let's now make a quick comparison of college_career_rate in girls-only schools vs mixed ones.
End of explanation
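# A numeric complement to the violin plot above (an addition, not part of the original notebook):
df.groupby('girls')['college_career_rate'].describe()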
import numpy as np
def contains_terms(column_name, terms=["tech"]):
    """Checks if at least one of the terms is present in the given column."""
    contains = []
    for term in terms:
        contains.append(df[column_name].str.contains(term, case=False))
    not_null = df[column_name].notnull()
    return (not_null) & (np.any(contains, axis=0))
def contains_terms_columns(column_root, n_columns, terms=["tech"]):
    """Checks if at least one of the terms is present in the columns given by its root name."""
    if n_columns == 1:
        return contains_terms(column_root, terms)
    tech = []
    for i in range(n_columns):
        column_name = column_root + str(i + 1)
        tech.append(contains_terms(column_name, terms))
    return np.any(tech, axis=0)
tech_academicopportunities = contains_terms_columns('academicopportunities', 5,
terms=['technology', 'computer', 'web',
'programming', 'coding'])
len(df[tech_academicopportunities])
# searching for 'tech' might match the word 'technical'
all_tech_program = contains_terms_columns('program', 10, terms=['programming', 'computer',
'tech'])
technical_program = contains_terms_columns('program', 10, terms=['technical'])
tech_program = (all_tech_program) & ~(technical_program)
len(df[tech_program])
tech_interest = contains_terms_columns('interest', 10, terms=['computer', 'technology'])
len(df[tech_interest])
tech_inclined = (tech_academicopportunities) | (tech_program) | (tech_interest)
print(len(df[tech_inclined]))
print("{:.1f}%".format(100 * len(df[tech_inclined]) / len(df)))
Explanation: Now notice that there are 5 columns on "academic opportunities", 10 columns on "programs", and 10 more columns on "interests", and that in each of these areas some schools might have something that is tech-related. For each school, let's check whether we can find some tech-related words in any of those areas, and let's call it "tech inclination".
End of explanation
df['tech_academicopportunities'] = tech_academicopportunities.astype(int)
df['tech_program'] = tech_program.astype(int)
df['tech_interest'] = tech_interest.astype(int)
df.head(3)
def fill_tech_summary(academicopportunities, program, interest):
if academicopportunities:
if program:
if interest:
return 'tech_academicopportunities+program+interest'
else:
return 'tech_academicopportunities+program'
elif interest:
return 'tech_academicopportunities+interest'
else:
return 'tech_academicopportunities'
elif program:
if interest:
return 'tech_program+interest'
else:
return 'tech_program'
elif interest:
return 'tech_interest'
else:
return 'no_tech_inclination'
df['tech_summary'] = df.apply(lambda x: fill_tech_summary(x.loc['tech_academicopportunities'],
x.loc['tech_program'],
x.loc['tech_interest']),
axis='columns')
df['tech_summary'].head()
fig, ax = plt.subplots(figsize=(20, 10))
ax = sns.violinplot(data=df, x='all', y="college_career_rate", hue="tech_summary", ax=ax,
hue_order=['no_tech_inclination', 'tech_interest', 'tech_program',
'tech_academicopportunities', 'tech_program+interest',
'tech_academicopportunities+program',
'tech_academicopportunities+interest',
'tech_academicopportunities+program+interest'])
sns.despine(left=True)
ax.set_xlabel("")
plt.suptitle('College Career Rate by Types of Tech Inclination')
plt.savefig('figures/types-tech-inclination.png', bbox_inches='tight')
def fill_tech_summary_compact(academicopportunities, program, interest):
if academicopportunities or program or interest:
return 'tech_inclined'
else:
return 'not_tech_inclined'
df['tech_summary_compact'] = df.apply(lambda x: fill_tech_summary_compact(
x.loc['tech_academicopportunities'],
x.loc['tech_program'],
x.loc['tech_interest']),
axis='columns')
df['tech_summary_compact'].head()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8))
ax1 = sns.violinplot(data=df, x='all', y="college_career_rate", hue="tech_summary_compact",
split=True, inner="quartile", ax=ax1)
ax2 = sns.violinplot(data=df, x='all', y="total_students", hue="tech_summary_compact",
split=True, inner="quartile", ax=ax2)
ax1.set_xlabel("")
ax2.set_xlabel("")
sns.despine(left=True)
plt.suptitle('College Career Rate and Total Students by Tech Inclination')
plt.savefig('figures/breakdown-tech-inclination.png', bbox_inches='tight')
Explanation: Since 46% of schools are tech inclined and our assumption here was that 200 high schools were enough let's use only tech-inclined schools going forward. It could help the canvassing team if they were talking to female students from schools that have some tech-inclination.
However, let's first see how schools compare with each other taking that into consideration.
End of explanation
new_columns = ['school_name', 'boro', 'tech_academicopportunities', 'neighborhood', 'location',
'subway', 'bus', 'total_students', 'start_time', 'end_time', 'graduation_rate',
'attendance_rate', 'pct_stu_enough_variety', 'college_career_rate', 'girls',
'specialized', 'earlycollege', 'tech_program', 'tech_interest', 'city', 'zip']
tech_schools = df[tech_inclined][new_columns]
tech_schools.head(3)
Explanation: We can see from the violin plots above that even though tech-inclined high schools have a slightly higher college career rate median, they have slightly lower 25% and 75% quartiles. On the other hand, most high schools with 1500 or more students seem to have some kind of tech inclination.
End of explanation
fig, ax = plt.subplots(figsize=(20, 8))
ax.set_xlim(0, 1.02)
ax.set_ylim(0, 1.05)
sns.regplot(tech_schools['graduation_rate'], tech_schools['college_career_rate'], order=3)
ax.set_xlabel('Graduation Rate')
ax.set_ylabel('College Career Rate')
plt.suptitle('College Career Rate by Graduation Rate')
plt.savefig('figures/college-career-and-graduation-rate.png', bbox_inches='tight')
Explanation: Let's now shift our focus to the graduation_rate and college_career_rate columns. In particular, college_career_rate's definition is "at the end of the 2014-15 school year, the percent of students who graduated 'on time' by earning a diploma four years after they entered 9th grade".
We could multiply that by the total number of students in each school and calculate the potential number of college students each school has.
End of explanation
potential = tech_schools['college_career_rate'] * tech_schools['total_students']
potential.sort_values(inplace=True, ascending=False)
potential
null_college_career_rate = tech_schools.college_career_rate.isnull()
print("{:.1f}%".format(100 * len(tech_schools[null_college_career_rate]) / len(tech_schools)))
null_graduation_rate = tech_schools.graduation_rate.isnull()
print("{:.1f}%".format(100 * len(tech_schools[null_graduation_rate]) / len(tech_schools)))
print("{:.1f}%".format(100 * len(tech_schools[(null_college_career_rate) & \
(null_graduation_rate)]) \
/ len(tech_schools)))
fig, ax = plt.subplots(figsize=(20, 8))
sns.distplot(tech_schools['total_students'], bins=range(0, 6000, 250), kde=False, rug=True)
ax.set_xlabel('Total Students')
ax.set_ylabel('Number of Schools with that Many Students')
plt.suptitle('Number of Schools by Total Students')
tech_schools[(null_college_career_rate) & (null_graduation_rate)]['total_students'].max()
Explanation: We can see that graduation_rate and college_career_rate have a strong correlation. That means if we have too many college_career_rate null values we can use graduation_rate as a proxy.
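If that fallback were ever needed, a minimal (hypothetical) sketch could look like the following; the steps below simply drop the rows with missing values instead:
tech_schools['college_career_rate'] = tech_schools['college_career_rate'] \
    .fillna(tech_schools['graduation_rate'])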
End of explanation
import numpy as np
fig, ax = plt.subplots(figsize=(20, 8))
ax.set_xlim(0.05, 1.15)
schools_to_plot = tech_schools[~(null_college_career_rate)]
sns.distplot(schools_to_plot['college_career_rate'], bins=np.arange(0, 1, 0.1))
ax.set_xlabel('College Career Rate')
ax.set_ylabel('Number of Schools with that Rate')
plt.suptitle('Number of Schools by College Career Rate')
fig.savefig('figures/college-career-rate.png', bbox_inches='tight')
Explanation: It seems that 14% of schools don't have figures on the graduation rate. Let's plot its distribution to help decide if we should either ignore the column or the schools without that data.
End of explanation
# Copy to avoid chained indexing and the SettingWithCopy warning (http://bit.ly/2kkXW5B)
tech_col_potential = pd.DataFrame(tech_schools, copy=True)
tech_col_potential.dropna(subset=['college_career_rate'], inplace=True)
tech_col_potential['potential_college_students'] = (tech_col_potential['total_students'] *\
tech_col_potential['college_career_rate'])\
.astype(int)
tech_col_potential.sort_values('potential_college_students', inplace=True, ascending=False)
tech_col_potential.head(3)
fig, ax = plt.subplots(figsize=(20, 8))
sns.distplot(tech_col_potential['potential_college_students'], bins=range(0, 6000, 250),
kde=False, rug=True)
ax.set_xlabel('Potential College Students')
ax.set_ylabel('Number of Schools')
plt.suptitle('Number of Schools by Potential College Students')
plt.savefig('figures/potential-college-students.png', bbox_inches='tight')
high_potential = tech_col_potential['potential_college_students'] > 1000
high_potential_schools = tech_col_potential[high_potential]
len(high_potential_schools)
Explanation: Since some schools have a really low college career rate let's use that data and filter schools that don't have that data point.
Let's do that and also plot the distribution of schools by their number of potential college students.
End of explanation
high_potential_schools.loc[:, ('subway', 'bus')]
high_potential_schools['subway_nearby'] = high_potential_schools.apply(
    lambda x: 'no subway' if pd.isnull(x['subway']) else 'subway nearby',
    axis='columns')
high_potential_schools['subway_nearby']
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8))
high_potential_schools['all'] = ""
ax1 = sns.violinplot(data=high_potential_schools, x='all', y="college_career_rate",
hue="subway_nearby", split=True, inner="quartile", ax=ax1)
ax2 = sns.violinplot(data=high_potential_schools, x='all', y="total_students",
hue="subway_nearby", split=True, inner="quartile", ax=ax2)
ax1.set_xlabel("")
ax2.set_xlabel("")
sns.despine(left=True)
plt.suptitle('College Career Rate and Total Students by Subway Nearby')
fig.savefig('figures/subway-vs-no-subway.png', bbox_inches='tight')
Explanation: There seems to be a big gap in the number of schools with more than 1000 potential college students as compared to the number of schools with fewer potential college students.
Since we want to reduce the number of recommended stations by at least 90% and there are 24 schools with at least 1000 potential college students, let's filter those and ignore the other ones.
Next, let's examine the subway and bus columns, which tell us which subway and bus lines are near each school.
End of explanation
# Copy to avoid chained indexing and the SettingWithCopy warning (http://bit.ly/2kkXW5B)
close_to_subway = pd.DataFrame(high_potential_schools, copy=True)
close_to_subway.dropna(subset=['subway'], inplace=True)
close_to_subway
Explanation: Notice how the 75th percentile of college career rate in high schools with a subway nearby is much higher. Also notice that the schools with the highest number of students all seem to have a subway nearby.
Going forward we will filter schools without a subway station nearby.
End of explanation
import coordinates as coord
coord.add_coord_columns(close_to_subway, 'location')
close_to_subway.loc[:, ('latitude', 'longitude')]
Explanation: Let's turn our attention to the location column. We have to extract latitude and longitude in order to be able to match this dataset with the subway stations location coordinates. Let's use add_coord_columns() which is defined in coordinates.py.
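The helper itself is not shown in this notebook; a hypothetical sketch of what it might do, assuming each location string ends with a "(latitude, longitude)" pair, is:
def add_coord_columns(df, column):
    # pull the trailing "(lat, lon)" pair out of the location string
    coords = df[column].str.extract(r'\(([-\d.]+),\s*([-\d.]+)\)\s*$')
    df['latitude'] = coords[0].astype(float)
    df['longitude'] = coords[1].astype(float)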
End of explanation
!pip install folium
import folium
close_to_subway_map = folium.Map([40.72, -73.92], zoom_start=11, tiles='CartoDB positron',
width='60%')
for i, school in close_to_subway.iterrows():
marker = folium.RegularPolygonMarker([school['latitude'], school['longitude']],
popup=school['school_name'], color='RoyalBlue',
fill_color='RoyalBlue', radius=5)
marker.add_to(close_to_subway_map)
close_to_subway_map.save('maps/close_to_subway.html')
close_to_subway_map
Explanation: Let's plot the schools' coordinates to see their geographical distribution:
End of explanation
close_to_subway.rename(columns={'subway': 'subway_lines'}, inplace=True)
df_to_pickle = close_to_subway.loc[:, ('school_name', 'potential_college_students', 'latitude',
'longitude', 'start_time', 'end_time', 'subway_lines',
'city')]
df_to_pickle
df_to_pickle.to_pickle('pickle/high_schools.p')
Explanation: The interactive map is available here.
It seems like we have all school data we need to perform the recommendations. Let's just clean the DataFrame columns and save it as a pickle binary file for later use in another Jupyter notebook.
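In that follow-up notebook the file can be loaded back with, for example:
high_schools = pd.read_pickle('pickle/high_schools.p')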
End of explanation |
12,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conics in polar form
The general form of a conic in polar form is
$$ r = \frac{de}{e\cos{\theta} + 1} $$
Let's do two examples
Step1: The Hyperbola
Note that in the definition of a hyperbola we have $\|X-F\| = ed(X,L)$ with $e>1$. That is, the distance from $X$ to the focus is greater than the distance from $X$ to the directrix $L$, and in this case we can have
Step2: Now the other branch would be given by the equation | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
theta = np.arange(-1.7,1.7,0.1)
r = lambda x: 2.0/(1+np.cos(x))
theta
rho =r(theta)
rho
## first I'll try a polar plot
ax=plt.subplot(111,projection='polar')
ax.plot(theta,rho)
ponto = lambda x: [r(x)*np.cos(x) , r(x)*np.sin(x)]
par = ponto(theta)
ax2=plt.subplot(111)
plt.grid()
ax2.plot(par[0],par[1])
ax2.plot([2,2],[-2,2])
ax2.plot([0], [0], "ro")  # mark the focus at the origin
ax2.set_aspect('equal')
plt.xlim(-1,4)
plt.savefig('conica3.png', format='png')
## The other one is an ellipse
theta1=np.arange(0,6.5,0.05)
r1=lambda x: 3/(1+0.5*np.cos(x))
ax3=plt.subplot(111,projection="polar")
ax3.plot(theta1,r1(theta1))
## In Cartesian coordinates again
ponto1 = lambda x: [r1(x)*np.cos(x),r1(x)*np.sin(x)]
elipse = ponto1(theta1)
ax4=plt.subplot(111)
plt.grid()
ax4.plot(elipse[0],elipse[1])
ax4.plot([6,6],[-3,3])
ax4.plot([0],[0], "ro")
ax4.set_aspect('equal')
plt.savefig('conica4.png',format="png")
Explanation: Conics in polar form
The general form of a conic in polar form is
$$ r = \frac{de}{e\cos{\theta} + 1} $$
Let's do two examples:
$$r = \frac{2}{1+\cos{\theta}} \text{ and } r = \frac{3}{1+0.5\cos{\theta}}$$
End of explanation
theta3 = np.arange(-1.7,1.7,0.05)
r3 = lambda x : 4/(2*np.cos(x)+1)
ax4=plt.subplot(111,projection="polar")
ax4.plot(theta3,r3(theta3))
ponto3 = lambda x: [r3(x)*np.cos(x),r3(x)*np.sin(x)]
hiperbole=ponto3(theta3)
ax5=plt.subplot(111)
plt.grid()
ax5.plot(hiperbole[0],hiperbole[1])
ax5.plot([2,2],[4,-4])
ax5.plot([0],[0],"ro")
ax5.set_aspect("equal")
Explanation: The Hyperbola
Note that in the definition of a hyperbola we have $\|X-F\| = ed(X,L)$ with $e>1$. That is, the distance from $X$ to the focus is greater than the distance from $X$ to the directrix $L$, and in this case we can have:
$$ \|X\| = e|\|X\|\cos(\theta)-d| = e(\|X\|\cos(\theta)-d)$$
that is,
$$\rho = \frac{ed}{e\cos(\theta) -1}$$
Let's look at the example:
$$ \rho = \frac{4}{2\cos(\theta)+1} $$
End of explanation
theta4=np.arange(-0.8,0.8,0.05)
r4= lambda x : 4/(2*np.cos(x)-1)
ax4=plt.subplot(111,projection="polar")
ax4.plot(theta4,r4(theta4))
theta4=np.arange(-0.8,0.8,0.05)
ponto4 = lambda x : [r4(x)*np.cos(x),r4(x)*np.sin(x)]
hiperbole2 = ponto4(theta4)
ax6=plt.subplot(111)
ax6.plot(hiperbole2[0],hiperbole2[1])
ax6.plot(hiperbole[0],hiperbole[1])
plt.grid()
ax6.set_aspect("equal")
ax6.plot([2,2],[4,-4])
ax6.plot([0],[0],"ro")
Explanation: Now the other branch would be given by the equation:
$$ \rho = \frac{ed}{e\cos(\theta)-1} $$ and again we will look at the example:
$$ \rho = \frac{4}{2\cos(\theta)-1} $$
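(The radius is positive only where $2\cos(\theta) - 1 > 0$, i.e. $|\theta| < \pi/3 \approx 1.05$, which is why the code above restricts theta4 to np.arange(-0.8, 0.8, 0.05).)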
End of explanation |
12,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: 1. Data Manipulation and Cleaning
1.1 Data Ingestion
We read the file 'data/vgsales.csv' using the pandas read_csv() function, and store it into the variable data.
Step2: We now check the overall composition of the dataset using the .info() method
Step3: 1.2 Data Cleaning
We clearly see there exists a few null values among both the Publisher and Year columns. The strategy here is to remove them. <br>
Let us remove the null values using the pandas .dropna() method
Step4: The dataset was generated at the end of the year 2016. Hence, each observation with an occurrence greater than or equal to 2017 must be considered as either corrupted or incorrect, and hence must be removed.
Step5: 1.3 Data Consistency
Before moving to a proper data analysis, we need to be sure the data is consistent. By consistency I mean an intrinsic characteristic of the data. For instance, the Publisher name might contain a typo, or sometimes a Publisher might be identified by several names.
To investigate this, let us check how Sony appears inside our dataset
Step6: Obviously, the Sony Publisher identifier is not homogeneously identified among observations.
We might think of, say, changing "Sony Computer Entertainment", "Sony Computer Entertainment America", "Sony Computer Entertainment Europe", "Sony Music Entertainment" and "Sony Online Entertainment" to "Sony".
To do this, we basically create a custom method, called merging_info_publisher, that should be called whenever we wish to perform such kind of cleaning on our data.
Step7: Possibly, this pattern is repeated for different publishers as well. Here we identify a few Publishers that might have different labels inside the dataset, and we apply the merging_info_publisher method for each of them.
Step8: Let us check now how "Sony" is mapped inside our dataset
Step9: An extra control is to convert EA Sports to Electronic Arts.
Step10: Also, convert ['Bandai', 'Namco Bandai', 'Namco', 'Namco Bandai Games' ] to Namco.
Step11: Let us check the absolute distribution of the top 20 Publishers in our dataset
Step12: Finally, remove the Publisher called "Unknown"
Step13: 2. Total Games Released Each Year
We now want to know when the video game industry experienced a drastic development. Based on the number of games released each year, we might be able to find out when the video games boom happened. We store the distribution of video games releases by year inside the variable counter_df_by_year.
Step14: Let us embed that dataframe into a graphical dimension
Step15: There was a significant boom in the late 2000s. Since then, the distinct number of releases has shrunk, possibly due to greater convergence on popular titles by both customers and developers.
3. Publisher Analysis with respect to Global Sales
Instead of considering absolute frequencies (with respect to the number of video games releases) of the top publishers, a better proxy is to consider the top publishers by Global Sales, identified by the column "Global_Sales".
Step16: 3.2 Graphical Representation of the Publishers
We again embed the above dataset into a graphical dimension to better understand the data.
Step17: 4. Understand the most popular Platform by Year
We now want to go further and try to see which Platform was the most popular for each Year. To do so, we again use as a proxy the total Global Sales with respect to video games for each specific Platform.
However, this requires a little bit of data wrangling, and therefore we need to perform a few steps to be able to answer to this question.
First of all, we count the number of video games by Platform using the .groupby() method, and we store the result into the variable most_popular_platforms. This has been done for you.
Step18: We then need two steps
Step19: We aggregate the data with respect to Global Sales using the pandas .pivot_table(). We would like to have, as columns, the platforms' vendor and, as index, the year. Store the result into the pivoted_data_df.
Step20: The next part is a little bit tricky and it has been already prefilled for you.
We now want to find the Platform which has the top sell for each distinct year. To do that we employ the NumPy method argsort() which basically allows to sort, for each row, the observations in ascending order.
Step21: We then select the column names based on the sorting operation, so that in the first place we will find the platform with highest value with respect to the aggregated Global Sales.
Step22: 4.1 Distribution of the most popular platforms during the last 40 years
Store the results inside the variable most_popular_platform_by_year.
Step23: 5. Which was the most popular game in each Year?
Store the resulting dataframe object into the variable most_popular_games.
Step24: 6. Which was the most sold Title by Platform?
Create a new object, called most_popular_vg_by_platform, that joins the information from most_popular_platform_by_year and most_popular_games. Print the result in the console.
Step25: 7. Which are the most sold videogames ever?
We are interested in investigating which were the most sold titles in the last 40 years. To do so, we employ the .groupby() method, and store the result into the most_wanted_vg variable. | Python Code:
# IF YOU ARE RUNNING THIS NOTEBOOK VIA GOOGLE COLAB, PLEASE UNCOMMENT and RUN THIS CELL
# from google.colab import drive
# drive.mount('/content/drive')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 500)
Explanation: <a href="https://colab.research.google.com/github/crazy54/work-related/blob/master/%5Bwebinar%5D_Hands_on_Data_Analysis_with_Python_student_nb.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
TITLE: Karting Through Video Games History with Python: how to manipulate your data from Zelda to CoD
Author: Andrea Giussani - Data Scientist at Cloud Academy
Date: Nov 12, 2020
Have you ever wondered what is the most played video game in the last thirty years? How about which gaming platform is the most used in the last decade?
In this hands-on webinar, we are going to explore advanced data manipulation techniques that are typically used to translate raw data into insightful plots and charts, enabling you to answer these types of questions. To perform this analysis, we will use Python and we will explore two of the most important data analytics libraries: Pandas and Matplotlib.
To get the most out of this webinar, we encourage some familiarity with Python, although extensive experience with the language would be best. For those who are new to Python, you can get your feet wet with the following Cloud Academy courses:
Working with Python
Python Functions, Modules, and Packages
Data Wrangling with Pandas
By the end of this webinar, the participants will be able to:
Understand the main functionalities of Pandas for data manipulation
Plot raw data into nice looking charts in Python using Matplotlib
Complete a data analytics pipeline for exploratory data analysis
Participants are strongly encouraged to download the necessary data from the following Github repo.
You can follow along with the webinar by using either your favorite local Python or Google Colab environment. For more details, please check the readme on the aforementioned Github repo.
End of explanation
import pandas as pd
data = pd.read_csv('/content/drive/My Drive/ca.webinars/zelda/data/vgsales.csv')
Explanation: 1. Data Manipulation and Cleaning
1.1 Data Ingestion
We read the file 'data/vgsales.csv' using the pandas read_csv() function, and store it into the variable data.
End of explanation
data.info()
Explanation: We now check the overall composition of the dataset using the .info() method
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 1.2 Data Cleaning
We clearly see there exists a few null values among both the Publisher and Year columns. The strategy here is to remove them. <br>
Let us remove the null values using the pandas .dropna() method:
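One possible way to fill the cell above (a sketch, not the official solution):
data = data.dropna(subset=['Publisher', 'Year'])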
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: The dataset was generated at the end of the year 2016. Hence, each observation with an occurrence greater than or equal to 2017 must be considered as either corrupted or incorrect, and hence must be removed.
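A sketch of one way to fill the cell above (again, just a possible solution):
data = data[data['Year'] < 2017]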
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 1.3 Data Consistency
Before moving to a proper data analysis, we need to be sure the data is consistent. By consistency I mean an intrinsic characteristic of the data. For instance, the Publisher name might contain a typo, or sometimes a Publisher might be identified by several names.
To investigate this, let us check how Sony appears inside our dataset: we hence access to the Publisher column, which is of type object, and try to check the different names Sony is used for:
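A possible sketch for the cell above:
data[data['Publisher'].str.contains('Sony', case=False)]['Publisher'].value_counts()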
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
def merging_info_publisher(data: pd.DataFrame, publisher: str):
data.loc[data['Publisher'].str.contains(publisher, case=False), 'Publisher'] = publisher
return data[data['Publisher'].str.contains(publisher, case=False)]['Publisher'].value_counts()
Explanation: Obviously, the Sony Publisher identifier is not homogeneously identified among observations.
We might think of, say, changing "Sony Computer Entertainment", "Sony Computer Entertainment America", "Sony Computer Entertainment Europe", "Sony Music Entertainment" and "Sony Online Entertainment" to "Sony".
To do this, we basically create a custom method, called merging_info_publisher, that should be called whenever we wish to perform such kind of cleaning on our data.
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
publishers = ['Sony', 'Nintendo', 'Ubisoft', 'Activision', 'Electronic Arts', 'Konami']
for publisher in publishers:
merging_info_publisher(data, publisher)
Explanation: Possibly, this pattern is repeated for different publishers as well. Here we identify a few Publishers that might have different labels inside the dataset, and we apply the merging_info_publisher method for each of them.
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
data[data['Publisher'].str.contains('Sony')]['Publisher'].value_counts()
Explanation: Let us check now how "Sony" is mapped inside our dataset:
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
data.loc[data['Publisher'].str.contains('EA Sports', case=False), 'Publisher'] = 'Electronic Arts'
Explanation: An extra control is to convert EA Sports to Electronic Arts.
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
namco_names = ['Bandai', 'Namco Bandai', 'Namco', 'Namco Bandai Games' ]
data.loc[data['Publisher'].str.contains('|'.join(namco_names), case=False), 'Publisher'] = 'Namco'
Explanation: Also, convert ['Bandai', 'Namco Bandai', 'Namco', 'Namco Bandai Games' ] to Namco.
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: Let us check the absolute distribution of the top 20 Publishers in our dataset:
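A possible sketch for the cell above:
data['Publisher'].value_counts().head(20)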
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: Finally, remove the Publisher called "Unknown": this is obviously not a meaningful publisher name. Store the result into the global variable filtered_data
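A possible sketch for the cell above:
filtered_data = data[data['Publisher'] != 'Unknown']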
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 2. Total Games Released Each Year
We now want to know when the video game industry experienced a drastic development. Based on the number of games released each year, we might be able to find out when the video games boom happened. We store the distribution of video games releases by year inside the variable counter_df_by_year.
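A possible sketch for the cell above:
counter_df_by_year = filtered_data.groupby('Year')['Name'].count()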
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
ax.set_xlabel('Year')
ax.set_ylabel('Number of Games')
ax.set_title('Evolution of Video Games Industry')
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: Let us embed that dataframe into a graphical dimension:
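A possible sketch for the two gaps around the prefilled axis labels above:
fig, ax = plt.subplots(figsize=(12, 6))   # first gap: create the figure and plot the counts
counter_df_by_year.plot(kind='bar', ax=ax)
plt.show()                                # second gap: render the figure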
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
total_sales_df = filtered_data[['Global_Sales', 'Publisher']].drop_duplicates().groupby('Publisher').sum()
top_10_sales = total_sales_df.sort_values(by='Global_Sales', ascending=False).head(10)
Explanation: There was a significant boom in the late 2000s. Since then, the distinct number of releases has shrunk, possibly due to greater convergence on popular titles by both customers and developers.
3. Publisher Analysis with respect to Global Sales
Instead of considering absolute frequencies (with respect to the number of video games releases) of the top publishers, a better proxy is to consider the top publishers by Global Sales, identified by the column "Global_Sales".
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 3.2 Graphical Representation of the Publishers
We again embed the above dataset into a graphical dimension to better understand the data.
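A possible sketch for the cell above:
top_10_sales.plot(kind='barh', figsize=(12, 6), legend=False)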
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
most_popular_platforms = filtered_data[['Name','Platform']].drop_duplicates().groupby('Platform').count()
most_popular_platforms.rename(columns={'Name': 'Total Observations'}, inplace=True)
Explanation: 4. Understand the most popular Platform by Year
We now want to go further and try to see which Platform was the most popular for each Year. To do so, we again use as a proxy the total Global Sales with respect to video games for each specific Platform.
However, this requires a little bit of data wrangling, and therefore we need to perform a few steps to be able to answer this question.
First of all, we count the number of video games by Platform using the .groupby() method, and we store the result into the variable most_popular_platforms. This has been done for you.
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
top_20_platforms = most_popular_platforms.sort_values(by='Total Observations', ascending=False).head(20)
filtered_data_top20 = filtered_data[filtered_data['Platform'].isin(list(top_20_platforms.index))]
Explanation: We then need two steps:
* store inside the variable top_20_platforms the top 20 platforms with respect to the column Total Observations;
* filter the filtered_data with those Platforms. Store the result into filtered_data_top20.
This has been done for you.
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: We aggregate the data with respect to Global Sales using the pandas .pivot_table(). We would like to have, as columns, the platforms' vendor and, as index, the year. Store the result into the pivoted_data_df.
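A possible sketch for the cell above:
pivoted_data_df = filtered_data_top20.pivot_table(index='Year', columns='Platform',
                                                  values='Global_Sales', aggfunc='sum',
                                                  fill_value=0)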
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
rows_arrangement = np.argsort(-pivoted_data_df.values, axis=1)
Explanation: The next part is a little bit tricky and it has been already prefilled for you.
We now want to find the Platform which has the top sales for each distinct year. To do that we employ the NumPy method argsort(), which returns, for each row, the column positions that would sort the observations in ascending order; since we negate the values, this effectively ranks the platforms by descending Global Sales.
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
data_platform_by_year = pd.DataFrame(pivoted_data_df.columns[rows_arrangement], index=pivoted_data_df.index)
Explanation: We then select the column names based on the sorting operation, so that in the first place we will find the platform with highest value with respect to the aggregated Global Sales.
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 4.1 Distribution of the most popular platforms during the last 40 years
Store the results inside the variable most_popular_platform_by_year.
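A possible sketch for the cell above, consistent with how the prefilled loop later accesses row['Year'] and row['Platform']:
most_popular_platform_by_year = (data_platform_by_year[[0]]
                                 .rename(columns={0: 'Platform'})
                                 .reset_index())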
End of explanation
# DO NOT REMOVE! THIS HAS BEEN PREFILLED FOR YOU
most_popular_games = pd.DataFrame()
for _, row in most_popular_platform_by_year.iterrows():
year = row['Year']
platform = row['Platform']
inner_df = filtered_data.query("Year == @year & Platform==@platform")
pivoted_table_year_platform = inner_df.pivot_table(
index = 'Year',
columns='Name',
values='Global_Sales',
aggfunc='sum',
fill_value=0
)
temp_col_max_value = pivoted_table_year_platform.max(axis=1).to_frame() # finds max value by row
temp_col_max_value.rename(columns={0:'Total sells (ML of units)'}, inplace=True)
temp_col_max = pivoted_table_year_platform.idxmax(axis=1).to_frame() # find the column with the greatest value on each row
temp_col_max.rename(columns={0:'Most Wanted Title'}, inplace=True)
merging_dfs = pd.concat([temp_col_max, temp_col_max_value], axis=1)
most_popular_games = most_popular_games.append(merging_dfs)
Explanation: 5. Which was the most popular game in each Year?
Store the resulting dataframe object into the variable most_popular_games.
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 6. Which was the most sold Title by Platform?
Create a new object, called most_popular_vg_by_platform, that joins the information from most_popular_platform_by_year and most_popular_games. Print the result in the console.
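A possible sketch for the cell above (it assumes most_popular_games is indexed by Year, as the prefilled loop produces):
most_popular_vg_by_platform = most_popular_platform_by_year.merge(
    most_popular_games, left_on='Year', right_index=True)
print(most_popular_vg_by_platform)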
End of explanation
# -------------------------------------------
# TO BE FILLED
# -------------------------------------------
Explanation: 7. Which are the most sold videogames ever?
We are interested in investigating which were the most sold titles in the last 40 years. To do so, we employ the .groupby() method, and store the result into the most_wanted_vg variable.
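A possible sketch for the cell above:
most_wanted_vg = (filtered_data.groupby('Name')['Global_Sales']
                  .sum()
                  .sort_values(ascending=False))
most_wanted_vg.head(10)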
End of explanation |
12,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VARMAX models
This is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available.
Step1: Model specification
The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1
Step2: Example 2
Step3: Caution | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
dta = sm.datasets.webuse('lutkepohl2', 'http://www.stata-press.com/data/r12/')
dta.index = dta.qtr
endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]
Explanation: VARMAX models
This is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available.
End of explanation
# exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend')
exog = endog['dln_consump']
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog)
res = mod.fit(maxiter=1000)
print(res.summary())
Explanation: Model specification
The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).
Example 1: VAR
Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.
End of explanation
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')
res = mod.fit(maxiter=1000)
print(res.summary())
Explanation: Example 2: VMA
A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.
End of explanation
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))
res = mod.fit(maxiter=1000)
print(res.summary())
Explanation: Caution: VARMA(p,q) specifications
Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.
End of explanation |
12,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List comprehensions
List comprehensions are a Python notation that makes it easy to build lists.
Step1: A table for the game board
Solution 1
Step2: Solution 2
Step3: The correct solution
Step4: A short solution (using list comprehensions)
Step5: The shortest solution (is it the best?)
Step6: Is creating a complete table with all the playing squares really necessary?
Animals
Find the mistakes, ambiguities or inaccuracies in this code
Step7: This is how it could look more readable and understandable
Step8: Another similar function
Step9: And its more readable variant
Step10: Is it possible to shorten and simplify this function?
Step11: Yes, it is possible
Step12: Snake | Python Code:
[x for x in range(10)]
[x**2 for x in range(10)]
[x**2 for x in range(10) if x % 2 == 0]
[(x, x**2) for x in range(10)]
[[y for y in range(3)] for x in range(10)]
Explanation: List comprehensions
List comprehensions are a Python notation that makes it easy to build lists.
End of explanation
def vytvor_tabulku():
zap_tabulka = []
for rada_x in '.', '.', '.', '.', '.', '.', '.', '.', '.', '.':
radek = []
for rada_y in '.', '.', '.', '.', '.', '.', '.', '.', '.', '.':
radek.append(rada_y)
zap_tabulka.append(radek)
return zap_tabulka
Explanation: A table for the game board
Solution 1
End of explanation
def vytvor_prazdne_herni_pole(r,s):
seznam_radku = []
for a in range(r):
radek = ["."]
for b in range(s-1):
radek.append(".")
seznam_radku.append(radek)
return seznam_radku
Explanation: Solution 2
End of explanation
def vytvor_tabulku(velikost):
seznam_radku = []
for a in range(velikost):
radek = []
for b in range(velikost):
radek.append(".")
seznam_radku.append(radek)
return seznam_radku
Explanation: The correct solution
End of explanation
def vytvor_tabulku(velikost):
tabulka = []
for x in range(velikost):
radek = ['.' for x in range(velikost)]
tabulka.append(radek)
return tabulka
Explanation: A short solution (using list comprehensions)
End of explanation
def vytvor_tabulku(velikost):
return [list('.' * velikost) for x in range(velikost)]
def vytvor_tabulku(velikost):
return [['.'] * velikost] * velikost
Explanation: The shortest solution (is it the best?)
End of explanation
zvirata = [ "pes", "kočka", "králík", "had", "ježek"]
znak = "k"
def pismeno(jmeno):
for i in range(5):
if znak in jmeno[i][0]:
print(jmeno[i])
return
pismeno(zvirata)
Explanation: Is creating a complete table with all the playing squares really necessary?
Animals
Find the mistakes, ambiguities or inaccuracies in this code:
End of explanation
zvirata = [ "pes", "kočka", "králík", "had", "ježek"]
def s_prvnim_pismenem(seznam_zvirat, pismeno):
for zvire in seznam_zvirat:
if zvire.startswith(pismeno):
print(zvire)
s_prvnim_pismenem(zvirata, 'k')
Explanation: This is how it could look more readable and understandable
End of explanation
def kratke(jmeno = zvirata):
for i in range(len(jmeno)):
if len(jmeno[i]) < 5:
print(jmeno[i])
else:
print(end = "")
return jmeno
kratke(zvirata)
Explanation: Another similar function:
End of explanation
def kratke(seznam_zvirat):
for jmeno in seznam_zvirat:
if len(jmeno) < 5:
print(jmeno)
kratke(zvirata)
Explanation: And its more readable variant:
End of explanation
def overeni(seznam):
"ovÄÅÃ, zda je zadané slovo v seznamu a vrátà True/False"
otazka = input("Zadej název zvÃÅete, jeÅŸ chceÅ¡ ovÄÅit: ")
if otazka in seznam:
return True
else:
return False
Explanation: Is it possible to shorten and simplify this function?
End of explanation
def overeni(seznam):
"ovÄÅÃ, zda je zadané slovo v seznamu a vrátà True/False"
otazka = input("Zadej název zvÃÅete, jeÅŸ chceÅ¡ ovÄÅit: ")
return otazka in seznam
Explanation: Yes, it is possible
End of explanation
from random import randrange
def velikost_hraciho_pole():
while True:
odpoved = input('Zadej velikost pole pro hada: ')
try:
velikost = int(odpoved)
except ValueError:
            print('Velikost musí být celé číslo')
else:
            if velikost < 5:
                print('Pole musí být rozumně veliké')
            else:
                break
return velikost
def vykresli_mapu(velikost, had, ovoce):
for x in range(velikost):
for y in range(velikost):
if (x, y) in had:
print('X', end=' ')
elif (x, y) in ovoce:
print('?', end=' ')
else:
print('.', end=' ')
print()
def posun(velikost, had, ovoce):
while True:
smer = input('Zadej smer posunu [s, j, v, z]: ')
smer = smer.lower().strip()
if smer not in ('s', 'j', 'v', 'z'):
print('Nekorektni smer!')
else:
break
hlava = had[-1]
x, y = hlava
if smer == 's':
nova_hlava = x-1, y
elif smer == 'j':
nova_hlava = x+1, y
elif smer == 'v':
nova_hlava = x, y+1
elif smer == 'z':
nova_hlava = x, y-1
if nova_hlava in had:
print('Narazil si sam do sebe')
return False
x, y = nova_hlava
if x < 0 or x > velikost-1 or y < 0 or y > velikost-1:
print('Vyjel si mimo herni pole')
return False
if nova_hlava in ovoce:
ovoce.remove(nova_hlava)
else:
del had[0]
had.append(nova_hlava)
return True
def pridej_ovoce(velikost, had, ovoce):
ovoce.append((randrange(0, velikost), randrange(0, velikost)))
while ovoce[-1] in had:
del ovoce[-1]
ovoce.append((randrange(0, velikost), randrange(0, velikost)))
velikost = velikost_hraciho_pole()
had = [(0, 0), (0, 1), (0, 2)]
ovoce = []
pridej_ovoce(velikost, had, ovoce)
vykresli_mapu(velikost, had, ovoce)
while posun(velikost, had, ovoce):
vykresli_mapu(velikost, had, ovoce)
if not ovoce:
pridej_ovoce(velikost, had, ovoce)
Explanation: Snake
End of explanation |
12,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p>
<img src="http
Step1: <center><img src="https
Step2: past and present
From https
Step3: For a nicer description evaluate the following cell
Step4: %%bash
To start a Jupyter process to serve our notebooks
Step5: %%latex
Just a bit of math, beautified
Step6: %%timeit
Time an example from your lab sessions (link)
Step8: Pretty printers
Write code that writes your notebook, beautifully
Step9: nbviewer
If you publish your notebooks (Github, huh?), it is possible to render them as static web documents | Python Code:
__AUTHORS__ = {'am': ("Andrea Marino",
"[email protected]",),
'mn': ("Massimo Nocentini",
"[email protected]",
"https://github.com/massimo-nocentini/",)}
__KEYWORDS__ = ['Python', 'Jupyter', 'notebooks', 'keynote',]
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
<small>
Massimo Nocentini, PhD.
<br><br>
February 7, 2020: init
</small>
</div>
</p>
<br>
<br>
<div align="center">
<b>Abstract</b><br>
A (very concise) introduction to the Python ecosystem.
</div>
End of explanation
outline = []
outline.append('Hello!')
outline.append('Python')
outline.append('Whys and refs')
outline.append('On the shoulders of giants')
outline.append('Set the env up')
outline.append('Notebooks')
outline.append('Course agenda')
import this
Explanation: <center><img src="https://upload.wikimedia.org/wikipedia/commons/c/c3/Python-logo-notext.svg"></center>
End of explanation
%lsmagic
Explanation: past and present
From https://en.wikipedia.org/wiki/Python_(programming_language)
- Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
- Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often described as a "batteries included" language due to its comprehensive standard library.
- Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0, released in 2000, introduced features like list comprehensions and a garbage collection system capable of collecting reference cycles. Python 3.0, released in 2008, was a major revision of the language that is not completely backward-compatible, and much Python 2 code does not run unmodified on Python 3.
- PSF Python Brochure Project
From https://www.python.org/
Functions Defined. The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists.
Compound Data Types. Lists (known as arrays in other languages) are one of the compound data types that Python understands. Lists can be indexed, sliced and manipulated with other built-in functions.
Intuitive Interpretation. Calculations are simple with Python, and expression syntax is straightforward: the operators +, -, * and / work as expected; parentheses () can be used for grouping.
Quick & Easy to Learn. Experienced programmers in any other language can pick up Python very quickly, and beginners find the clean syntax and indentation structure easy to learn.
All the Flow You'd Expect. Python knows the usual control flow statements that other languages speak – if, for, while and range – with some of its own twists, of course.
Some supporting quotes here.
From https://docs.python.org/3/
beware of shadows
Python can be installed in many different ways with respect to different needs.
- https://www.anaconda.com/
- https://www.spyder-ide.org/
- https://www.sagemath.org/
We advise sticking to the official one for the sake of being self-contained and using a unified environment.
All such distributions customize the base package for specific domains; in the future you will be able to pick the one that best suits your needs; for the present, trust the default one.
Therefore, go ahead and install the Python interpreter.
https://www.python.org/downloads/
our working environment
There are many different possibilities to run Python programs:
- edit the program my-script.py and then invoke the bare bone<br>interpreter with $ python my-script.py
- use an Integrated Development Environment (IDE from now on):
- https://www.spyder-ide.org/
- https://www.jetbrains.com/pycharm/
- https://vscodium.com/
Whichever you feel comfortable with is okay.
The important thing is that you play in a safe environment.
Quoting the official doc:
Python provides support for creating lightweight "virtual environments" with their own site directories, optionally isolated from system site directories. Each virtual environment has its own Python binary (which matches the version of the binary that was used to create this environment) and can have its own independent set of installed Python packages in its site directories.
Those environments allow you to freely (un)install modules and customize the interpreter, without applying those changes to the system installation.
https://docs.python.org/3/tutorial/venv.html
bash
$ python3 -m venv unifi-env # creates a virtual environment
bash
$ source unifi-env/bin/activate # enter into our safe environment
bash
(unifi-env) $ pip install ipython jupyter matplotlib numpy scipy \
sympy pandas # install some packages
bash
(unifi-env) $ python do-I-halt-or-not.py # run your cool stuff
bash
(unifi-env) $ deactivate # exit the environment
bash
$ # back to the usual shell
Notebooks
Notebooks are interactive web pages, served by a backend process called Jupyter
formerly, everything lived within IPython. Now, it has been refactored into many smaller projects:
Jupyter is just a backend, a kind of proxy built using the ZeroMQ messaging system
Jupyter interacts with pluggable kernels, namely interpreters for the chosen programming language
tying the knot: you can write and evaluate code directly on the web page
pragmatically: a notebook is a set of cells, containing both code and structured text
Jupyter architecture, precisely
The Notebook Document Format<br>
Jupyter Notebooks are an open document format based on JSON. They contain a complete record of the user's sessions and embed code, narrative text, equations and rich output.
Interactive Computing Protocol<br>
The Notebook communicates with computational Kernels using the Interactive Computing Protocol, an open network protocol based on JSON data over ZMQ and WebSockets.
The Kernel<br>
Kernels are processes that run interactive code in a particular programming language and return output to the user. Kernels also respond to tab completion and introspection requests.
Code cells
The following is a code cell: it has an identifier on the left, In [ ]:, and a blank space on the right where we can type code in:
Magics
All the following magics can be used in any session using the ipython interpreter:
End of explanation
%quickref
Explanation: For a nicer description evaluate the following cell:
End of explanation
%%bash
jupyter-notebook -h
Explanation: %%bash
To start a Jupyter process to serve our notebooks:
End of explanation
%%latex
\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}
Explanation: %%latex
Just a bit of math, beautified:
End of explanation
from functools import lru_cache

initial_conditions = {0:0, 1:1}
def make_fibonacci(maxsize=None):
    '''Make the Fibonacci sequence using memoization or not (set `maxsize` arg to 0)'''
@lru_cache(maxsize=maxsize)
def fibonacci(n):
return fibonacci(n-1) + fibonacci(n-2) if n not in initial_conditions else initial_conditions[n]
return fibonacci
%%timeit
fibonacci_memoization = make_fibonacci(maxsize=None)
[fibonacci_memoization(n) for n in range(20)]
%%timeit
fibonacci_naive = make_fibonacci(maxsize=0)
[fibonacci_naive(n) for n in range(20)]
Explanation: %%timeit
Time an example from your lab sessions (link):
End of explanation
import IPython.display
dir(IPython.display)[:10]
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
from IPython.display import HTML
oeis_url=r"http://oeis.org/"
HTML(r'<iframe width="100%" height="500" src="{url}" />'.format(url=oeis_url))
Explanation: Pretty printers
Write code that writes your notebook, beautifully:
End of explanation
%%bash
jupyter-nbconvert -h
Explanation: nbviewer
If you publish your notebooks (Github, huh?), it is possible to render them as static web documents:
using the proxy http://nbviewer.jupyter.org/;
from there you can browse a collection of tutorials, books and notebooks targeting different programming languages,
although we're mainly interested in those using Python, of course.
nbconvert
It is possible to convert a notebook to various formats:
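For example, to produce a static HTML page or a slide deck from a notebook (my-notebook.ipynb is just a placeholder name):
bash
$ jupyter-nbconvert --to html my-notebook.ipynb
$ jupyter-nbconvert --to slides my-notebook.ipynb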
End of explanation |
12,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration for setting up an ODE system
PyGOM – A Python Package for Simplifying Modelling with Systems of Ordinary Differential Equations https
Step1: The following parameterization, run over 100 days, results in an infection rate of approximately 60%.
The disease has a latent period of 2 days ($1/\alpha$), and individuals are infectious for 1 day ($1/\gamma$).
Step2: In this simple framework, reducing $\beta$ results in a smaller epidemic
Step3: Vaccinating 5% of the population (assuming instantaneous rollout) or natural immunity, delays the peak period, and reduces its magnitude. | Python Code:
# import required packages
from pygom import DeterministicOde, Transition, SimulateOde, TransitionType
import os
from sympy import symbols, init_printing
import numpy as np
import matplotlib.pyplot as mpl
import sympy
import itertools
# Add graphvis path (N.B. set to your local circumstances)
graphvis_path = 'h:\\Programs\\Graphvis-2.38\\bin\\'
if not graphvis_path in os.environ['PATH']:
os.environ['PATH'] = os.environ['PATH'] + ';' + graphvis_path
def print_ode2(self):
'''
Prints the ode in symbolic form onto the screen/console in actual
symbols rather than the word of the symbol.
Based on the PyGOM built-in but adapted for Jupyter
'''
A = self.get_ode_eqn()
B = sympy.zeros(A.rows,2)
for i in range(A.shape[0]):
B[i,0] = sympy.symbols('d' + str(self._stateList[i]) + '/dt=')
B[i,1] = A[i]
return B
# set up the symbolic SEIR model
state = ['S', 'E', 'I', 'R']
param_list = ['beta', 'alpha', 'gamma', 'N']
# Equations can be set up in a variety of ways; either by providing the equations for each state individually,
# or listing the transitions (shown here).
transition = [
Transition(origin='S', destination='E', equation='beta*S*I/N',
transition_type=TransitionType.T),
Transition(origin='E', destination='I', equation='alpha*E',
transition_type=TransitionType.T),
Transition(origin='I', destination='R', equation='gamma*I',
transition_type=TransitionType.T)
]
SEIR_model = DeterministicOde(state, param_list, transition=transition)
# display equations
print_ode2(SEIR_model)
# display graphical representation of the model
#SEIR_model.get_transition_graph()
Explanation: Demonstration for setting up an ODE system
PyGOM – A Python Package for Simplifying Modelling with Systems of Ordinary Differential Equations https://arxiv.org/pdf/1803.06934.pdf
Using PyGOM, we will set up a simple SEIR model. This model has many simplifying assumptions, including:
- no births or deaths
- homogeneous mixing
- no interventions
Susceptible population (S) are those that can catch the disease. A susceptible person becomes infected when they interact with an infected person. The chance of this interaction resulting in infection is described with parameter $\beta$.
$ \frac{dS}{dt} = -\beta S \frac{I}{N}$
Exposed population (E) are those that have contracted the disease but are not yet infectious. They become infectious with rate $\alpha$.
$ \frac{dE}{dt} = \beta S \frac{I}{N} - \alpha E$
Infected population (I) recover at rate $\gamma$.
$ \frac{dI}{dt} = \alpha E - \gamma I$
Removed population (R) are those who have immunity (described with initial conditions) or have recovered/died from the disease.
$ \frac{dR}{dt} = \gamma I$
Total population (N) is given by $N = S + E + I + R$.
End of explanation
# provide parameters
t = np.linspace(0, 100, 1001)
# initial conditions
# for a population of 10000, one case has presented, and we assume there is no natural immunity
x0 = [9999.0, 0.0, 1, 0.0]
# latent for 2 days
# ill for 1 day
params = {'beta': 1.6,
'alpha': 0.5,
'gamma': 1,
'N': sum(x0)}
SEIR_model.initial_values = (x0, t[0])
SEIR_model.parameters = params
solution = SEIR_model.integrate(t[1::])
SEIR_model.plot()
# calculate time point when maximum number of people are infectious
peak_i = np.argmax(solution[:,2])
print('Peak infection (days)', t[peak_i])
# calculate reproductive number R0
print('R0 (beta/gamma) = ', params['beta']/params['gamma'])
solution[:,0]
# function for altering parameters
model = DeterministicOde(state, param_list, transition=transition)
def parameterize_model(t=np.linspace(0,100,1001), beta=1.6, alpha=0.5, gamma=1, ic=[9999, 0, 1, 0], model=model):
params = {'beta': beta,
'alpha': alpha,
'gamma': gamma,
'N': sum(ic)}
model.initial_values = (ic, t[0])
model.parameters = params
sol = model.integrate(t[1::])
model.plot()
peak_i = np.argmax(sol[:,2])
print('Peak infection (days)', t[peak_i] )
print('R0 (beta/gamma) = ', params['beta']/params['gamma'])
Explanation: The following parameterization, run over 100 days, results in an infection rate of approximately 60%.
The disease has a latent period of 2 days ($1/\alpha$), and individuals are infectious for 1 day ($1/\gamma$).
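With these values the basic reproduction number is $R_0 = \beta/\gamma = 1.6/1 = 1.6$, which is the figure printed by the R0 line in parameterize_model.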
End of explanation
parameterize_model(beta=1.2, t=np.linspace(0,500,5001))
Explanation: In this simple framework, reducing $\beta$ results in a smaller epidemic:
- the peak infection time is delayed
- the magnitude of peak infection is reduced.
Reducing beta may crudely represent giving out anti-virals, which make a person less infectious.
End of explanation
parameterize_model(ic=[9490,5, 5, 500], beta=0.5, gamma=0.3, t=np.linspace(0,150,10))
Explanation: Vaccinating 5% of the population (assuming instantaneous rollout), or equivalent natural immunity, delays the peak period and reduces its magnitude.
End of explanation |
12,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Data
Step2: Detect Outliers
EllipticEnvelope assumes the data is normally distributed and based on that assumption "draws" an ellipse around the data, classifying any observation inside the ellipse as an inlier (labeled as 1) and any observation outside the ellipse as an outlier (labeled as -1). A major limitation of this approach is the need to specify a contamination parameter which is the proportion of observations that are outliers, a value that we don't know. | Python Code:
# Load libraries
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.datasets import make_blobs
Explanation: Title: Detecting Outliers
Slug: detecting_outliers
Summary: How to detect outliers for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
# Create simulated data
X, _ = make_blobs(n_samples = 10,
n_features = 2,
centers = 1,
random_state = 1)
# Replace the first observation's values with extreme values
X[0,0] = 10000
X[0,1] = 10000
Explanation: Create Data
End of explanation
# Create detector
outlier_detector = EllipticEnvelope(contamination=.1)
# Fit detector
outlier_detector.fit(X)
# Predict outliers
outlier_detector.predict(X)
Explanation: Detect Outliers
EllipticEnvelope assumes the data is normally distributed and based on that assumption "draws" an ellipse around the data, classifying any observation inside the ellipse as an inlier (labeled as 1) and any observation outside the ellipse as an outlier (labeled as -1). A major limitation of this approach is the need to specify a contamination parameter which is the proportion of observations that are outliers, a value that we don't know.
End of explanation |
12,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Raw-data-stats" data-toc-modified-id="Raw-data-stats-1"><span class="toc-item-num">1 </span>Raw data stats</a></span></li><li><span><a href="#Read-in-data" data-toc-modified-id="Read-in-data-2"><span class="toc-item-num">2 </span>Read in data</a></span><ul class="toc-item"><li><span><a href="#Produce-latex-table" data-toc-modified-id="Produce-latex-table-2.1"><span class="toc-item-num">2.1 </span>Produce latex table</a></span></li><li><span><a href="#Add-region" data-toc-modified-id="Add-region-2.2"><span class="toc-item-num">2.2 </span>Add region</a></span></li></ul></li><li><span><a href="#Calculate-number-of-empty-tiles" data-toc-modified-id="Calculate-number-of-empty-tiles-3"><span class="toc-item-num">3 </span>Calculate number of empty tiles</a></span><ul class="toc-item"><li><span><a href="#Create-sample-to-check-what's-empty" data-toc-modified-id="Create-sample-to-check-what's-empty-3.1"><span class="toc-item-num">3.1 </span>Create sample to check what's empty</a></span></li></ul></li><li><span><a href="#highest-number-of-markings-per-tile" data-toc-modified-id="highest-number-of-markings-per-tile-4"><span class="toc-item-num">4 </span>highest number of markings per tile</a></span></li><li><span><a href="#Convert-distance-to-meters" data-toc-modified-id="Convert-distance-to-meters-5"><span class="toc-item-num">5 </span>Convert distance to meters</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Reduction-of-number-of-fan-markings-to-finals" data-toc-modified-id="Reduction-of-number-of-fan-markings-to-finals-5.0.1"><span class="toc-item-num">5.0.1 </span>Reduction of number of fan markings to finals</a></span></li></ul></li></ul></li><li><span><a href="#Length-stats" data-toc-modified-id="Length-stats-6"><span class="toc-item-num">6 </span>Length stats</a></span><ul class="toc-item"><li><span><a href="#Blotch-sizes" data-toc-modified-id="Blotch-sizes-6.1"><span class="toc-item-num">6.1 </span>Blotch sizes</a></span></li><li><span><a href="#Longest-fans" data-toc-modified-id="Longest-fans-6.2"><span class="toc-item-num">6.2 </span>Longest fans</a></span></li></ul></li><li><span><a href="#North-azimuths" data-toc-modified-id="North-azimuths-7"><span class="toc-item-num">7 </span>North azimuths</a></span></li><li><span><a href="#User-stats" data-toc-modified-id="User-stats-8"><span class="toc-item-num">8 </span>User stats</a></span></li><li><span><a href="#pipeline-output-examples" data-toc-modified-id="pipeline-output-examples-9"><span class="toc-item-num">9 </span>pipeline output examples</a></span></li></ul></div>
Step1: Raw data stats
Step2: Read in data
Step3: Produce latex table
Step4: Add region
Adding a region identifier is immensely helpful for automatically plotting results across regions.
Step5: Calculate number of empty tiles
Step6: Create sample to check what's empty
Step7: highest number of markings per tile
Step8: Convert distance to meters
Step9: Reduction of number of fan markings to finals
Step10: Length stats
Percentage of fan markings below 100 m
Step11: Cumulative histogram of fan lengths
Step12: In words, the mean length of fans is {{f"{fans.distance_m.describe()['mean']
Step13: Blotch sizes
Step14: Longest fans
Step16: North azimuths
Step17: User stats
Step18: pipeline output examples | Python Code:
%matplotlib ipympl
import seaborn as sns
sns.set()
sns.set_context('paper')
sns.set_palette('colorblind')
from planet4 import io, stats, markings, plotting, region_data
from planet4.catalog_production import ReleaseManager
fans = pd.read_csv("/Users/klay6683/Dropbox/data/planet4/p4_analysis/P4_catalog_v1.0/P4_catalog_v1.0_L1C_cut_0.5_fan_meta_merged.csv")
blotch = pd.read_csv("/Users/klay6683/Dropbox/data/planet4/p4_analysis/P4_catalog_v1.0/P4_catalog_v1.0_L1C_cut_0.5_blotch_meta_merged.csv")
pd.set_option("display.max_columns", 150)
fans.head()
fans.l_s.head().values[0]
group_blotch = blotch.groupby("obsid")
type(group_blotch)
counts = group_blotch.marking_id.count()
counts.head()
counts.plot(c='r')
plt.figure()
counts.hist()
counts.max()
counts.min()
fans.head()
plt.figure(constrained_layout=True)
counts[:20].plot.bar()
plt.figure()
counts[:10].plot(use_index=True)
plt.figure()
counts[:10]
grouped = fans.groupby("obsid")
grouped.tile_id.nunique().sort_values(ascending=False).head()
%matplotlib inline
from planet4.markings import ImageID
p4id = ImageID('7t9')
p4id.image_name
p4id.plot_fans()
filtered = fans[fans.tile_id=='APF0000cia']
filtered.shape
p4id.plot_fans(data=filtered)
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Raw-data-stats" data-toc-modified-id="Raw-data-stats-1"><span class="toc-item-num">1 </span>Raw data stats</a></span></li><li><span><a href="#Read-in-data" data-toc-modified-id="Read-in-data-2"><span class="toc-item-num">2 </span>Read in data</a></span><ul class="toc-item"><li><span><a href="#Produce-latex-table" data-toc-modified-id="Produce-latex-table-2.1"><span class="toc-item-num">2.1 </span>Produce latex table</a></span></li><li><span><a href="#Add-region" data-toc-modified-id="Add-region-2.2"><span class="toc-item-num">2.2 </span>Add region</a></span></li></ul></li><li><span><a href="#Calculate-number-of-empty-tiles" data-toc-modified-id="Calculate-number-of-empty-tiles-3"><span class="toc-item-num">3 </span>Calculate number of empty tiles</a></span><ul class="toc-item"><li><span><a href="#Create-sample-to-check-what's-empty" data-toc-modified-id="Create-sample-to-check-what's-empty-3.1"><span class="toc-item-num">3.1 </span>Create sample to check what's empty</a></span></li></ul></li><li><span><a href="#highest-number-of-markings-per-tile" data-toc-modified-id="highest-number-of-markings-per-tile-4"><span class="toc-item-num">4 </span>highest number of markings per tile</a></span></li><li><span><a href="#Convert-distance-to-meters" data-toc-modified-id="Convert-distance-to-meters-5"><span class="toc-item-num">5 </span>Convert distance to meters</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Reduction-of-number-of-fan-markings-to-finals" data-toc-modified-id="Reduction-of-number-of-fan-markings-to-finals-5.0.1"><span class="toc-item-num">5.0.1 </span>Reduction of number of fan markings to finals</a></span></li></ul></li></ul></li><li><span><a href="#Length-stats" data-toc-modified-id="Length-stats-6"><span class="toc-item-num">6 </span>Length stats</a></span><ul class="toc-item"><li><span><a href="#Blotch-sizes" data-toc-modified-id="Blotch-sizes-6.1"><span class="toc-item-num">6.1 </span>Blotch sizes</a></span></li><li><span><a href="#Longest-fans" data-toc-modified-id="Longest-fans-6.2"><span class="toc-item-num">6.2 </span>Longest fans</a></span></li></ul></li><li><span><a href="#North-azimuths" data-toc-modified-id="North-azimuths-7"><span class="toc-item-num">7 </span>North azimuths</a></span></li><li><span><a href="#User-stats" data-toc-modified-id="User-stats-8"><span class="toc-item-num">8 </span>User stats</a></span></li><li><span><a href="#pipeline-output-examples" data-toc-modified-id="pipeline-output-examples-9"><span class="toc-item-num">9 </span>pipeline output examples</a></span></li></ul></div>
End of explanation
import dask.dataframe as dd
db = io.DBManager()
db.dbname
df = dd.read_hdf(db.dbname, 'df')
df.columns
grp = df.groupby(['user_name'])
s = grp.classification_id.nunique().compute().sort_values(ascending=False).head(5)
s
Explanation: Raw data stats
End of explanation
rm = ReleaseManager('v1.0')
db = io.DBManager()
data = db.get_all()
fans = pd.read_csv(rm.fan_merged)
fans.shape
fans.columns
from planet4.stats import define_season_column
define_season_column(fans)
fans.columns
season2 = fans[fans.season==2]
season2.shape
img223 = fans.query("image_name=='ESP_012265_0950'")
img223.shape
plt.figure()
img223.angle.hist()
fans.season.dtype
meta = pd.read_csv(rm.metadata_path, dtype='str')
cols_to_merge = ['OBSERVATION_ID',
'SOLAR_LONGITUDE', 'north_azimuth', 'map_scale']
fans = fans.merge(meta[cols_to_merge], left_on='obsid', right_on='OBSERVATION_ID')
fans.drop(rm.DROP_FOR_FANS, axis=1, inplace=True)
fans.image_x.head()
ground['image_x'] = pd.to_numeric(ground.image_x)
ground['image_y'] = pd.to_numeric(ground.image_y)
fans_new = fans.merge(ground[rm.COLS_TO_MERGE], on=['obsid', 'image_x', 'image_y'])
fans_new.shape
fans.shape
s = pd.to_numeric(ground.BodyFixedCoordinateX)
s.head()
s.round(decimals=4)
blotches = rm.read_blotch_file().assign(marking='blotch')
fans = rm.read_fan_file().assign(marking='fan')
combined = pd.concat([blotches, fans], ignore_index=True)
blotches.head()
Explanation: Read in data
End of explanation
fans.columns
cols1 = fans.columns[:13]
print(cols1)
cols2 = fans.columns[13:-4]
print(cols2)
cols3 = fans.columns[-4:-1]
cols3
fanshead1 = fans[cols1].head(10)
fanshead2 = fans[cols2].head(10)
fanshead3 = fans[cols3].head(10)
with open("fan_table1.tex", 'w') as f:
f.write(fanshead1.to_latex())
with open("fan_table2.tex", 'w') as f:
f.write(fanshead2.to_latex())
with open("fan_table3.tex", 'w') as f:
f.write(fanshead3.to_latex())
Explanation: Produce latex table
End of explanation
for Reg in region_data.regions:
reg = Reg()
print(reg.name)
combined.loc[combined.obsid.isin(reg.all_obsids), 'region'] = reg.name
fans.loc[fans.obsid.isin(reg.all_obsids), 'region']= reg.name
blotches.loc[blotches.obsid.isin(reg.all_obsids), 'region'] = reg.name
Explanation: Add region
Adding a region identifier is immensely helpful for automatically plotting results across regions.
End of explanation
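As a minimal illustration of why this is useful (a sketch, not part of the original analysis): with the region column in place, per-region summaries become one-liners.
# Sketch: number of catalog entries per region, using the column added above
fans.groupby('region').size().sort_values(ascending=False)
blotches.groupby('region').size().sort_values(ascending=False)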
tiles_marked = combined.tile_id.unique()
db = io.DBManager()
input_tiles = db.image_ids
input_tiles.shape[0]
n_empty = input_tiles.shape[0] - tiles_marked.shape[0]
n_empty
n_empty / input_tiles.shape[0]
empty_tiles = list(set(input_tiles) - set(tiles_marked))
all_data = db.get_all()
all_data.set_index('image_id', inplace=True)
empty_data = all_data.loc[empty_tiles]
meta = pd.read_csv(rm.metadata_path)
meta.head()
empty_tile_numbers = empty_data.reset_index().groupby('image_name')[['x_tile', 'y_tile']].max()
empty_tile_numbers['total'] = empty_tile_numbers.x_tile*empty_tile_numbers.y_tile
empty_tile_numbers.head()
n_empty_per_obsid = empty_data.reset_index().groupby('image_name').image_id.nunique()
n_empty_per_obsid = n_empty_per_obsid.to_frame()
n_empty_per_obsid.columns = ['n']
df = n_empty_per_obsid
df = df.join(empty_tile_numbers.total)
df = df.assign(ratio=df.n/df.total)
df = df.join(meta.set_index('OBSERVATION_ID'))
df['scaled_n'] = df.n / df.map_scale / df.map_scale
import seaborn as sns
sns.set_context('notebook')
df.plot(kind='scatter', y='ratio', x='SOLAR_LONGITUDE')
ax = plt.gca()
ax.set_ylabel('Fraction of empty tiles per HiRISE image')
ax.set_xlabel('Solar Longitude [$^\circ$]')
ax.set_title("Distribution of empty tiles vs time")
plt.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/empty_data_vs_ls.pdf")
df[df.ratio > 0.8]
Explanation: Calculate number of empty tiles
End of explanation
sample = np.random.choice(empty_tiles, 200)
cd plots
from tqdm import tqdm
for image_id in tqdm(sample):
fig, ax = plt.subplots(ncols=2)
plotting.plot_raw_fans(image_id, ax=ax[0])
plotting.plot_raw_blotches(image_id, ax=ax[1])
fig.savefig(f"empty_tiles/{image_id}_input_markings.png", dpi=150)
plt.close('all')
Explanation: Create sample to check what's empty
End of explanation
fans_per_tile = fans.groupby('tile_id').size().sort_values(ascending=False)
fans_per_tile.head()
blotches_per_tile = blotches.groupby('tile_id').size().sort_values(ascending=False)
blotches_per_tile.head()
print(fans_per_tile.median())
blotches_per_tile.median()
plt.close('all')
by_image_id = combined.groupby(['marking', 'tile_id']).size()
by_image_id.name = 'Markings per tile'
by_image_id = by_image_id.reset_index()
by_image_id.columns
g = sns.FacetGrid(by_image_id, col="marking", aspect=1.2)
bins = np.arange(0, 280, 5)
g.map(sns.distplot, 'Markings per tile', kde=False, bins=bins, hist_kws={'log':True})
plt.savefig('/Users/klay6683/Dropbox/src/p4_paper1/figures/number_distributions.pdf', dpi=150)
blotches_per_tile.median()
from planet4 import plotting
# %load -n plotting.plot_finals_with_input
def plot_finals_with_input(id_, datapath=None, horizontal=True, scope='planet4'):
imgid = markings.ImageID(id_, scope=scope)
pm = io.PathManager(id_=id_, datapath=datapath)
if horizontal is True:
kwargs = {'ncols': 2}
else:
kwargs = {'nrows': 2}
fig, ax = plt.subplots(figsize=(4,5), **kwargs)
ax[0].set_title(imgid.imgid, fontsize=8)
imgid.show_subframe(ax=ax[0])
for marking in ['fan', 'blotch']:
try:
df = getattr(pm, f"final_{marking}df")
except:
continue
else:
data = df[df.image_id == imgid.imgid]
imgid.plot_markings(marking, data, ax=ax[1])
fig.subplots_adjust(top=0.95,bottom=0, left=0, right=1, hspace=0.01, wspace=0.01)
fig.savefig(f"/Users/klay6683/Dropbox/src/p4_paper1/figures/{imgid.imgid}_final.png",
dpi=150)
plot_finals_with_input('7t9', rm.savefolder, horizontal=False)
markings.ImageID('7t9').image_name
Explanation: highest number of markings per tile
End of explanation
fans['distance_m'] = fans.distance*fans.map_scale
blotches['radius_1_m'] = blotches.radius_1*blotches.map_scale
blotches['radius_2_m'] = blotches.radius_2*blotches.map_scale
Explanation: Convert distance to meters
End of explanation
n_fan_in = 2792963
fans.shape[0]
fans.shape[0] / n_fan_in
Explanation: Reduction of number of fan markings to finals
End of explanation
import scipy
scipy.stats.percentileofscore(fans.distance_m, 100)
Explanation: Length stats
Percentage of fan markings below 100 m:
End of explanation
def add_percentage_line(ax, meters, column):
y = scipy.stats.percentileofscore(column, meters)
ax.axhline(y/100, linestyle='dashed', color='black', lw=1)
ax.axvline(meters, linestyle='dashed', color='black', lw=1)
ax.text(meters, y/100, f"{y/100:0.2f}")
plt.close('all')
fans.distance_m.max()
bins = np.arange(0,380, 5)
fig, ax = plt.subplots(figsize=(8,3), ncols=2, sharey=False)
sns.distplot(fans.distance_m, bins=bins, kde=False,
hist_kws={'cumulative':False,'normed':True, 'log':True},
axlabel='Fan length [m]', ax=ax[0])
sns.distplot(fans.distance_m, bins=bins, kde=False, hist_kws={'cumulative':True,'normed':True},
axlabel='Fan length [m]', ax=ax[1])
ax[0].set_title("Normalized Log-Histogram of fan lengths ")
ax[1].set_title("Cumulative normalized histogram of fan lengths")
ax[1].set_ylabel("Fraction of fans with given length")
add_percentage_line(ax[1], 100, fans.distance_m)
add_percentage_line(ax[1], 50, fans.distance_m)
fig.tight_layout()
fig.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/fan_lengths_histos.pdf",
dpi=150, bbox_inches='tight')
fans.query('distance_m>350')[['distance_m', 'obsid', 'l_s']]
fans.distance_m.describe()
Explanation: Cumulative histogram of fan lengths
End of explanation
fans.replace("Manhattan_Frontinella", "Manhattan_\nFrontinella", inplace=True)
fig, ax = plt.subplots()
sns.boxplot(y="region", x="distance_m", data=fans, ax=ax,
fliersize=3)
ax.set_title("Fan lengths in different ROIs")
fig.tight_layout()
fig.savefig("/Users/klay6683/Dropbox/src/p4_paper1/figures/fan_lengths_vs_regions.pdf",
dpi=150, bbox_inches='tight')
Explanation: In words, the mean length of fans is {{f"{fans.distance_m.describe()['mean']:.1f}"}} m, while the median is
{{f"{fans.distance_m.describe()['50%']:.1f}"}} m.
End of explanation
plt.figure()
cols = ['radius_1','radius_2']
sns.distplot(blotches[cols], kde=False, bins=np.arange(2.0,50.),
color=['r','g'], label=cols)
plt.legend()
plt.figure()
cols = ['radius_1_m','radius_2_m']
sns.distplot(blotches[cols], kde=False, bins=np.arange(2.0,50.),
color=['r','g'], label=cols)
plt.legend()
fig, ax = plt.subplots(figsize=(8,4))
sns.distplot(blotches.radius_2_m, bins=500, kde=False, hist_kws={'cumulative':True,'normed':True},
axlabel='Blotch radius_1 [m]', ax=ax)
ax.set_title("Cumulative normalized histogram for blotch lengths")
ax.set_ylabel("Fraction of blotches with given radius_1")
add_percentage_line(ax, 30, blotches.radius_2_m)
add_percentage_line(ax, 10, blotches.radius_2_m)
import scipy
scipy.stats.percentileofscore(blotches.radius_2_m, 30)
plt.close('all')
Explanation: Blotch sizes
End of explanation
fans.query('distance_m > 350')[
'distance_m distance obsid image_x image_y tile_id'.split()].sort_values(
by='distance_m')
from planet4 import plotting
plotting.plot_finals('de3', datapath=rm.catalog)
plt.gca().set_title('APF0000de3')
plotting.plot_image_id_pipeline('de3', datapath=rm.catalog, via_obsid=False, figsize=(12,8))
from planet4 import region_data
from planet4 import stats
stats.define_season_column(fans)
stats.define_season_column(blotches)
fans.season.value_counts()
fans.query('season==2').distance_m.median()
fans.query('season==3').distance_m.median()
from planet4 import region_data
for region in ['Manhattan2', 'Giza','Ithaca']:
print(region)
obj = getattr(region_data, region)
for s in ['season2','season3']:
print(s)
obsids = getattr(obj, s)
print(fans[fans.obsid.isin(obsids)].distance_m.median())
db = io.DBManager()
all_data = db.get_all()
image_names = db.image_names
g_all = all_data.groupby('image_id')
g_all.size().sort_values().head()
fans.columns
cols_to_drop = ['path', 'image_name', 'binning', 'LineResolution', 'SampleResolution', 'Line', 'Sample']
fans.drop(cols_to_drop, axis=1, inplace=True, errors='ignore')
fans.columns
fans.iloc[1]
Explanation: Longest fans
End of explanation
# LaTeX table rows (parsed below to recover the obsids); stored as one multi-line string
s = """ESP\_011296\_0975 & -82.197 & 225.253 & 178.8 & 2008-12-23 & 17:08 & 91 \\
ESP\_011341\_0980 & -81.797 & 76.13 & 180.8 & 2008-12-27 & 17:06 & 126 \\
ESP\_011348\_0950 & -85.043 & 259.094 & 181.1 & 2008-12-27 & 18:01 & 91 \\
ESP\_011350\_0945 & -85.216 & 181.415 & 181.2 & 2008-12-27 & 16:29 & 126 \\
ESP\_011351\_0945 & -85.216 & 181.548 & 181.2 & 2008-12-27 & 18:18 & 91 \\
ESP\_011370\_0980 & -81.925 & 4.813 & 182.1 & 2008-12-29 & 17:08 & 126 \\
ESP\_011394\_0935 & -86.392 & 99.068 & 183.1 & 2008-12-31 & 19:04 & 72 \\
ESP\_011403\_0945 & -85.239 & 181.038 & 183.5 & 2009-01-01 & 16:56 & 164 \\
ESP\_011404\_0945 & -85.236 & 181.105 & 183.6 & 2009-01-01 & 18:45 & 91 \\
ESP\_011406\_0945 & -85.409 & 103.924 & 183.7 & 2009-01-01 & 17:15 & 126 \\
ESP\_011407\_0945 & -85.407 & 103.983 & 183.7 & 2009-01-01 & 19:04 & 91 \\
ESP\_011408\_0930 & -87.019 & 86.559 & 183.8 & 2009-01-01 & 19:43 & 59 \\
ESP\_011413\_0970 & -82.699 & 273.129 & 184.0 & 2009-01-01 & 17:17 & 108 \\
ESP\_011420\_0930 & -87.009 & 127.317 & 184.3 & 2009-01-02 & 20:16 & 54 \\
ESP\_011422\_0930 & -87.041 & 72.356 & 184.4 & 2009-01-02 & 20:15 & 54 \\
ESP\_011431\_0930 & -86.842 & 178.244 & 184.8 & 2009-01-03 & 19:41 & 54 \\
ESP\_011447\_0950 & -84.805 & 65.713 & 185.5 & 2009-01-04 & 17:19 & 218 \\
ESP\_011448\_0950 & -84.806 & 65.772 & 185.6 & 2009-01-04 & 19:09 & 59 \\
"""
lines = s.split(' \\')
s.replace('\\', '')
obsids = [line.split('&')[0].strip().replace('\\','') for line in lines][:-1]
meta = pd.read_csv(rm.metadata_path)
meta.query('obsid in @obsids').sort_values(by='obsid')
blotches.groupby('obsid').north_azimuth.nunique()
Explanation: North azimuths
End of explanation
db = io.DBManager()
db.dbname = '/Users/klay6683/local_data/planet4/2018-02-11_planet_four_classifications_queryable_cleaned_seasons2and3.h5'
with pd.HDFStore(str(db.dbname)) as store:
user_names = store.select_column('df', 'user_name').unique()
user_names.shape
user_names[:10]
not_logged = [i for i in user_names if i.startswith('not-logged-in')]
logged = list(set(user_names) - set(not_logged))
len(logged)
len(not_logged)
not_logged[:20]
df = db.get_all()
df[df.marking=='fan'].shape
df[df.marking=='blotch'].shape
df[df.marking=='interesting'].shape
n_class_by_user = df.groupby('user_name').classification_id.nunique()
n_class_by_user.describe()
logged_users = df.user_name[~df.user_name.str.startswith("not-logged-in")].unique()
logged_users.shape
not_logged = list(set(df.user_name.unique()) - set(logged_users))
len(not_logged)
n_class_by_user[not_logged].describe()
n_class_by_user[logged_users].describe()
n_class_by_user[n_class_by_user>50].shape[0]/n_class_by_user.shape[0]
n_class_by_user.shape
Explanation: User stats
End of explanation
pm = io.PathManager('any', datapath=rm.savefolder)
cols1 = pm.fandf.columns[:8]
cols2 = pm.fandf.columns[8:-2]
cols3 = pm.fandf.columns[-2:]
print(pm.fandf[cols1].to_latex())
print(pm.fandf[cols2].to_latex())
print(pm.fandf[cols3].to_latex())
df = pm.fnotchdf.head(4)
cols1 = df.columns[:6]
cols2 = df.columns[6:14]
cols3 = df.columns[14:]
for i in [1,2,3]:
print(df[eval(f"cols{i}")].to_latex())
Explanation: pipeline output examples
End of explanation |
12,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-sr5', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-ESM2-SR5
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
12,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Getting Started with gensim
This section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example.
Core Concepts and Simple Example
At a very high level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. The vector representation can then be used to train a model, an algorithm that creates a different, usually more semantic, representation of the data. These three concepts are key to understanding how gensim works, so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.
Corpus
A corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised.
For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.
Step1: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenise our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
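A sketch of what this preprocessing can look like (the short stoplist below is an assumption for illustration, not an exhaustive stopword list):
from collections import defaultdict

# Remove common words using a tiny illustrative stoplist and tokenize on whitespace
stoplist = set('for a of the and to in'.split(' '))
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in raw_corpus]

# Count token frequencies and keep only tokens that appear more than once
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]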
Step2: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
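For example (a sketch that assumes the processed_corpus built in the previous step):
from gensim import corpora

# Assign a unique integer id to every token seen in the processed corpus
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)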
Step3: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpora, dictionaries that contain hundreds of thousands of tokens are quite common.
Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to
Step4: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts
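A sketch of that call (assuming the dictionary built above):
# Vectorize a new phrase; only tokens known to the dictionary are counted
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec)  # e.g. [(0, 1), (1, 1)] -- (token id, count) pairs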
Step5: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors
Step6: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
Model
Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors" | Python Code:
raw_corpus = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
Explanation: # Getting Started with gensim
This section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example.
Core Concepts and Simple Example
At a very high-level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. The vector representation can then be used to train a model, which is an algorithms to create different representations of the data, which are usually more semantic. These three concepts are key to understanding how gensim works so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.
Corpus
A corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised.
For our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.
End of explanation
# Create a set of frequent words
# stoplist = set('for a of the and to in'.split(' '))
# Lowercase each document, split it by white space and filter out stopwords
texts = [[word for word in document.lower().split()] # if word not in stoplist]
for document in raw_corpus]
# Count word frequencies
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# Only keep words that appear more than once
processed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]
processed_corpus
Explanation: This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.
After collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenise our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).
End of explanation
from gensim import corpora
dictionary = corpora.Dictionary(processed_corpus)
print(dictionary)
Explanation: Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.
End of explanation
print(dictionary.token2id)
Explanation: Because our corpus is small, there are only 12 different tokens in this Dictionary. For larger corpuses, dictionaries that contains hundreds of thousands of tokens are quite common.
Vector
To infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string "coffee milk coffee" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of "coffee", "milk", "sugar" and "spoon" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.
Our processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:
End of explanation
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
new_vec
Explanation: For example, suppose we wanted to vectorize the phrase "Human computer interaction" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts:
End of explanation
bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]
bow_corpus
Explanation: The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.
Note that "interaction" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.
We can convert our entire original corpus to a list of vectors:
End of explanation
from gensim import models
# train the model
tfidf = models.TfidfModel(bow_corpus)
# transform the "system minors" string
tfidf[dictionary.doc2bow("system minors".lower().split())]
Explanation: Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.
Model
Now that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.
One simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space where the frequency counts are weighted according to the relative rarity of each word in the corpus.
Here's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string "system minors":
End of explanation |
12,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Text Classification using TensorFlow/Keras on AI Platform </h1>
This notebook illustrates
Step1: Note
Step2: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step5: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform.
Step6: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https
Step7: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
Step8: Finally we will save our data, which is currently in-memory, to disk.
Step9: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>
Step10: Train on the Cloud
Let's first copy our training data to the cloud
Step11: Change the job name appropriately. View the job in the console, and wait until the job is complete.
Step12: Results
What accuracy did you get? You should see around 80%.
Rerun with Pre-trained Embedding
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about Glove at the project homepage | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: <h1> Text Classification using TensorFlow/Keras on AI Platform </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for AI Platform using BigQuery
<li> Creating a text classification model using the Estimator API with a Keras model
<li> Training on Cloud AI Platform
<li> Rerun with pre-trained embedding
</ol>
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.6'
if 'COLAB_GPU' in os.environ: # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached
from google.colab import auth
auth.authenticate_user()
# download "sidecar files" since on Colab, this notebook will be on Drive
!rm -rf txtclsmodel
!git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst
!mv training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .
!rm -rf training-data-analyst
# downgrade TensorFlow to the version this notebook has been tested with
#!pip install --upgrade tensorflow==$TFVERSION
import tensorflow as tf
print(tf.__version__)
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
Explanation: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 10
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
query=
SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM
(SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
title
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
df = bq.query(query + " LIMIT 5").to_dataframe()
df.head()
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform.
End of explanation
traindf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").to_dataframe()
evaldf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").to_dataframe()
Explanation: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).
End of explanation
traindf['source'].value_counts()
evaldf['source'].value_counts()
Explanation: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
End of explanation
import os, shutil
DATADIR='data/txtcls'
shutil.rmtree(DATADIR, ignore_errors=True)
os.makedirs(DATADIR)
traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
!head -3 data/txtcls/train.tsv
!wc -l data/txtcls/*.tsv
Explanation: Finally we will save our data, which is currently in-memory, to disk.
End of explanation
%%bash
pip install google-cloud-storage
rm -rf txtcls_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
-- \
--output_dir=${PWD}/txtcls_trained \
--train_data_path=${PWD}/data/txtcls/train.tsv \
--eval_data_path=${PWD}/data/txtcls/eval.tsv \
--num_epochs=0.1
Explanation: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>: model.py contains the TensorFlow model and task.py parses command line arguments and launches off the training job.
In particular look for the following:
tf.keras.preprocessing.text.Tokenizer.fit_on_texts() to generate a mapping from our word vocabulary to integers
tf.keras.preprocessing.text.Tokenizer.texts_to_sequences() to encode our sentences into a sequence of their respective word-integers
tf.keras.preprocessing.sequence.pad_sequences() to pad all sequences to be the same length
The embedding layer in the keras model takes care of one-hot encoding these integers and learning a dense emedding represetation from them.
Finally we pass the embedded text representation through a CNN model pictured below
<img src=images/txtcls_model.png width=25%>
Run Locally (optional step)
Let's make sure the code compiles by running locally for a fraction of an epoch.
This may not work if you don't have all the packages installed locally for gcloud (such as in Colab).
This is an optional step; move on to training on the cloud.
End of explanation
%%bash
gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
--job-dir=$OUTDIR \
--scale-tier=BASIC_GPU \
--runtime-version 2.3 \
--python-version 3.7 \
-- \
--output_dir=$OUTDIR \
--train_data_path=gs://${BUCKET}/txtcls/train.tsv \
--eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
--num_epochs=5
Explanation: Train on the Cloud
Let's first copy our training data to the cloud:
End of explanation
!gcloud ai-platform jobs describe txtcls_190209_224828
Explanation: Change the job name appropriately. View the job in the console, and wait until the job is complete.
End of explanation
!gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/
Explanation: Results
What accuracy did you get? You should see around 80%.
Rerun with Pre-trained Embedding
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about Glove at the project homepage: https://nlp.stanford.edu/projects/glove/
You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed.
End of explanation |
12,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Efficient programming for parallel computing
Timing and profiling
Step11: Profiling your code
Step12: Not covered
Step13: Pointers versus copies
Step14: Choosing the right container
Step17: Looping constructs
EXERCISE
Step18: Vector programming
Step19: Pointers versus copies
Step20: slice versus where
Step21: Vector programming versus looping
Step22: EXERCISE
Step24: Not covered
Step25: See
Step26: Sanity checking with pyflakes and pep8
Step28: Serialization and code encapsulation | Python Code:
%%file timing.py
some simple things to time
import time
def _list_comprehension(N):
return [x*x for x in xrange(N)]
def _for_append(N):
L = []
for x in xrange(N):
L.append(x*x)
return L
def _for_setitem(N):
L = [None]*N
i = 0
for x in xrange(N):
L[i] = x*x
i += 1
return L
def timed(f):
def dec(*args, **kwds):
start = time.time()
res = f(*args, **kwds)
dec.__time__[f.__name__] = time.time() - start
return res
def get_time():
return dec.__time__.values()[0]
dec.__time__ = {}
dec.timed = get_time
return dec
def compare(f1, f2, N, M=1000):
t1 = 0; t2 = 0
for i in xrange(M):
f1(N)
t1 += f1.timed()
for i in xrange(M):
f2(N)
t2 += f2.timed()
print "ratio: %s" % (t1/t2)
if __name__ == '__main__':
N = 10000
print("size = %s" % N)
start = time.time()
_list_comprehension(N)
end = time.time() - start
print("%s: list comp" % end)
start = time.time()
_for_append(N)
end = time.time() - start
print("%s: for append" % end)
start = time.time()
_for_setitem(N)
end = time.time() - start
print("%s: for setitem" % end)
# EOF
!python2.7 timing.py
import timing
@timing.timed
def sum_squared(x):
return sum(i*i for i in x)
print "result: %s" % sum_squared(xrange(50))
print "time: %s" % sum_squared.timed()
def sum_squared(x):
return sum(i*i for i in x)
%timeit sum_squared(xrange(50))
Explanation: Efficient programming for parallel computing
Timing and profiling: timeit and line_profiler
Timing your code
End of explanation
%%file profiling.py
some simple things to profile
GLOBAL = 1
def _repeat(counter):
Using the GLOBAL value directly.
for count in xrange(counter):
GLOBAL
def _repeat_local(counter):
Making GLOBAL a local variable.
local = GLOBAL
for count in xrange(counter):
local
def _repeat_2(counter):
Using the built-in `True` in a loop.
for count in xrange(counter):
True
def _repeat_local_2(counter):
Making `True` a local variable.
true = True
for count in xrange(counter):
true
def _test(counter):
Call all functions.
_repeat(counter)
_repeat_local(counter)
_repeat_2(counter)
_repeat_local_2(counter)
def profile(code_string):
Check the run times.
import cProfile
profiler = cProfile.Profile()
profiler.run(code_string)
profiler.print_stats()
if __name__ == '__main__':
profile('_test(int(1e8))')
# EOF
!python profiling.py
%%file http_search.py
http pattern search
import re
PATTERN = r"https?:\/\/[\w\-_]+(\.[\w\-_]+)+([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?"
@profile
def scan_for_http(f):
addresses = []
for line in f:
result = re.search(PATTERN, line)
if result:
addresses.append(result.group(0))
return addresses
if __name__ == "__main__":
import sys
f = open(sys.argv[1], 'r')
addresses = scan_for_http(f)
for address in addresses:
pass
#print(address)
# EOF
%%bash
kernprof -lv http_search.py sample.html
%%file http_search.py
http pattern search
import re
PATTERN = r"https?:\/\/[\w\-_]+(\.[\w\-_]+)+([\w\-\.,@?^=%&:/~\+#]*[\w\-\@?^=%&/~\+#])?"
@profile
def scan_for_http(f):
addresses = []
pat = re.compile(PATTERN) # <-- NOTE
for line in f:
result = pat.search(line)
if result:
addresses.append(result.group(0))
return addresses
if __name__ == "__main__":
import sys
f = open(sys.argv[1], 'r')
addresses = scan_for_http(f)
for address in addresses:
pass
#print(address)
# EOF
%%bash
kernprof -lv http_search.py sample.html
Explanation: Profiling your code
End of explanation
import timing
@timing.timed
def use_ON(iterable):
result = []
for item in iterable:
result.insert(0, item)
return result
@timing.timed
def use_O1(iterable):
result = []
for item in iterable:
result.append(item)
result.reverse()
return result
@timing.timed
def use_list(iterable):
result = list(iterable)
result.reverse()
return result
def compare_ON_O1(N):
r1 = use_ON(range(N))
r2 = use_O1(range(N))
print use_ON.timed() / use_O1.timed()
def compare_O1_list(N):
r1 = use_list(range(N))
r2 = use_O1(range(N))
print use_list.timed() / use_O1.timed()
for i in [100,1000,10000]:
print "for %s, ON:O1 =" % i,
compare_ON_O1(i)
print "for %s, O1:list =" % i,
compare_O1_list(i)
Explanation: Not covered: memory profiling. See guppy and pympler
Also see: http://pynash.org/2013/03/06/timing-and-profiling.html
Efficiency in language patterns
Global versus local
Staying native
End of explanation
import timing
@timing.timed
def double_extend(N):
x = range(N)
x.extend(x)
return x
@timing.timed
def double_concatenate(N):
x = range(N)
return x+x
for i in [100,1000,10000]:
print "N=%s" % i,
timing.compare(double_extend, double_concatenate, N=i)
Explanation: Pointers versus copies
End of explanation
import timing
@timing.timed
def search_list(N):
x = list(xrange(N))
return N in x
@timing.timed
def search_set(N):
x = set(xrange(N))
return N in x
for j in [10,100,1000]:
for i in [1000,10000,100000]:
print "M=%s, N=%s" % (j, i),
timing.compare(search_list, search_set, N=i, M=j)
N = 10000
x = set(xrange(N))
%timeit N in x
x = list(xrange(N))
%timeit N in x
Explanation: Choosing the right container
End of explanation
%%file looping.py
test some looping constructs
def generator(N):
return sum(i*i for i in xrange(N))
def list_comp(N):
return sum([i*i for i in xrange(N)])
def for_loop(N):
sum = 0
for i in xrange(N):
sum += i*i
return sum
for N in [100,1000,10000,1000000]:
print "N = %s" % N
%timeit generator(N)
%timeit list_comp(N)
%timeit for_loop(N)
# %load looping.py
test some looping constructs
def generator(N):
return sum(i*i for i in xrange(N))
def list_comp(N):
return sum([i*i for i in xrange(N)])
def for_loop(N):
sum = 0
for i in xrange(N):
sum += i*i
return sum
for N in [100,1000,10000,1000000]:
print "N = %s" % N
%timeit generator(N)
%timeit list_comp(N)
%timeit for_loop(N)
Explanation: Looping constructs
EXERCISE: Write test code that calculates the sum of all squares of the numbers from zero to one million. Use a for loop that directly adds with +=, a list comprehension, and also generator comprehension. Try it with range and xrange. Use different numbers, e.g. smaller and larger than one million. How do they compare?
End of explanation
import numpy as np
a = np.array([1,2,3,4])
b = np.array([5,6,7,8])
print a+b
print a*b
print a**2 - 2*b + 1
print np.sin(a)
print np.max(a)
print np.hstack((a,b))
print np.add(a,b)
print np.add.accumulate(a)
c = np.arange(0,8,2)
print c
d = np.empty(c.shape, dtype=int)
for i,j in enumerate(c):
d[i] = j**2
print d[:i+1]
e = np.vstack((a,b))
print e
Explanation: Vector programming: numpy
numpy basics
End of explanation
print e.shape
e.shape = (-1,8)
print e
c = e.reshape((4,2))
print c
d = c.T
print d
b = d.flatten()
print b
c[0,0] = -1
b[-1] = 10
print d
Explanation: Pointers versus copies
End of explanation
d[0,:2] = [11,13]
c[2:,0] = [15,17]
print d
b = d[0,::2]
print b
np.add(b,-10,b)
print b
print d
x = np.arange(1e8)
%timeit x.T
%timeit y = x[1::2]
%timeit np.add(c,d.T)
x = np.linspace(0,16,8, dtype=int)
print x
print np.where(x >= 10)
print x[np.where(x >= 10)]
print x[x >= 10]
x[x % 2 != 0] = x[x % 2 != 0] - 1
print x
Explanation: slice versus where
End of explanation
def _sinc(x):
if x == 0.0:
return 1.0
return np.sin(x)/x
sinc = np.vectorize(_sinc) # could use as a decorator
print sinc(d)
%timeit map(_sinc, x)
%timeit sinc(x)
%timeit np.sinc(x)
Explanation: Vector programming versus looping
End of explanation
x = range(11)
y = range(-5,6)
def add(x,y):
return x+y
def squared(x):
return x*x
print [squared(i) for i in (add(j,k) for j,k in zip(x,y))]
print map(squared, map(add, x, y))
%timeit [squared(i) for i in (add(j,k) for j,k in zip(x,y))]
%timeit map(squared, map(add, x, y))
from multiprocessing.dummy import Pool
tmap = Pool().map
%timeit map(squared, range(10))
%timeit tmap(squared, range(10))
def sleepy_squared(x):
import time
time.sleep(.1)
return x*x
%timeit map(sleepy_squared, range(10))
%timeit tmap(sleepy_squared, range(10))
Explanation: EXERCISE: Profile the functions in roll.py, strategy.py, and trials.py. Where are the hot spots? Can you make any significant improvements? Try converting to vectorized versions of the code in those three files where it seems appropriate. Can you make any gains? (Don't touch the code in optimize.py for now.) Note that some functions don't vectorize well, so make sure to retain the original verions of the code -- especially since some other techniques for speeding up code may prefer non-vector versions.
See: 'solution'
Programming efficency: testing and error handling
Functional programming as a gateway to parallel
End of explanation
%%file test_squared_map.py
sanity check for our parallel maps
from multiprocessing.dummy import Pool
tmap = Pool().map
x = range(11)
def squared(x):
return x*x
def test_squared():
assert map(squared, x) == tmap(squared, x)
# EOF
!nosetests test_squared_map.py
Explanation: Not covered: nose and saving your validation code
End of explanation
def bad(x):
import sleep
sleep(x)
return x
print map(bad, range(2))
def less_bad(x):
try:
import sleep
except ImportError:
return None
sleep(x)
return x
map(less_bad, range(2))
Explanation: See: https://nose.readthedocs.org/en/latest/
Errors and error handling
End of explanation
%%bash
pyflakes-2.7 test_squared_map.py
pep8-2.7 test_squared_map.py
Explanation: Sanity checking with pyflakes and pep8
End of explanation
%%file state.py
some good state utilities
def check_pickle(x):
"checks the pickle across a subprocess"
import pickle
import subprocess
fail = True
try:
_x = pickle.dumps(x)
fail = False
finally:
if fail:
print "DUMP FAILED"
msg = "python -c import pickle; print pickle.loads(%s)" % repr(_x)
print "SUCCESS" if not subprocess.call(msg.split(None,2)) else "LOAD FAILED"
# EOF
import pickle
print pickle.dumps([1,2,3])
print pickle.dumps(squared)
import state
state.check_pickle([1,2,3])
state.check_pickle(squared)
%%file sleepy.py
'''test file for pickling
'''
def sleepy_squared(x):
import time
time.sleep(.1)
return x*x
import sleepy
state.check_pickle(sleepy.sleepy_squared)
Explanation: Serialization and code encapsulation
End of explanation |
12,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python and Natural Language Technologies
Lecture 04, Week 04
February 28, 2018
List comprehension
transform any iterable into a list in one line
syntactic sugar
example
Step1: one-liner equivalent
Step2: The general form of list comprehension is
~~~
[<expression> for <element> in <sequence>]
~~~
conditional expressions can be added to filter the sequence
Step3: which is equivalent to
Step4: since this expression implements a filtering mechanism, there is no else clause
an if-else clause can be used as the first expression though
Step5: More than one sequence may be traversed. Is this depth-first or breadth-first traversal?
Step6: List comprehensions may be nested by replacing the first expression with another list comprehension
Step7: What is the type of a (list) comprehension?
Step8: Generator expressions
Generator expressions are a generalization of list comprehension. They were introduced in PEP 289 in 2002.
Check out the memory consumption of these cells.
Step9: Generators do not generate a list in memory
Step10: therefore they can only be traversed once
Step11: the generator is empty after the first run
Step12: calling next() raises a StopIteration exception
Step13: these are actually the defining properties of the iteration protocol
Iteration protocol
A class satisfies the iteration protocol if
Step14: Set and dict comprehension
Sets and dictionaries can be instantiated via generator expressions too.
A generator expression between curly brackets instantiates a set
Step15: if the expression in the generator is a key-value pair separated by a colon, it instantiates a dictionary
Step16: yield keyword
if a function uses yield instead of return, it becomes a generator function
yield temporarily gives back the execution to the caller
the generator function continues
Step17: this function returns a generator object
Step18: The next function returns the next element of the generator.
A StopIteration is raised when no more elements are left
Step19: the generator function returns a new generator object every time it's called
Step20: iterators can only be traversed forward, but we can easily wrap an iterator to have memory
Step21: Q. Add a memory_size parameter to the previous function which specifies how many of the previous elements are stored.
You can yield them in a list or better, wrap it in a class.
Exercises
Generator expressions can be particularly useful for formatted output. We will demonstrate this through a few examples.
Step22: ~~~
The shopping list is
Step23: Q. Print the following shopping list with quantities.
For example
Step24: Q. Print the same format in alphabetical order.
Decreasing order by quantity
Step25: Q. Print the list of students.
Step26: Q. Print one class-per-line and print the size of the class too
Example
Step27: more than one except clauses may be defined
ordered from more specific to least specific
Step28: More than one type of exception can be handled in the same except clause
Step29: except without an Exception type
without specifying a type, except catches everything but all information about the exception is lost
Step30: the empty except must be the last except block since it blocks all others
SyntaxError otherwise
Step31: Base class' except clauses catch derived classes too
Step32: finally
the finally block is guaranteed to run regardless an exception was raised or not
Step33: else
try-except blocks may have an else clause that only runs if no exception was raised
Step34: raise keyword
raise throws/raises an exception
an empty raise in an except
Step35: Defining exceptions
any type that subclasses Exception (BaseException to be exact) can be used as an exception object
Step36: Using exception for trial-and-error is considered Pythonic
Step37: Context managers
there are two types of resources
Step38: we need to manually close the file
what happens when an exception occurs
Step39: the file is never closed, the file descriptor is leaked
a solution would be to use try-except blocks with finally clauses
Step40: Context managers handle this automatically
the with keyword opens a resource
keeps it open until the execution leaves with's scope
releases the resource regardless whether an exception is raised or not
Step41: Defining context managers
any class can be a context manager if it implements
Step42: __exit__ takes 3 extra arguments that describe the exception | Python Code:
l = []
for i in range(10):
l.append(2*i+1)
l
Explanation: Introduction to Python and Natural Language Technologies
Lecture 04, Week 04
February 28, 2018
List comprehension
transform any iterable into a list in one line
syntactic sugar
example: create a list of the first N odd numbers starting from 1
End of explanation
l = [2*i+1 for i in range(10)]
l
Explanation: one-liner equivalent
End of explanation
even = [n*n for n in range(20) if n % 2 == 0]
even
Explanation: The general form of list comprehension is
~~~
[<expression> for <element> in <sequence>]
~~~
conditional expressions can be added to filter the sequence:
~~~
[<expression> for <element> in <sequence> if <condition>]
~~~
End of explanation
even = []
for n in range(20):
if n % 2 == 0:
even.append(n)
even
Explanation: which is equivalent to
End of explanation
l = [1, 0, -2, 3, -1, -5, 0]
signum_l = [int(n / abs(n)) if n != 0 else 0 for n in l]
signum_l
n = -3.2
int(n / abs(n)) if n != 0 else 0
Explanation: since this expression implements a filtering mechanism, there is no else clause
an if-else clause can be used as the first expression though:
End of explanation
l1 = [1, 2, 3]
l2 = [4, 5, 6]
[(i, j) for i in l1 for j in l2]
[(i, j) for j in l2 for i in l1]
Explanation: More than one sequence may be traversed. Is this depth-first or breadth-first traversal?
End of explanation
matrix = [
[1, 2, 3],
[5, 6, 7]
]
[[e*e for e in row] for row in matrix]
Explanation: List comprehensions may be nested by replacing the first expression with another list comprehension:
End of explanation
i = (i for i in range(10))
type(i)
Explanation: What is the type of a (list) comprehension?
End of explanation
12
N = 8
s = sum([i*2 for i in range(int(10**N))])
print(s)
s = sum(i*2 for i in range(int(10**N)))
print(s)
Explanation: Generator expressions
Generator expressions are a generalization of list comprehension. They were introduced in PEP 289 in 2002.
Check out the memory consumption of these cells.
End of explanation
even_numbers = (2*n for n in range(10))
even_numbers
Explanation: Generators do not generate a list in memory
End of explanation
for num in even_numbers:
print(num)
Explanation: therefore they can only be traversed once
End of explanation
for num in even_numbers:
print(num)
Explanation: the generator is empty after the first run
End of explanation
even_numbers = (2*n for n in range(10))
while True:
try:
print(next(even_numbers))
except StopIteration:
break
# next(even_numbers) # raises StopIteration
Explanation: calling next() raises a StopIteration exception
End of explanation
class MyIterator:
def __init__(self):
self.iter_no = 5
def __iter__(self):
return self
def __next__(self):
if self.iter_no <= 0:
raise StopIteration()
self.iter_no -= 1
print("Returning {}".format(self.iter_no))
return self.iter_no
myiter = MyIterator()
for i in myiter:
print(i)
Explanation: these are actually the defining properties of the iteration protocol
Iteration protocol
A class satisfies the iteration protocol if:
it has a __iter__ function that returns and iterator, which
has a __next__ function (this function is called next in Python 2),
raises a StopIteration after a certain number of iterations
For loops use the iteration protocol.
End of explanation
fruit_list = ["apple", "plum", "apple", "pear"]
fruits = {fruit.title() for fruit in fruit_list}
type(fruits), len(fruits), fruits
Explanation: Set and dict comprehension
Sets and dictionaries can be instantiated via generator expressions too.
A generator expression between curly brackets instantiates a set:
End of explanation
word_list = ["apple", "plum", "pear", "apple", "apple"]
word_length = {word: len(word) for word in word_list}
type(word_length), len(word_length), word_length
word_list = ["apple", "plum", "pear", "avocado"]
first_letters = {word[0]: word for word in word_list}
first_letters
Explanation: if the expression in the generator is a key-value pair separated by a colon, it instantiates a dictionary:
End of explanation
def hungarian_vowels():
alphabet = ("a", "á", "e", "é", "i", "Ã", "o", "ó",
"ö", "Å", "u", "ú", "ÃŒ", "ű")
for vowel in alphabet:
yield vowel
Explanation: yield keyword
if a function uses yield instead of return, it becomes a generator function
yield temporarily gives back the execution to the caller
the generator function continues
End of explanation
type(hungarian_vowels())
for vowel in hungarian_vowels():
print(vowel)
gen = hungarian_vowels()
print("first iteration: {}".format(", ".join(gen)))
print("second iteration: {}".format(", ".join(gen)))
Explanation: this function returns a generator object
End of explanation
gen = hungarian_vowels()
while True:
try:
print("The next element is {}".format(next(gen)))
except StopIteration:
print("No more elements left :(")
break
Explanation: The next function returns the next element of the generator.
A StopIteration is raised when no more elements are left:
End of explanation
gen1 = hungarian_vowels()
gen2 = hungarian_vowels()
print(gen1 is gen2)
print("gen1 first time:", list(gen1))
print("gen1 second time:", list(gen1))
print("gen2 first time:", list(gen2))
Explanation: the generator function returns a new generator object every time it's called
End of explanation
def iter_with_memory(orig_iter):
prev = None
for current in orig_iter:
yield current, prev
prev = current
for i in iter_with_memory(hungarian_vowels()):
print(i)
Explanation: iterators can only be traversed forward, but we can easily wrap an iterator to have memory:
End of explanation
numbers = [1, -2, 3, 1]
# print(", ".join(numbers)) # raises TypeError
print(", ".join(str(number) for number in numbers))
shopping_list = ["apple", "plum", "pear"]
Explanation: Q. Add a memory_size parameter to the previous function which specifies how many of the previous elements are stored.
You can yield them in a list or better, wrap it in a class.
Exercises
Generator expressions can be particularly useful for formatted output. We will demonstrate this through a few examples.
End of explanation
shopping_list = ["apple", "plum", "pear"]
print("The shopping list is:\n{}".format(
"\n".join("item {0}: {1}".format(idx+1, element) for idx, element in enumerate(shopping_list))
))
Explanation: ~~~
The shopping list is:
item 1: apple
item 2: plum
item 3: pear
~~~
End of explanation
shopping_list = {
"apple": 2,
"pear": 1,
"plum": 5,
}
print("\n".join(
"item {0}: {1}, quantity: {2}".format( idx+1, item, quantity)
for idx, (item, quantity) in enumerate(shopping_list.items())
))
Explanation: Q. Print the following shopping list with quantities.
For example:
~~~
item 1: apple, quantity: 2
item 2: pear, quantity: 1
~~~
End of explanation
shopping_list = {
"apple": 2,
"pear": 1,
"plum": 5,
}
print("\n".join("item {0}: {1}, quantity: {2}".format(idx+1, item, quantity)
for idx, (item, quantity) in sorted(enumerate(shopping_list.items()))
))
print("\n".join(
"item {0}: {1}, quantity: {2}".format(idx+1, item, quantity) for idx, (item, quantity) in
enumerate(sorted(shopping_list.items(), key=lambda x: -x[1]))))
Explanation: Q. Print the same format in alphabetical order.
Decreasing order by quantity
End of explanation
students = [
["Joe", "John", "Mary"],
["Tina", "Tony", "Jeff", "Béla"],
["Pete", "Dave"],
]
Explanation: Q. Print the list of students.
End of explanation
try:
int("abc")
except ValueError as e:
print(type(e), e)
print(e)
Explanation: Q. Print one class-per-line and print the size of the class too
Example:
~~~
class 1, size: 3, students: Joe, John, Mary
class 2, size: 2, students: Pete, Dave
~~~
Q. Sort the classes by size in increasing order
Example:
~~~
class 1, size: 2, students: Pete, Dave
class 2, size: 3, students: Joe, John, Mary
~~~
Exception handling
fully typed exception handling
End of explanation
try:
age = int(input())
if age < 0:
raise Exception("Age cannot be negative")
except ValueError as e:
print("ValueError caught")
except Exception as e:
print("Other exception caught: {}".format(type(e)))
Explanation: more than one except clauses may be defined
ordered from more specific to least specific
End of explanation
def age_printer(age):
next_age = age + 1
print("Next year your age will be " + next_age)
try:
your_age = input()
your_age = int(your_age)
age_printer(your_age)
except ValueError:
print("ValueError caught")
except TypeError:
print("TypeError caught")
def age_printer(age):
next_age = age + 1
print("Next year your age will be " + next_age)
try:
your_age = input()
your_age = int(your_age)
age_printer(your_age)
except (ValueError, TypeError) as e:
print("{} caught".format(type(e).__name__))
Explanation: More than one type of exception can be handled in the same except clause
End of explanation
try:
age = int(input())
if age < 0:
raise Exception("Age cannot be negative")
except ValueError:
print("ValueError caught")
except:
#except Exception as e:
print("Something else caught")
Explanation: except without an Exception type
without specifying a type, except catches everything but all information about the exception is lost
End of explanation
try:
age = int(input())
if age < 0:
raise Exception("Age cannot be negative")
#except:
#print("Something else caught")
except ValueError:
print("ValueError caught")
Explanation: the empty except must be the last except block since it blocks all others
SyntaxError otherwise
End of explanation
try:
age = int(input())
if age < 0:
raise Exception("Age cannot be negative")
except Exception as e:
print("Exception caught: {}".format(type(e)))
except ValueError:
print("ValueError caught")
Explanation: Base class' except clauses catch derived classes too
End of explanation
try:
age = int(input())
except Exception as e:
print(type(e), e)
finally:
print("this always runs")
Explanation: finally
the finally block is guaranteed to run regardless an exception was raised or not
End of explanation
try:
age = int(input())
except ValueError as e:
print("Exception", e)
else:
print("No exception was raised")
# raise Exception("Raising an exception in else")
finally:
print("this always runs")
Explanation: else
try-except blocks may have an else clause that only runs if no exception was raised
End of explanation
try:
int("not a number")
except Exception:
# important log message
# raise
pass
Explanation: raise keyword
raise throws/raises an exception
an empty raise in an except
End of explanation
class NegativeAgeError(Exception):
pass
try:
age = int(input())
if age < 0:
raise NegativeAgeError("Age cannot be negative. Invalid age: {}".format(age))
except NegativeAgeError as e:
print(e)
except Exception as e:
print("Something else happened. Caught {}, with message {}".format(type(e), e))
Explanation: Defining exceptions
any type that subclasses Exception (BaseException to be exact) can be used as an exception object
End of explanation
try:
v = input()
int(v)
except ValueError:
print("not an int")
else:
print("looks like an int")
Explanation: Using exception for trial-and-error is considered Pythonic:
End of explanation
fh = []
while True:
try:
fh.append(open("abc.txt", "w"))
except OSError:
break
len(fh)
for f in fh:
f.close()
Explanation: Context managers
there are two types of resources: managed and unmanaged
Managed resources
resource acquisition and release are automatically done
no need for manual resource management
example: memory
C++ has both managed and unmanaged memory management. The stack is managed, but the heap is not, we need to manually call new and delete.
Unmanaged resources
unmanaged resources need explicit release
otherwise the operating system may run out of the resource
examples include files, network sockets
End of explanation
s1 = "important text"
fh = open("file.txt", "w")
# fh.write(s2) # raises NameError
fh.close()
Explanation: we need to manually close the file
what happens when an exception occurs
End of explanation
from sys import stderr
fh = open("file.txt", "w")
try:
fh.write(important_variable)
except Exception as e:
stderr.write("{0} happened".format(type(e).__name__))
finally:
print("Closing file")
fh.close()
Explanation: the file is never closed, the file descriptor is leaked
a solution would be to use try-except blocks with finally clauses
End of explanation
with open("file.txt", "w") as fh:
fh.write("abc\n")
# fh.write(important_variable) # raises NameError
Explanation: Context managers handle this automatically
the with keyword opens a resource
keeps it open until the execution leaves with's scope
releases the resource regardless whether an exception is raised or not
End of explanation
class DummyContextManager:
def __init__(self, value):
self.value = value
def __enter__(self):
print("Dummy resource acquired")
return self.value
def __exit__(self, *args):
print("Dummy resource released")
with DummyContextManager(42) as d:
print("Resource: {}".format(d))
Explanation: Defining context managers
any class can be a context manager if it implements:
__enter__: runs at the beginning of the with. Returns the resource.
__exit__: runs after the with block. Releases the resource.
End of explanation
class DummyContextManager:
def __init__(self, value):
self.value = value
def __enter__(self):
print("Dummy resource acquired")
return self.value
def __exit__(self, exc_type, exc_value, traceback):
if exc_type is not None:
print("{0} with value {1} caught\n"
"Traceback: {2}".format(
exc_type, exc_value, traceback))
print("Dummy resource released")
with DummyContextManager(42) as d:
print(d)
# raise ValueError("just because I can") # __exit__ will be called anyway
Explanation: __exit__ takes 3 extra arguments that describe the exception: exc_type, exc_value, traceback
End of explanation |
12,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network analysis of the data.
The goal of this notebook is to uncover several constructions inside the dataset which may help us to uncover fraud. With that, we can see whether we can create features related to connections inside the network.
In this network analysis, we see if we can use the connections an account has in order to predict behavior. We will build a graph with the following properties
Step1: Load data.
Step2: Creata a unique bank account (bank + account)
Step3: Build the graph.
We initiate the graph, add the nodes from the internal account id and the non-zero external accounts.
Step4: Add non-empty edges.
Step5: Look at the largest connected components.
A connected component is a cluster of nodes who are connected by an edge. This can uncover certain structures of possible unwanted behavor.
Step6: Look at it in a directed graph. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
Explanation: Network analysis of the data.
The goal of this notebook is to uncover several constructions inside the dataset which may help us to uncover fraud. With that, we can see whether we can create features related to connections inside the network.
In this network analysis, we see if we can use the connections an account has in order to predict behavior. We will build a graph with the following properties:
- Every account is a node (internal and external)
- Every transaction is an edge.
Once we have this, we can see if we can use different graph properties to see whether or not an account is suspicious of unwanted behaviour.
End of explanation
client_info = pd.read_csv('data/client_info.csv')
demographic_info = pd.read_csv('data/demographic_data.csv')
transaction_info = pd.read_csv('data/transction_info.csv')
order_info = pd.read_csv('data/order_info.csv')
Explanation: Load data.
End of explanation
transaction_info['bank_account'] = transaction_info['bank'] + transaction_info['account'].map(str)
Explanation: Creata a unique bank account (bank + account)
End of explanation
G = nx.Graph()
G.add_nodes_from(transaction_info['account_id'].unique().tolist())
G.add_nodes_from(transaction_info['bank_account'][transaction_info['bank_account'].notnull()].unique())
Explanation: Build the graph.
We initiate the graph, add the nodes from the internal account id and the non-zero external accounts.
End of explanation
nonEmpty = transaction_info[transaction_info['bank_account'].notnull()]
edges = zip(nonEmpty['account_id'],nonEmpty['bank_account'], nonEmpty['amount'])
G.add_weighted_edges_from(edges)
Explanation: Add non-empty edges.
End of explanation
giant = max(nx.connected_component_subgraphs(G), key=len)
nx.draw_circular(giant)
Explanation: Look at the largest connected components.
A connected component is a cluster of nodes who are connected by an edge. This can uncover certain structures of possible unwanted behavor.
End of explanation
Gdi = nx.DiGraph()
Gdi.add_nodes_from(giant.nodes())
Gdi.add_edges_from(giant.edges())
nx.draw_circular(Gdi)
Explanation: Look at it in a directed graph.
End of explanation |
12,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cropping2D
[convolutional.Cropping2D.0] cropping ((1,1),(1,1)) on 3x5x4 input, data_format='channels_last'
Step1: [convolutional.Cropping2D.1] cropping ((1,1),(1,1)) on 3x5x4 input, data_format='channels_first'
Step2: [convolutional.Cropping2D.2] cropping ((4,2),(3,1)) on 8x7x6 input, data_format='channels_last'
Step3: [convolutional.Cropping2D.3] cropping ((4,2),(3,1)) on 8x7x6 input, data_format='channels_first'
Step4: [convolutional.Cropping2D.4] cropping (2,3) on 8x7x6 input, data_format='channels_last'
Step5: [convolutional.Cropping2D.5] cropping 4 on 8x7x6 input, data_format='channels_last'
Step6: export for Keras.js tests | Python Code:
data_in_shape = (3, 5, 4)
L = Cropping2D(cropping=((1,1),(1, 1)), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(250)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Cropping2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: Cropping2D
[convolutional.Cropping2D.0] cropping ((1,1),(1,1)) on 3x5x4 input, data_format='channels_last'
End of explanation
data_in_shape = (3, 5, 4)
L = Cropping2D(cropping=((1,1),(1,1)), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(251)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Cropping2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Cropping2D.1] cropping ((1,1),(1,1)) on 3x5x4 input, data_format='channels_first'
End of explanation
data_in_shape = (8, 7, 6)
L = Cropping2D(cropping=((4,2),(3,1)), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(252)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Cropping2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Cropping2D.2] cropping ((4,2),(3,1)) on 8x7x6 input, data_format='channels_last'
End of explanation
data_in_shape = (8, 7, 6)
L = Cropping2D(cropping=((4,2),(3,1)), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(253)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Cropping2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Cropping2D.3] cropping ((4,2),(3,1)) on 8x7x6 input, data_format='channels_first'
End of explanation
data_in_shape = (8, 7, 6)
L = Cropping2D(cropping=(2,3), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(254)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Cropping2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Cropping2D.4] cropping (2,3) on 8x7x6 input, data_format='channels_last'
End of explanation
data_in_shape = (8, 7, 6)
L = Cropping2D(cropping=1, data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(255)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Cropping2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Cropping2D.5] cropping 1 on 8x7x6 input, data_format='channels_last'
End of explanation
import os
filename = '../../../test/data/layers/convolutional/Cropping2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
12,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some Basic Statistics
This module will cover the calculation of some basic statistical parameters using numpy and scipy, starting with a 'by hand' or from textbook formulas and using built-in functions.
By the end of this file you should have seen simple examples of
Step1: Mean, Standard Deviation, and Variance
These all give an initial sense of the distribution of a group of samples - the average value (mean) and how spread out they are (standard deviation and variance).
Keep in mind that the variance is the square of the standard deviation.
Step2: Mean
Step3: Or by using built-in functions
Step4: Confidence Intervals
How sure are we that the measurements we've taken encompass the population mean, rather than just the mean of the sample group (assuming the group is a subset of the population)? Confidence intervals define bounds on the certainty of a reported value.
Starting with a normally distributed group of random samples, we want to state with a known amount of confidence (i.e. 95% confidence) that the reported interval will contain the population mean.
We report
Step5: We can also use the built in function for determining the 95% confidence interval
Step6: One-Way Analysis of Variance (ANOVA)
How could we determine multiple groups of samples are from the same population or from different populations? One-way analysis assumes a single factor (independent variable) affects the mean value of the group.
Keep in mind that the samples should be independent and interval or ratio data (i.e. not categorical).
Two groups
Step7: Equal sample sizes, equal variances
$t = \frac{\bar{X}_1 - \bar{X}_2}{s_{pool}\sqrt{\frac{2}{n}}}$
where
Step8: Calculate the Student's t- and p-values
Step9: The p-value between groups 1 and 2 is greater than our value of 0.05, so we cannot reject the null hypothesis, and assume these are from the same population.
Step10: The p-value between groups 1 and 2 is still greater than our value of 0.05, so we cannot reject the null hypothesis, and assume these are from the same population.
Step11: The p-value between groups 1 and 3 is still greater than our value of 0.05, but is much closer. We can't strictly reject the null hypothesis, but it's worth a closer look. Do we really have enough samples to draw conclusions?
Or use built-in functions
Step12: If the p-value is smaller than some threshold (i.e. 0.01 or 0.05, etc.) then the null hypothesis can be rejected. The two groups of samples have different means! Note that the first and second tests converge as sampling approaches infinity.
The goal of the Student's t-test is to determine if two groups of samples are from the same population or different populations. This comes up frequently when we want to determine if the data we've collected somehow differs from another data set (i.e. we've observed something change (before/after populations), or observe something different from what someone else claims).
To do this, first calculate a t-value, and use this t-value (to sample the t-distribution) to determine a measure of how similar the two groups of samples are. This is known as a p-value.
The p-value represents the probability that the difference between the groups of samples is observed purely by chance. A p-value below some threshold (i.e. 0.05) means there is a significant difference between the groups of samples.
Increasing t-values (increasingly different groups) lead to p-value decreases (decreasing chances that the samples are from the same distribution).
Often, this is described in terms of the null hypothesis, or that there is 'null difference' between the two groups of samples. In other words, can the null hypothesis (there is no difference between the two populations) be rejected? The goal is to determine if any difference is due to sampling, experimental, etc. error or if the means really are different.
This is intended for normally distributed, continuous data.
Fun fact
Step13: Here, we use
Step14: Coefficient of Determination ($R^2$)
The coefficient of determination is a measure of how closely one group of samples (i.e. measured) follows another (i.e. a model). This is accomplished via the proportion of total variation of outcomes explained by the model.
Step15: Compute by hand
Step16: Pearson's correlation coefficient
For two sets of data, how correlated are the two, on a scale of -1 to 1?
A p-value for the Pearson's correlation coefficient can also be determined, indicating the probability of an uncorrelated system producing data that has a Pearson correlation at least as extreme (as with everything, it's not reliable for small groups of samples).
Keep in mind that the correlation coefficient is defined for data sets of the same size.
Step17: Again we can do this by hand, noting that $\rho$ is different from $p$
Step18: We have a very high Pearson's correlation coefficient (highly correlated) with a low p-value (we can reject the null hypothesis that the correlation is due purely to chance). However, we don't have many samples, so it's not a robust conclusion.
Or use built-in functions
Step19: Distributions
Distribution functions can be thought of as the probability of measuring a particular sample value. To get a better picture of how frequently a type of randomly distributed variable should be measured in theory, we can use the analytical distribution function.
For example, perhaps the most well known distribution is the Gaussian, Normal, or Bell-Curve distribution. This is determined from a set of gaussian distributed random numbers. We can generate a lot of these numbers and plot the frequency of each number within a set of 'bins'
Step20: The probability distribution function (PDF) is a function that represents the probability of obtaining a particular value for a population that follows that particular distribution.
Using a conversion factor, it's clear that the two overlap
Step21: A common use of distributions is to determine the chance of measuring a value of at least some amount. Instead of looking at the probability of obtaining exactly a value, we can ask
Step22: For those that are wondering, the CDF is actually less than the PDF at a certain point because both are scaled, but in different ways - the CDF is scaled such that its maximum value is one, while the PDF is scaled such that the area beneath it is one. For more information, look into the integral of the PDF.
We use the CDF to determine the percent chance of obtaining a value up to some amount, i.e. what is the percent chance of getting a value of at most 2?
Step23: PDF of Student's t-distribution
This doesn't seem that interesting until we consider the distributions used in the ANOVA analysis above. Here we use the Student's t-distribution, which is extremely similar to the normal distribution except that it incorporates the fact that we often use a subset of the population (and thus degrees of freedom = n-1)
Step24: Let's say our t-value is
Step25: The two-tailed test (i.e. including both sides) is simply 2x the value of the area of one of the sides
Step26: Confidence Intervals from a Distribution Perspective
This is where the critical probability (t-value) for the confidence interval comes from. Working backwards, we're interested in the value of the probability distribution function that, when sampled, encompasses $\frac{1}{2}$ of the confidence interval (i.e. 95%) area under the function.
To do so, 1/2 of the confidence interval on each side of the distribution is removed, and the corresponding t-value is determined. | Python Code:
from numpy.random import normal,rand
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
%matplotlib inline
Explanation: Some Basic Statistics
This module will cover the calculation of some basic statistical parameters using numpy and scipy, starting with a 'by hand' or from textbook formulas and using built-in functions.
By the end of this file you should have seen simple examples of:
1. Mean, standard deviation, and variance
2. Confidence intervals
3. One-way analysis of variance (ANOVA)
4. Student's t-test
5. F-test
6. Coefficient of determination
7. Pearson's correlation coefficient
8. Probability Distribution Functions (PDFs)
Further Reading:
https://docs.scipy.org/doc/scipy/reference/stats.html
https://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
https://github.com/scipy/scipy/blob/master/scipy/stats/stats.py
http://www.itl.nist.gov/div898/handbook/eda/section3/eda3672.htm
http://www.physics.csbsju.edu/stats/t-test.html
https://onlinecourses.science.psu.edu/stat501/node/255
http://originlab.com/doc/Origin-Help/ANOVA-CRD
http://hamelg.blogspot.com/2015/11/python-for-data-analysis-part-22.html
End of explanation
# Generate some continuous data:
nums = normal(2, 3, 1000) # function of mean (mu), std (sigma), and size (n)
Explanation: Mean, Standard Deviation, and Variance
These all give an initial sense of the distribution of a group of samples - the average value (mean) and how spread out they are (standard deviation and variance).
Keep in mind that the variance is the square of the standard deviation.
End of explanation
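# Quick numeric check of the statement above (added illustration, not part of
# the original notebook): the variance equals the square of the standard deviation.
print(np.isclose(np.var(nums), np.std(nums)**2))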
mean = (1.0/len(nums))*np.sum(nums)  # float literal avoids integer division under Python 2
print('The mean is: %g' % mean)
stdev = np.sqrt(1.0/len(nums) * np.sum((nums - mean)**2))
print('The standard deviation (all samples, or the population) is: %g' % stdev)
stdev = np.sqrt(1.0/(len(nums)-1) * np.sum((nums - mean)**2))
print('The unbiased standard deviation (a group of samples, or a subset of the population) is: %g' % stdev)
var = (1.0/len(nums)) * np.sum((nums - mean)**2)
print('The variance is: %g' % var)
Explanation: Mean:
$\mu = \frac{1}{n}\sum_{i=1}^{n} x_i$
Standard deviation
Root of the average squared deviation from the mean (entire population):
$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (x_i-\mu)^2}$
Root of the average squared deviation from the mean (subset of population):
$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (x_i-\mu)^2}$
Variance
Average squared deviation from mean:
$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i-\mu)^2$
We can do these calculations manually:
End of explanation
mean = np.mean(nums)
print('The mean is: %g' % mean)
stdev = np.std(nums)
print('The standard deviation (all samples, or the population) is: %g' \
% stdev)
stdev = np.std(nums, ddof=1)
print('The standard deviation (a group of samples, or a subset of the population) is: %g' % stdev)
var = np.var(nums)
print('The variance is: %g' % var)
Explanation: Or by using built-in functions:
End of explanation
# Start with normally distributed group of samples
grp1 = normal(100, 5, 10000)
# Compute the standard error of the mean
grp1_avg = np.mean(grp1)
grp1_std = np.std(grp1, ddof=1)
standard_err = grp1_std/np.sqrt(np.size(grp1))
# Determine the critical probability that corresponds to 1/2 of the
# 95% confidence interval (see Distributions)
conf_int = 0.95
dof = len(grp1)-1 # We use the degrees of freedom of n-1 because it's
# a sample of the population
T_val = stats.t.ppf(1-(1-conf_int)/2, dof) # Use the percent point
# function (inverse of the
# CDF, more on this later)
# The average value, reported with 95% confidence is:
conf_int = standard_err*T_val
lower_int = grp1_avg - conf_int
upper_int = grp1_avg + conf_int
print("The value is {0:.3g} ± {1:.3g} (95% confidence interval)" \
.format(grp1_avg, conf_int))
print("or a range of {0:.6g} to {1:.6g}".format(lower_int, upper_int))
Explanation: Confidence Intervals
How sure are we that the measurements we've taken encompass the population mean, rather than just the mean of the sample group (assuming the group is a subset of the population)? Confidence intervals define bounds on the certainty of a reported value.
Starting with a normally distributed group of random samples, we want to state with a known amount of confidence (i.e. 95% confidence) that the reported interval will contain the population mean.
We report:
$\mu \pm \sigma_m T$
where:
$\mu$ is the mean value
$T$ is the critical probability (t-value)
$\sigma_m = \frac{\sigma}{\sqrt{n}}$ is the standard error of the mean
$\sigma$ is the standard deviation
$n$ is the number of samples
Notes:
- Technically, it is not correct to state that the mean has a 95% chance of being within the confidence interval (using 95% confidence as an example). The mean is a number, not a probability.
**A confidence interval of 95% means that the confidence interval, if repeated with many different groups of samples, would encompass the population mean 95% of the time.** This is a slight distinction: it's not that there is a 95% chance that the value is within that particular confidence interval - it's a statement that the confidence interval, if repeated, would trend towards encompassing the population mean 95% of the time.
Assuming the group of samples is a subset and not the entire population, critical probability (t-value) is determined from the t-distribution instead of the normal distribution. This is more accurate for lower sampling because it takes into account the degrees of freedom. Keep in mind the t- and the normal distributions converge for large sampling.
For more information about determining the critical probability (t-value) from a percentage (i.e. 95%), see the 'Confidence Intervals from a Distribution Perspective' section near the end of this notebook.
End of explanation
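# Added sketch (not in the original notebook): the t-based critical value used
# above approaches the normal-distribution value (~1.96 for 95%) as n grows.
for n in (5, 30, 10000):
    print("n=%d t-crit=%g z-crit=%g" % (n, stats.t.ppf(0.975, n - 1), stats.norm.ppf(0.975)))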
dof = len(grp1)-1
mean = np.mean(grp1)
std_err = stats.sem(grp1) #sem = standard error (of the) mean
lower_int, upper_int = stats.t.interval(\
0.95, dof, loc=mean, scale=std_err)
print("A range of {0:.6g} to {1:.6g}".format(lower_int, upper_int))
Explanation: We can also use the built in function for determining the 95% confidence interval:
End of explanation
from scipy.stats import ttest_ind, ttest_rel
# Three groups of data, but one of these is not like the others.
grp1 = normal(45, 23, 5)
grp2 = normal(45, 23, 5)
grp3 = normal(10, 12, 5)
Explanation: One-Way Analysis of Variance (ANOVA)
How could we determine multiple groups of samples are from the same population or from different populations? One-way analysis assumes a single factor (independent variable) affects the mean value of the group.
Keep in mind that the samples should be independent and interval or ratio data (i.e. not categorical).
Two groups: Student's T-test
The goal of the Student's t-test is to determine if two groups of samples are from the same population or different populations. This comes up frequently when we want to determine if the data we've collected somehow differs from another data set (i.e. we've observed something change (before/after populations), or observe something different from what someone else claims).
To do this, first calculate a t-value, and use this t-value (to sample the t-distribution) to determine a measure of how similar the two groups of samples are. This is known as a p-value.
The p-value represents the probability that the difference between the groups of samples is observed purely by chance. A p-value below some threshold (i.e. 0.05) means there is a significant difference between the groups of samples.
Increasing t-values (increasingly different groups) lead to p-value decreases (decreasing chances that the samples are from the same distribution).
Often, this is described in terms of the null hypothesis, or that there is 'null difference' between the two groups of samples. In other words, can the null hypothesis (there is no difference between the two populations) be rejected? The goal is to determine if any difference is due to sampling, experimental, etc. error or if the means really are different.
This is intended for normally distributed, continuous data.
Fun fact: the 'student' is actually William S. Gossett, a brewmaster who worked at the Guinness brewery.
End of explanation
# Get some initial info about the three groups
grp1_siz = float(grp1.size)
grp1_dof = grp1_siz - 1
grp1_avg = np.sum(grp1)/grp1_siz
grp1_var = 1/(grp1_dof)* np.sum((grp1 - grp1_avg)**2)
grp2_siz = float(grp2.size)
grp2_dof = grp2_siz - 1
grp2_avg = np.sum(grp2)/grp2_siz
grp2_var = 1/(grp2_dof)* np.sum((grp2 - grp2_avg)**2)
grp3_siz = float(np.size(grp3))
grp3_avg = np.sum(grp3)/grp3_siz
grp3_dof = grp3_siz - 1
grp3_var = 1/(grp3_dof)* np.sum((grp3 - grp3_avg)**2)
Explanation: Equal sample sizes, equal variances
$t = \frac{\bar{X}_1 - \bar{X}_2}{s_{pool}\sqrt{\frac{2}{n}}}$
where:
$s_{pool}$ = pooled variance
$s_{pool} = \sqrt{\frac{s_1^2 + s_2^2}{2}}$
$n$ is the number of samples
$\bar{X}$ is the expectation value (if equal weights, average)
$s_1^2 = \frac{1}{n-1} \sum^{n}_{1} (x_i-\bar{X})^2$
Equal or unequal sample sizes, equal variances
$t = \frac{\bar{X}_1 - \bar{X}_2}{s_{pool}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$
where:
$s_{pool} = \sqrt{\frac{(n_1 -1) s_1^2 + (n_2 -1) s_2^2}{n_1 + n_2 - 2}}$
Equal or unequal sample sizes, unequal variances
$t = \frac{\bar{X}_1 - \bar{X}_2}{s_\delta}$
where:
$s_\delta = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$
The p-values can be calculated by integrating the Student's t-distribution cumulative distribution function (CDF) directly - more information is provided in the Distributions section. Alternatively, tables of precalculated CDF values (Z-tables) may be used when calculating the CDF isn't practical, but they aren't discussed here. Below, the survival function (stats.t(degFreedom).sf, or '1 - CDF') is used to sample the t-distribution.
End of explanation
# Equal sample size, assumed equal variance:
pooled_var = np.sqrt( (grp1_var + grp2_var)/2 )
t = (grp1_avg - grp2_avg)/(pooled_var*np.sqrt(2/grp1_siz))
# Calculate p-value:
degFreedom = (grp1_var/grp1_siz + grp2_var/grp2_siz)**2/ \
((grp1_var/grp1_siz)**2/grp1_dof + (grp2_var/grp2_siz)**2/grp2_dof)
p = 2*stats.t(degFreedom).sf(np.abs(t)) # we want 2x the area under the curve,
# from neg infinity to the neg t value
print(" t = {0:g} p = {1:g}".format(t, p))
Explanation: Calculate the Student's t- and p-values:
End of explanation
# Equal or unequal sample size, assumed equal variance:
pooled_var = np.sqrt( (grp1_dof*grp1_var+grp2_dof*grp2_var)/ \
(grp1_siz+grp2_siz-2) )
t = (grp1_avg - grp2_avg)/ \
(pooled_var*np.sqrt(1/grp1_siz + 1/grp2_siz))
# Calculate p-value:
degFreedom = (grp1_var/grp1_siz + grp2_var/grp2_siz)**2/ \
((grp1_var/grp1_siz)**2/grp1_dof + (grp2_var/grp2_siz)**2/grp2_dof)
p = 2*stats.t(degFreedom).sf(np.abs(t))
print(" t = {0:g} p = {1:g}".format(t, p))
Explanation: The p-value between groups 1 and 2 is greater than our value of 0.05, so we cannot reject the null hypothesis, and assume these are from the same population.
End of explanation
# Equal or unequal sample size, assumed unequal variance:
var = np.sqrt( grp1_var/grp1_siz + grp3_var/grp3_siz )
t = (grp1_avg - grp3_avg)/var
# Calculate p-value:
degFreedom = (grp1_var/grp1_siz + grp3_var/grp3_siz)**2/ \
((grp1_var/grp1_siz)**2/grp1_dof + (grp3_var/grp3_siz)**2/grp3_dof)
p = 2*stats.t(degFreedom).sf(np.abs(t))
print(" t = {0:g} p = {1:g}".format(t, p))
Explanation: The p-value between groups 1 and 2 is still greater than our value of 0.05, so we cannot reject the null hypothesis, and assume these are from the same population.
End of explanation
# Equal sample size, assumed equal variance:
t, p = ttest_rel(grp1, grp2)
print("ttest_rel eq_var: t = %g p = %g" % (t, p))
# Equal or unequal sample size, assumed equal variance:
t, p = ttest_ind(grp1, grp2, equal_var=True)
print("ttest_ind eq_var: t = %g p = %g" % (t, p))
# Note that the first and second t-tests converge as sampling
# approaches infinity.
# Equal or unequal sample size, assumed unequal variance:
t, p = ttest_ind(grp1, grp3, equal_var=False)
print("ttest_ind uneq_var: t = %g p = %g" % (t, p))
Explanation: The p-value between groups 1 and 3 is still greater than our value of 0.05, but is much closer. We can't strictly reject the null hypothesis, but it's worth a closer look. Do we really have enough samples to draw conclusions?
Or use built-in functions:
End of explanation
grp1 = normal(45, 23, 500)
grp2 = normal(45, 23, 500)
grp3 = normal(10, 12, 500)
Explanation: If the p-value is smaller than some threshold (i.e. 0.01 or 0.05, etc.) then the null hypothesis can be rejected. The two groups of samples have different means! Note that the first and second tests converge as sampling approaches infinity.
The goal of the Student's t-test is to determine if two groups of samples are from the same population or different populations. This comes up frequently when we want to determine if the data we've collected somehow differs from another data set (i.e. we've observed something change (before/after populations), or observe something different from what someone else claims).
To do this, first calculate a t-value, and use this t-value (to sample the t-distribution) to determine a measure of how similar the two groups of samples are. This is known as a p-value.
The p-value represents the probability that the difference between the groups of samples is observed purely by chance. A p-value below some threshold (i.e. 0.05) means there is a significant difference between the groups of samples.
Increasing t-values (increasingly different groups) lead to p-value decreases (decreasing chances that the samples are from the same distribution).
Often, this is described in terms of the null hypothesis, or that there is 'null difference' between the two groups of samples. In other words, can the null hypothesis (there is no difference between the two populations) be rejected? The goal is to determine if any difference is due to sampling, experimental, etc. error or if the means really are different.
This is intended for normally distributed, continuous data.
Fun fact: the 'student' is actually William S. Gossett, a brewmaster who worked at the Guinness brewery.
>2 Groups: One-way ANOVA F-test statistic
The F-test can be thought of as the generalized form of the t-test for multiple groups of samples. A popular use of the F-test is to determine if one group of samples is from a different population than all other groups (i.e. one vs many), or if all are from the same population. While there are several different F-tests, the focus here is on a test to determine if the means of a given set of normally distributed values are equal.
To do this, first calculate an F-statistic, and use this F-statistic (to sample the F-distribution) to find the chance that all of the groups of samples are from the same population. Like with the t-tests above, we call this sampled value the p-value.
The p-value represents the chance that the difference between the groups of samples is observed purely by chance. A p-value below some threshold (i.e. 0.05) means that there is a significant difference between the groups of samples.
Here, the F-statistic is the ratio of variation between sample means to the variation within the samples. Increasing F-statistics lead to decreasing the p-values (decreasing chances that the samples are from the same distribution).
Often, this is described in terms of the null hypothesis, or the hypothesis that there is null difference between the groups of samples. In other words, can the null hypothesis (all groups are from the same population) be rejected? The goal is to determine if the differencs are due to sampling, experimental, etc. error or if the means really are different.
This is intended for normally distributed, continuous data.
End of explanation
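# Added cross-check (illustrative, using the 500-sample groups defined above):
# for exactly two groups, the one-way ANOVA F-statistic equals the square of
# the pooled (equal-variance) t-statistic.
t_chk, p_chk = ttest_ind(grp1, grp2, equal_var=True)
f_chk, p_fchk = stats.f_oneway(grp1, grp2)
print('t^2 = %g F = %g' % (t_chk**2, f_chk))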
all_grps = [grp1, grp2, grp3] # Use some vectorization to simplify
# code
num_grps = float(len(all_grps))
alldata = np.concatenate(all_grps)
alldata_avg = np.mean(alldata)
alldata_siz = np.size(alldata)
bgv = 0
for a in all_grps:
bgv += (np.size(a) * (np.mean(a)-alldata_avg)**2)/(num_grps-1)
wgv = 0
for a in all_grps:
for i in a:
wgv += (i - np.mean(a))**2/(alldata_siz - num_grps)
f_stat = bgv/wgv
prob = stats.f(num_grps-1, alldata_siz-num_grps).sf(np.abs(f_stat))
print('F-statistic is %g p is %g' % (f_stat, prob))
f, p = stats.f_oneway(grp1,grp2,grp3)
print('F-statistic is %g p is %g' % (f, p))
Explanation: Here, we use:
$F_{stat} = \frac{\text{between set variability}}{\text{within set variability}}$
where:
between set variability = $\sum^{K}_{i=1} \frac{n_i(\bar{X_i} -\bar{X})^2}{K - 1}$
within set variability = $\sum^{K}_{i=1} \sum^{n_i}_{j=1} \frac{(X_{ij} -\bar{X_i})^2}{N - K}$
and:
$\bar{X}$ is the mean of all data
$\bar{X_i}$ is the mean of set $i$
$K$ is the number of sets
$N$ is the overall sample size
End of explanation
from scipy.optimize import curve_fit
# Create arbitrary function
x_vals = np.arange(0, 100)
y_vals = x_vals**2 + normal(0, 3000, np.size(x_vals))
# Fit and create fit line
def func(x_vals, B, C):
return x_vals**B + C
opt, cov = curve_fit(func, x_vals, y_vals)
x_fitted = np.linspace(0, max(x_vals), 100)
y_fitted = func(x_fitted, *opt)
# Show fit
plt.scatter(x_vals, y_vals)
plt.plot(x_fitted, y_fitted, color='red')
plt.show()
Explanation: Coefficient of Determination ($R^2$)
The coefficient of determination is a measure of how closely one group of samples (i.e. measured) follows another (i.e. a model). This is accomplished via the proportion of total variation of outcomes explained by the model.
End of explanation
y_avg = np.mean(y_vals)
y_fit = func(x_vals, *opt)
SSregr = np.sum( (y_fit - y_avg )**2 )
SSerror = np.sum( (y_vals - y_fit )**2 )
SStotal = np.sum( (y_vals - y_avg )**2 )
Rsq = SSregr/SStotal
print('R squared is: %g' % Rsq)
Explanation: Compute by hand:
$R^2 = \frac{SS_{regr}}{SS_{total}}$
where:
$SS$ = "sum of squares"
$regr$ = "regression"
$SS_{regr} = \sum^{n}_{1} (\hat{x_i}-\bar{X})^2$
$SS_{total} = \sum^{n}_{1} (x_i-\bar{X})^2$
and:
$\hat{x_i}$ is the fitted value
$\bar{X}$ is the average value of X
$x_i$ is the measured value of x
End of explanation
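# Optional cross-check (assumes scikit-learn is installed; it is not used in the
# original notebook). Note sklearn's r2_score computes 1 - SSerror/SStotal, which
# matches SSregr/SStotal only for ordinary least-squares fits, so a small
# difference is expected for this nonlinear fit.
from sklearn.metrics import r2_score
print('R squared via sklearn: %g' % r2_score(y_vals, func(x_vals, *opt)))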
from scipy.stats import pearsonr
grp1 = list(range(0, 5))
grp2 = grp1+normal(0, 1, np.size(grp1))
plt.scatter(grp1, grp1)
plt.scatter(grp1, grp2)
plt.show()
Explanation: Pearson's correlation coefficient
For two sets of data, how correlated are the two, on a scale of -1 to 1?
A p-value for the Pearson's correlation coefficient can also be determined, indicating the probability of an uncorrelated system producing data that has a Pearson correlation at least as extreme (as with everything, it's not reliable for small groups of samples).
Keep in mind that the correlation coefficient is defined for data sets of the same size.
End of explanation
grp1_siz = float(np.size(grp1))
grp1_avg = np.sum(grp1)/grp1_siz
grp1_std = np.sqrt((1/grp1_siz) * np.sum((grp1 - grp1_avg)**2))
grp2_siz = float(np.size(grp2))
grp2_avg = np.sum(grp2)/grp2_siz
grp2_std = np.sqrt((1/grp2_siz) * np.sum((grp2 - grp2_avg)**2))
dof = grp1_siz - 2
# Note that the size of the two samples must be the same
pearson_r = np.sum( 1/grp1_siz*(grp1 - grp1_avg)*(grp2 - grp2_avg) ) / \
(grp1_std * grp2_std)
t_conv = pearson_r/np.sqrt( (1 - pearson_r**2)/(grp1_siz - 2) )
# convert to student's t value
p = 2*stats.t(dof).sf(np.abs(t_conv)) # survival function of the t-dist
print("pearson_r = %g p = %g" % (pearson_r, p))
Explanation: Again we can do this by hand, noting that $\rho$ is different from $p$:
$\rho = \frac{ \sum^{n}_{1} (1/n) (X_1 - \bar{X}_1) (X_2 - \bar{X}_2)}{s_1 s_2}$
The p value can be determined by converting $\rho$ to a student's t and then determining the area under the distribution function:
$t_{conv} = \frac{\rho}{\sqrt{( 1-\rho^2) / (n-2)}} $
End of explanation
r,p = pearsonr(grp1, grp2)
print("pearson_r = %g p = %g" % (r, p))
Explanation: We have a very high Pearson's correlation coefficient (highly correlated) with a low p-value (we can reject the null hypothesis that the correlation is due purely to chance). However, we don't have many samples, so it's not a robust conclusion.
Or use built-in functions:
End of explanation
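# Added comparison (not in the original notebook): Spearman's rank correlation
# is a nonparametric alternative that only assumes a monotonic relationship.
rho_s, p_s = stats.spearmanr(grp1, grp2)
print("spearman_rho = %g p = %g" % (rho_s, p_s))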
num_samples = 10000
span = 10 # How wide to plot and bin
rand_gen = normal(0, 1, num_samples) # Generate num_sample numbers
bins = np.linspace(-span/2, span/2, num=100)
histogram = np.histogram(rand_gen, bins); # Use histogram to get the
# distribution
X = histogram[1][:-1]
Y = histogram[0]
plt.scatter(X,Y, label="Sample Frequency")
plt.legend(loc='best')
plt.title('Scatter Plot')
plt.show()
Explanation: Distributions
Distribution functions can be thought of as the probability of measuring a particular sample value. To get a better picture of how frequently a type of randomly distributed variable should be measured in theory, we can use the analytical distribution function.
For example, perhaps the most well known distribution is the Gaussian, Normal, or Bell-Curve distribution. This is determined from a set of gaussian distributed random numbers. We can generate a lot of these numbers and plot the frequency of each number within a set of 'bins':
End of explanation
Y_pdf = stats.norm.pdf(X) # PDF function applied to the
# X values
conv_factor = len(X)/(float(span) * float(num_samples))
# Use a normalization factor to
# demonstrate the two are
# overlapped
plt.scatter(X,Y*conv_factor, label="Sample Frequency")
plt.plot(X,Y_pdf, color='red', label="Prob. Dist. Func.")
plt.legend(loc='best')
plt.show()
Explanation: The probability distribution function (PDF) is a function that represents the probability of obtaining a particular value for a population that follows that particular distribution.
Using a conversion factor, it's clear that the two overlap:
End of explanation
X_dof = np.size(X) - 1
Y_pdf = stats.norm.pdf(X)
Y_cdf = stats.norm.cdf(X)
plt.plot(X,Y_pdf, color='red', label="Prob. Dist. Func.")
plt.plot(X,Y_cdf, color='blue', label="Cumul. Dist. Func.")
plt.legend(loc='best')
plt.show()
Explanation: A common use of distributions is to determine the chance of measuring a value of at least some amount. Instead of looking at the probability of obtaining exactly a value, we can ask: what is the probability of obtaining something at least as large?
All we need is the integration of the PDF, known as the cumulative distribution function (CDF), as we're really looking for the area under the PDF up to a certain point.
Continuing our example:
End of explanation
stats.norm.cdf(2)
Explanation: For those that are wondering, the CDF is actually less than the PDF at a certain point because both are scaled, but in different ways - the CDF is scaled such that its maximum value is one, while the PDF is scaled such that the area beneath it is one. For more information, look into the integral of the PDF.
We use the CDF to determine the percent chance of obtaining a value up to some amount, i.e. what is the percent chance of getting a value of at most 2?
End of explanation
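# Added example: the chance of measuring *at least* 2 is given by the survival
# function (1 - CDF).
print(stats.norm.sf(2))
print(1 - stats.norm.cdf(2))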
dof = np.size(X) - 1
Y_t_pdf = stats.t(dof).pdf(X)
Y_t_cdf = stats.t(dof).cdf(X)
plt.plot(X, Y_t_pdf, color='red', label="Prob. Dist. Func.")
plt.plot(X, Y_t_cdf, color='blue', label="Cumul. Dist. Func.")
plt.legend(loc='best')
plt.show()
Explanation: PDF of Student's t-distribution
This doesn't seem that interesting until we consider the distributions used in the ANOVA analysis above. Here we use the Student's t-distribution, which is extremely similar to the normal distribution except that it incorporates the fact that we often use a subset of the population (and thus degrees of freedom = n-1):
End of explanation
t = -1.47254 # Use a specified t-value
# Use CDF to determine probabilities
left_prob = stats.t(dof).cdf(-np.abs(t))
right_prob = stats.t(dof).sf(np.abs(t)) # The survival function is 1-CDF
between_prob = 1-(left_prob+right_prob)
# Plot t-distribution, highlighting the different plot areas
left_ind = X <= -np.abs(t)
right_ind = X >= np.abs(t)
between_ind = (X > -np.abs(t)) & ( X < np.abs(t))
plt.fill_between(X[left_ind],stats.t(dof).pdf(X[left_ind]), facecolor='red')
plt.fill_between(X[right_ind],stats.t(dof).pdf(X[right_ind]), facecolor='red')
plt.fill_between(X[between_ind],stats.t(dof).pdf(X[between_ind]),facecolor='deepskyblue')
# Label the plot areas
plt.text(x=1.7*t,y=0.04, s='%0.3g' % left_prob)
plt.text(x=-0.4,y=0.1,s='%0.3g' % between_prob)
plt.text(x=1.1*-t,y=0.04, s='%0.3g' % right_prob)
plt.show()
Explanation: Let's say our t-value is: -1.47254.
A note about two-tailed tests: We're interested in whether we can reject the null hypothesis, or if the populations are the same. The t-value can be positive or negative (depending on the two means), but we're only interested in whether it is different (not only a + or only a - difference, but both). To determine the p-value, we sample from both sides of the distribution, and this is known as a two-tailed test.
For the p-value of the t-test (two-tailed), we're concerned with getting the value under both sides of the distribution. Here we're ignoring the sign of the t-value and treating it as a negative value for a zero-mean t-distribution probability distribution function.
For the two-tailed p-value on a t-test:
End of explanation
p = 2*stats.t(dof).sf(np.abs(t))
print("%g" % p)
Explanation: The two-tailed test (i.e. including both sides) is simply 2x the value of the area of one of the sides:
End of explanation
conf_int = 0.95 # Use a specified confidence interval
# (i.e. % of total CDF area)
# We use the t-distribution in lieu of the normal distribution because
# the samples are a subset of the population. The inverse of the
# CDF is known as the percent point function (ppf)
t_value = stats.t.ppf(1-(1-conf_int)/2,dof)
# Use CDF to check that probabilities are correct
left_prob = stats.t.cdf(-t_value, dof)
right_prob = stats.t.sf(t_value, dof) # The survival function is 1-CDF
between_prob = 1-(left_prob+right_prob)
# Plot T distribution, highlighting the different plot areas
left_ind = X <= -t_value
right_ind = X >= t_value
between_ind = (X > -t_value) & ( X < t_value)
plt.fill_between(X[left_ind],stats.t.pdf(X[left_ind],dof), facecolor='deepskyblue')
plt.fill_between(X[right_ind],stats.t.pdf(X[right_ind],dof), facecolor='deepskyblue')
plt.fill_between(X[between_ind],stats.t.pdf(X[between_ind],dof),facecolor='red')
# Label the plot areas
plt.text(x=2.2*t,y=0.04, s='%0.3g' % left_prob)
plt.text(x=-0.4,y=0.1,s='%0.3g' % between_prob)
plt.text(x=1.6*-t,y=0.04, s='%0.3g' % right_prob)
plt.show()
print("The t_value that can be used in the 95 percent confidence interval is: %0.6g" % t_value)
Explanation: Confidence Intervals from a Distribution Perspective
This is where the critical probability (t-value) for the confidence interval comes from. Working backwards, we're interested in the value of the probability distribution function that, when sampled, encompasses $\frac{1}{2}$ of the confidence interval (i.e. 95%) area under the function.
To do so, 1/2 of the confidence interval on each side of the distribution is removed, and the corresponding t-value is determined.
End of explanation |
12,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> 2. Creating a sampled dataset </h1>
This notebook illustrates
Step2: <h2> Create ML dataset by sampling using BigQuery </h2>
<p>
Let's sample the BigQuery data to create smaller datasets.
</p>
Step3: There are only a limited number of years and months in the dataset. Let's see what the hashmonths are.
Step4: Here's a way to get a well distributed portion of the data in such a way that the test and train sets do not overlap
Step5: <h2> Preprocess data using Pandas </h2>
<p>
Let's add extra rows to simulate the lack of ultrasound. In the process, we'll also change the plurality column to be a string.
Step6: Also notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
Step7: <h2> Write out </h2>
<p>
In the final versions, we want to read from files, not Pandas dataframes. So, write the Pandas dataframes out as CSV files.
Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers. | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: <h1> 2. Creating a sampled dataset </h1>
This notebook illustrates:
<ol>
<li> Sampling a BigQuery dataset to create datasets for ML
<li> Preprocessing with Pandas
</ol>
End of explanation
# Create SQL query using natality data after the year 2000
from google.cloud import bigquery
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
Explanation: <h2> Create ML dataset by sampling using BigQuery </h2>
<p>
Let's sample the BigQuery data to create smaller datasets.
</p>
End of explanation
# Call BigQuery but GROUP BY the hashmonth and see number of records for each group to enable us to get the correct train and evaluation percentages
df = bigquery.Client().query("SELECT hashmonth, COUNT(weight_pounds) AS num_babies FROM (" + query + ") GROUP BY hashmonth").to_dataframe()
print("There are {} unique hashmonths.".format(len(df)))
df.head()
Explanation: There are only a limited number of years and months in the dataset. Let's see what the hashmonths are.
End of explanation
# Added the RAND() so that we can now subsample from each of the hashmonths to get approximately the record counts we want
trainQuery = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) < 3 AND RAND() < 0.0005"
evalQuery = "SELECT * FROM (" + query + ") WHERE ABS(MOD(hashmonth, 4)) = 3 AND RAND() < 0.0005"
traindf = bigquery.Client().query(trainQuery).to_dataframe()
evaldf = bigquery.Client().query(evalQuery).to_dataframe()
print("There are {} examples in the train dataset and {} in the eval dataset".format(len(traindf), len(evaldf)))
Explanation: Here's a way to get a well distributed portion of the data in such a way that the test and train sets do not overlap:
End of explanation
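# Added sanity check (illustrative only): because the split is keyed on hashmonth,
# the train and eval dataframes should share no hashmonth values.
print(len(set(traindf['hashmonth']) & set(evaldf['hashmonth'])))  # expect 0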
traindf.head()
Explanation: <h2> Preprocess data using Pandas </h2>
<p>
Let's add extra rows to simulate the lack of ultrasound. In the process, we'll also change the plurality column to be a string.
End of explanation
# Let's look at a small sample of the training data
traindf.describe()
# It is always crucial to clean raw data before using in ML, so we have a preprocessing step
import pandas as pd
def preprocess(df):
# clean up data we don't want to train on
# in other words, users will have to tell us the mother's age
# otherwise, our ML service won't work.
# these were chosen because they are such good predictors
# and because these are easy enough to collect
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']))
df['plurality'].replace(twins_etc, inplace=True)
# now create extra rows to simulate lack of ultrasound
nous = df.copy(deep=True)
nous.loc[nous['plurality'] != 'Single(1)', 'plurality'] = 'Multiple(2+)'
nous['is_male'] = 'Unknown'
return pd.concat([df, nous])
traindf.head()# Let's see a small sample of the training data now after our preprocessing
traindf = preprocess(traindf)
evaldf = preprocess(evaldf)
traindf.head()
traindf.tail()
# Describe only does numeric columns, so you won't see plurality
traindf.describe()
Explanation: Also notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
End of explanation
traindf.to_csv('train.csv', index=False, header=False)
evaldf.to_csv('eval.csv', index=False, header=False)
%%bash
wc -l *.csv
head *.csv
tail *.csv
Explanation: <h2> Write out </h2>
<p>
In the final versions, we want to read from files, not Pandas dataframes. So, write the Pandas dataframes out as CSV files.
Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
End of explanation |
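# Optional check (added; the CSVs were written without headers above):
# read a few rows back to confirm the format.
import pandas as pd
print(pd.read_csv('train.csv', header=None).head())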
12,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction tutorial
In this tutorial we will perform handwriting recognition by training a
multilayer perceptron (MLP)
on the MNIST handwritten digit database.
The Task
MNIST is a dataset which consists of 70,000 handwritten digits. Each
digit is a grayscale image of 28 by 28 pixels. Our task is to classify
each of the images into one of the 10 categories representing the
numbers from 0 to 9.
The Model
We will train a simple MLP with a single hidden layer that uses the
rectifier
activation function. Our output layer will consist of a
softmax function with
10 units; one for each class. Mathematically speaking, our model is
parametrized by $\mathbf{\theta}$, defined as the weight matrices
$\mathbf{W}^{(1)}$ and $\mathbf{W}^{(2)}$, and bias vectors
$\mathbf{b}^{(1)}$ and $\mathbf{b}^{(2)}$. The rectifier
activation function is defined as
\begin{equation}
\mathrm{ReLU}(\mathbf{x})_i = \max(0, \mathbf{x}_i)
\end{equation}
and our softmax output function is defined as
\begin{equation}
\mathrm{softmax}(\mathbf{x})_i = \frac{e^{\mathbf{x}_i}}{\sum_{j=1}^n e^{\mathbf{x}_j}}
\end{equation}
Hence, our complete model is
\begin{equation}
f(\mathbf{x}; \mathbf{\theta}) = \mathrm{softmax}(\mathbf{W}^{(2)}\mathrm{ReLU}(\mathbf{W}^{(1)}\mathbf{x} + \mathbf{b}^{(1)}) + \mathbf{b}^{(2)})
\end{equation}
Since the output of a softmax sums to 1, we can interpret it as a
categorical probability distribution
Step1: Note that we picked the name 'features' for our input. This is
important, because the name needs to match the name of the data source
we want to train on. MNIST defines two data sources
Step2: Loss function and regularization
Now that we have built our model, let's define the cost to minimize. For
this, we will need the Theano variable representing the target labels.
Step3: To reduce the risk of overfitting, we can penalize excessive values of
the parameters by adding a \(L2\)-regularization term (also known as
weight decay) to the objective function
Step4: note
Note that we explicitly gave our variable a name. We do this so that
when we monitor the performance of our model, the progress monitor
will know what name to report in the logs.
Here we set \(\lambda_1 = \lambda_2 = 0.005\). And that's it! We now
have the final objective function we want to optimize.
But creating a simple MLP this way is rather cumbersome. In practice, we
would have used the .MLP class instead.
Step5: Initializing the parameters
When we constructed the .Linear bricks to build our model, they
automatically allocated Theano shared variables to store their
parameters in. All of these parameters were initially set to NaN.
Before we start training our network, we will want to initialize these
parameters by sampling them from a particular probability distribution.
Bricks can do this for you.
Step6: We have now initialized our weight matrices with entries drawn from a
normal distribution with a standard deviation of 0.01.
Step7: Training your model
Besides helping you build models, Blocks also provides the main other
features needed to train a model. It has a set of training algorithms
(like SGD), an interface to datasets, and a training loop that allows
you to monitor and control the training process.
We want to train our model on the training set of MNIST. We load the
data using the Fuel framework.
Have a look at this
tutorial
to get started.
After having configured Fuel, you can load the dataset.
Step8: Datasets only provide an interface to the data. For actual training, we
will need to iterate over the data in minibatches. This is done by
initiating a data stream which makes use of a particular iteration
scheme. We will use an iteration scheme that iterates over our MNIST
examples sequentially in batches of size 256.
Step9: The training algorithm we will use is straightforward SGD with a fixed
learning rate.
Step10: During training we will want to monitor the performance of our model on
a separate set of examples. Let's create a new data stream for that.
Step11: In order to monitor our performance on this data stream during training,
we need to use one of Blocks' extensions, namely the
.DataStreamMonitoring extension.
Step12: We can now use the .MainLoop to combine all the different bits and
pieces. We use two more extensions to make our training stop after a
single epoch and to make sure that our progress is printed. | Python Code:
from theano import tensor
x = tensor.matrix('features')
Explanation: Introduction tutorial
In this tutorial we will perform handwriting recognition by training a
multilayer perceptron (MLP)
on the MNIST handwritten digit database.
The Task
MNIST is a dataset which consists of 70,000 handwritten digits. Each
digit is a grayscale image of 28 by 28 pixels. Our task is to classify
each of the images into one of the 10 categories representing the
numbers from 0 to 9.
The Model
We will train a simple MLP with a single hidden layer that uses the
rectifier
activation function. Our output layer will consist of a
softmax function with
10 units; one for each class. Mathematically speaking, our model is
parametrized by $\mathbf{\theta}$, defined as the weight matrices
$\mathbf{W}^{(1)}$ and $\mathbf{W}^{(2)}$, and bias vectors
$\mathbf{b}^{(1)}$ and $\mathbf{b}^{(2)}$. The rectifier
activation function is defined as
\begin{equation}
\mathrm{ReLU}(\mathbf{x})_i = \max(0, \mathbf{x}_i)
\end{equation}
and our softmax output function is defined as
\begin{equation}
\mathrm{softmax}(\mathbf{x})_i = \frac{e^{\mathbf{x}_i}}{\sum_{j=1}^n e^{\mathbf{x}_j}}
\end{equation}
Hence, our complete model is
\begin{equation}
f(\mathbf{x}; \mathbf{\theta}) = \mathrm{softmax}(\mathbf{W}^{(2)}\mathrm{ReLU}(\mathbf{W}^{(1)}\mathbf{x} + \mathbf{b}^{(1)}) + \mathbf{b}^{(2)})
\end{equation}
Since the output of a softmax sums to 1, we can interpret it as a
categorical probability distribution: $f(\mathbf{x})_c = \hat p(y = c \mid\mathbf{x})$, where $\mathbf{x}$ is the 784-dimensional (28 Ã 28)
input and $c \in \{0, ..., 9\}$ one of the 10 classes. We can train
the parameters of our model by minimizing the negative log-likelihood
i.e. the cross-entropy between our model's output and the target
distribution. This means we will minimize the sum of
\begin{equation}
l(\mathbf{f}(\mathbf{x}), y) = -\sum_{c=0}^9 \mathbf{1}_{(y=c)} \log f(\mathbf{x})_c = -\log f(\mathbf{x})_y
\end{equation}
(where $\mathbf{1}$ is the indicator function) over all examples. We
use stochastic gradient
descent
(SGD) on mini-batches for this.
Building the model
Blocks uses "bricks" to build models. Bricks are parametrized Theano
operations. You can read more about it in the
building with bricks tutorial.
Constructing the model with Blocks is very simple. We start by defining
the input variable using Theano.
End of explanation
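# Tiny standalone numpy illustration of the softmax / negative log-likelihood
# defined above (added sketch; it is separate from the Blocks model built below).
import numpy
logits = numpy.array([2.0, 1.0, 0.1])
probs = numpy.exp(logits) / numpy.exp(logits).sum()
y_true = 0
print(probs)
print("negative log-likelihood for class %d: %g" % (y_true, -numpy.log(probs[y_true])))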
from blocks.bricks import Linear, Rectifier, Softmax
input_to_hidden = Linear(name='input_to_hidden', input_dim=784,output_dim=100)
h = Rectifier().apply(input_to_hidden.apply(x))
hidden_to_output = Linear(name='hidden_to_output', input_dim=100, output_dim=10)
y_hat = Softmax().apply(hidden_to_output.apply(h))
Explanation: Note that we picked the name 'features' for our input. This is
important, because the name needs to match the name of the data source
we want to train on. MNIST defines two data sources: 'features' and
'targets'.
For the sake of this tutorial, we will go through building an MLP the
long way. For a much quicker way, skip right to the end of the next
section. We begin with applying the linear transformations and
activations.
We start by initializing bricks with certain parameters e.g.
input_dim. After initialization we can apply our bricks on Theano
variables to build the model we want. We'll talk more about bricks in
the next tutorial, bricks_overview.
End of explanation
y = tensor.lmatrix('targets')
from blocks.bricks.cost import CategoricalCrossEntropy
cost = CategoricalCrossEntropy().apply(y.flatten(), y_hat)
Explanation: Loss function and regularization
Now that we have built our model, let's define the cost to minimize. For
this, we will need the Theano variable representing the target labels.
End of explanation
from blocks.bricks import WEIGHT
from blocks.graph import ComputationGraph
from blocks.filter import VariableFilter
cg = ComputationGraph(cost)
W1, W2 = VariableFilter(roles=[WEIGHT])(cg.variables)
cost = cost + 0.005 * (W1 ** 2).sum() + 0.005 * (W2 ** 2).sum()
cost.name = 'cost_with_regularization'
Explanation: To reduce the risk of overfitting, we can penalize excessive values of
the parameters by adding a \(L2\)-regularization term (also known as
weight decay) to the objective function:
\[l(\mathbf{f}(\mathbf{x}), y) = -\log f(\mathbf{x})_y + \lambda_1\|\mathbf{W}^{(1)}\|^2 + \lambda_2\|\mathbf{W}^{(2)}\|^2\]
To get the weights from our model, we will use Blocks' annotation
features (read more about them in the cg tutorial).
End of explanation
from blocks.bricks import MLP
mlp = MLP(
activations=[Rectifier(), Softmax()],
dims=[784, 100, 10]
).apply(x)
Explanation: note
Note that we explicitly gave our variable a name. We do this so that
when we monitor the performance of our model, the progress monitor
will know what name to report in the logs.
Here we set \(\lambda_1 = \lambda_2 = 0.005\). And that's it! We now
have the final objective function we want to optimize.
But creating a simple MLP this way is rather cumbersome. In practice, we
would have used the .MLP class instead.
End of explanation
from blocks.initialization import IsotropicGaussian,Constant
input_to_hidden.weights_init = hidden_to_output.weights_init = IsotropicGaussian(0.01)
input_to_hidden.biases_init = hidden_to_output.biases_init = Constant(0)
input_to_hidden.initialize()
hidden_to_output.initialize()
Explanation: Initializing the parameters
When we constructed the .Linear bricks to build our model, they
automatically allocated Theano shared variables to store their
parameters in. All of these parameters were initially set to NaN.
Before we start training our network, we will want to initialize these
parameters by sampling them from a particular probability distribution.
Bricks can do this for you.
End of explanation
W1.get_value()
# array([[ 0.01624345, -0.00611756, -0.00528172, ..., 0.00043597, ...
Explanation: We have now initialized our weight matrices with entries drawn from a
normal distribution with a standard deviation of 0.01.
End of explanation
from fuel.datasets import MNIST
mnist = MNIST("train")
Explanation: Training your model
Besides helping you build models, Blocks also provides the main other
features needed to train a model. It has a set of training algorithms
(like SGD), an interface to datasets, and a training loop that allows
you to monitor and control the training process.
We want to train our model on the training set of MNIST. We load the
data using the Fuel framework.
Have a look at this
tutorial
to get started.
After having configured Fuel, you can load the dataset.
End of explanation
from fuel.streams import DataStream
from fuel.schemes import SequentialScheme
from fuel.transformers import Flatten
data_stream = Flatten(DataStream.default_stream( mnist,
iteration_scheme=SequentialScheme(mnist.num_examples,batch_size=256)
))
Explanation: Datasets only provide an interface to the data. For actual training, we
will need to iterate over the data in minibatches. This is done by
initiating a data stream which makes use of a particular iteration
scheme. We will use an iteration scheme that iterates over our MNIST
examples sequentially in batches of size 256.
End of explanation
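# Optional peek at a single minibatch (added check; get_epoch_iterator is the
# standard Fuel way to pull batches by hand).
batch = next(data_stream.get_epoch_iterator())
for source, data in zip(data_stream.sources, batch):
    print("%s %s" % (source, data.shape))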
from blocks.algorithms import GradientDescent, Scale
algorithm = GradientDescent(
cost=cost,
params=cg.parameters,
step_rule=Scale(learning_rate=0.1)
)
Explanation: The training algorithm we will use is straightforward SGD with a fixed
learning rate.
End of explanation
mnist_test = MNIST("test")
data_stream_test = Flatten(DataStream.default_stream(
mnist_test,
iteration_scheme=SequentialScheme(
mnist_test.num_examples,
batch_size=1024)
)
)
Explanation: During training we will want to monitor the performance of our model on
a separate set of examples. Let's create a new data stream for that.
End of explanation
from blocks.extensions.monitoring import DataStreamMonitoring
monitor = DataStreamMonitoring(
variables=[cost],
data_stream=data_stream_test,
prefix="test"
)
Explanation: In order to monitor our performance on this data stream during training,
we need to use one of Blocks' extensions, namely the
.DataStreamMonitoring extension.
End of explanation
from blocks.main_loop import MainLoop
from blocks.extensions import FinishAfter, Printing
main_loop = MainLoop(
data_stream=data_stream,
algorithm=algorithm,
extensions=[monitor, FinishAfter(after_n_epochs=1), Printing()])
main_loop.run()
Explanation: We can now use the .MainLoop to combine all the different bits and
pieces. We use two more extensions to make our training stop after a
single epoch and to make sure that our progress is printed.
End of explanation |
12,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manual Commands Workbook
This notebook is a workbook for testing hardware with manual FPE commands, and general empirical testing. It turns out that it's also a handy command reference.
Start the Observatory Simulator and Load the FPE FPGA
Remember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.
When you are running this notebook and it has not been power cycled, you should set preload=False.
Run the following cell to get the FPE loaded
Step1: Useful Commands
Step2: Reading a housekeeping value has this form
Step3: Setting an operating parameter has this form
Step4: Setting all the operating parameters to the default values
Step7: Workspace | Python Code:
from tessfpe.dhu.fpe import FPE
from tessfpe.dhu.unit_tests import check_house_keeping_voltages
fpe1 = FPE(1, debug=False, preload=True, FPE_Wrapper_version='6.1.1')
print fpe1.version
fpe1.cmd_start_frames()
fpe1.cmd_stop_frames()
if check_house_keeping_voltages(fpe1):
print "Wrapper load complete. Interface voltages OK."
Explanation: Manual Commands Workbook
This notebook is a workbook for testing hardware with manual FPE commands, and general empirical testing. It turns out that it's also a handy command reference.
Start the Observatory Simulator and Load the FPE FPGA
Remember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.
When you are running this notebook and it has not been power cycled, you should set preload=False.
Run the following cell to get the FPE loaded:
End of explanation
from tessfpe.data.operating_parameters import operating_parameters
operating_parameters["heater_1_current"]
Explanation: Useful Commands:
ping()
fpe1.cmd_start_frames() # Starts frame generation.
fpe1.cmd_stop_frames() # Stops frame generation.
fpe1.cmd_camrst # Don't know how to work this. As-is, it fails.
fpe1.cmd_cam_status() # Returns the camera status register values.
fpe1.cmd_version() # Returns ObsSim version info.
fpe1.house_keeping # Returns a set of HK data in alphabetical order, in engineering units, without frames running. This includes all the FPGA digital housekeeping values.
fpe1.house_keeping["analogue"] #Returns only the analog values of the housekeeping set.
{fpe1.cmd_cam_hsk() # Returns raw, un-parsed housekeeping data, two samples per word (decimal), mostly useless here.}
check_house_keeping_voltages(fpe1, tolerance=0.05) # Returns True if standard set of supply voltages are in tolerance.
If you plan on setting operating parameters (DACs), run this cell:
End of explanation
fpe1.house_keeping["analogue"]["heater_1_current"]
fpe1.house_keeping["analogue"]["ccd1_input_diode_high"]
Explanation: Reading a housekeeping value has this form:
fpe1.house_keeping["analogue"]["parameter_name"]
Here are a couple of sample reads of housekeeping values:
End of explanation
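As an optional aside (not part of the standard procedure), a single housekeeping channel can be polled several times to get a rough feel for its noise; the channel name below is just one of those used elsewhere in this notebook.
import numpy as np
# Poll one analogue housekeeping channel repeatedly and summarize the spread.
readings = [fpe1.house_keeping["analogue"]["heater_1_current"] for _ in range(10)]
print("mean = {0:.4f}, std = {1:.4f}".format(np.mean(readings), np.std(readings)))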
fpe1.ops.heater_1_current = fpe1.ops.heater_1_current.low
fpe1.ops.heater_2_current = fpe1.ops.heater_2_current.low
fpe1.ops.heater_3_current = fpe1.ops.heater_3_current.low
fpe1.ops.send()
Explanation: Setting an operating parameter has this form:
fpe1.ops.parameter_name = value
fpe1.ops.send()
Setting the 3 trim heaters to their minimum values looks like this:
End of explanation
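A hypothetical convenience helper (not part of tessfpe) that follows the same pattern: set one operating parameter by name, send it, and read back a housekeeping channel to confirm the change. The example call is commented out because the value shown is only illustrative.
def set_and_check(fpe, param_name, value, hk_name):
    # Same pattern as above: assign the ops attribute, send, then read back HK.
    setattr(fpe.ops, param_name, value)
    fpe.ops.send()
    return fpe.house_keeping["analogue"][hk_name]
# Example (illustrative value):
# set_and_check(fpe1, "heater_1_current", 0.1, "heater_1_current")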
def set_fpe_defaults(fpe):
"Set the FPE to the default operating parameters and return a list of the default values"
defaults = {}
for k in range(len(fpe.ops.address)):
if fpe.ops.address[k] is None:
continue
fpe.ops.address[k].value = fpe.ops.address[k].default
defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default
return defaults
set_fpe_defaults(fpe1)
Explanation: Setting all the operating parameters to the default values:
End of explanation
operating_parameters["ccd1_output_drain_a_offset"]
#operating_parameters["ccd1_reset_drain"]
fpe1.ops.ccd1_reset_drain = 15
fpe1.ops.ccd1_output_drain_a_offset = 10
fpe1.ops.send()
fpe1.house_keeping["analogue"]["ccd1_output_drain_a"]
#operating_parameters["ccd1_reset_high"]
operating_parameters['ccd1_reset_low_offset']
fpe1.ops.ccd1_reset_high = -10.3
fpe1.ops.ccd1_reset_low_offset = -9.9
fpe1.ops.send()
fpe1.house_keeping["analogue"]["ccd1_reset_low"]
fpe1.cmd_start_frames() # Starts frame generation.
fpe1.cmd_stop_frames() # Stops frame generation.
from tessfpe.data.housekeeping_channels import housekeeping_channels
from tessfpe.data.housekeeping_channels import housekeeping_channel_memory_map
print fpe1.house_keeping
print fpe1.house_keeping["analogue"]
from numpy import var
samples=100
from tessfpe.data.housekeeping_channels import housekeeping_channels
# We make sample_data a dictionary and each value will be a set of HK data, with key = sample_name.
sample_data = {}
# For later:
signal_names = []
signal_values = []
signal_data = {}
variance_values = {}
#my_dict["new key"] = "New value"
for i in range(samples):
# Get a new set of HK values
house_keeping_values = fpe1.house_keeping["analogue"]
data_values = house_keeping_values.values()
# Add the new HK values to the sample_data dictionary:
sample_number = "sample_" + str(i)
sample_data[sample_number] = data_values
# Get the signal names for use later
signal_names = house_keeping_values.keys()
# Assign the set of all HK values of the same signal (e.g. substrate_1)
# to the dictionary 'signal_data'
for k in range(len(signal_names)):
# Build the list 'signal_values' for this signal:
for i in range(samples):
sample_number = "sample_" + str(i)
signal_values.append(sample_data[sample_number][k])
# Add signal_values to the signal_data dictionary:
signal_data[signal_names[k]] = signal_values
signal_values = []
# Now get the variance of each of the 'signal_values' in the
# signal_data dictionary and put the result in the 'variance_values'
# dictionary.
for name in signal_data:
variance_values[name] = var(signal_data[name])
# print name, str(variance_values[name])
print '{0} {1:<5}'.format(name, variance_values[name])
data = []
for i in range(10):
set_values = {}
for k in range(len(fpe1.ops.address)):
if fpe1.ops.address[k] is None:
continue
low = fpe1.ops.address[k].low
high = fpe1.ops.address[k].high
name = fpe1.ops.address[k].name
set_values[name] = fpe1.ops.address[k].value = low + i / 100. * (high - low)
fpe1.ops.send()
data.append({"set values": set_values,"measured values": fpe1.house_keeping["analogue"]})
print data
print sample_data
v = {}
for name in operating_parameters.keys():
v[name] = operating_parameters[name]
print v[name]["unit"]
print name
Explanation: Workspace:
End of explanation |
12,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Named Entity Recognition using Transformers
Author
Step1: We will be using the transformer implementation from this fantastic
example.
Let's start by defining a TransformerBlock layer
Step2: Next, let's define a TokenAndPositionEmbedding layer
Step3: Build the NER model class as a keras.Model subclass
Step4: Load the CoNLL 2003 dataset from the datasets library and process it
Step5: We will export this data to a tab-separated file format which will be easy to read as a
tf.data.Dataset object.
Step6: Make the NER label lookup table
NER labels are usually provided in IOB, IOB2 or IOBES formats. Checkout this link for
more information
Step7: Get a list of all tokens in the training dataset. This will be used to create the
vocabulary.
Step8: Create 2 new Dataset objects from the training and validation data
Step9: Print out one line to make sure it looks good. The first record in the line is the number of tokens.
After that we will have all the tokens followed by all the ner tags.
Step10: We will be using the following map function to transform the data in the dataset
Step11: We will be using a custom loss function that will ignore the loss from padded tokens.
Step12: Compile and fit the model
Step13: Metrics calculation
Here is a function to calculate the metrics. The function calculates F1 score for the
overall NER dataset as well as individual scores for each NER tag. | Python Code:
!pip3 install datasets
!wget https://raw.githubusercontent.com/sighsmile/conlleval/master/conlleval.py
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from datasets import load_dataset
from collections import Counter
from conlleval import evaluate
Explanation: Named Entity Recognition using Transformers
Author: Varun Singh<br>
Date created: Jun 23, 2021<br>
Last modified: Jun 24, 2021<br>
Description: NER using the Transformers and data from CoNLL 2003 shared task.
Introduction
Named Entity Recognition (NER) is the process of identifying named entities in text.
Example of named entities are: "Person", "Location", "Organization", "Dates" etc. NER is
essentially a token classification task where every token is classified into one or more
predetermined categories.
In this exercise, we will train a simple Transformer based model to perform NER. We will
be using the data from CoNLL 2003 shared task. For more information about the dataset,
please visit the dataset website.
However, since obtaining this data requires an additional step of getting a free license, we will be using
HuggingFace's datasets library which contains a processed version of this dataset.
Install the open source datasets library from HuggingFace
End of explanation
class TransformerBlock(layers.Layer):
def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
super(TransformerBlock, self).__init__()
self.att = keras.layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.ffn = keras.Sequential(
[
keras.layers.Dense(ff_dim, activation="relu"),
keras.layers.Dense(embed_dim),
]
)
self.layernorm1 = keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = keras.layers.Dropout(rate)
self.dropout2 = keras.layers.Dropout(rate)
def call(self, inputs, training=False):
attn_output = self.att(inputs, inputs)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(inputs + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
return self.layernorm2(out1 + ffn_output)
Explanation: We will be using the transformer implementation from this fantastic
example.
Let's start by defining a TransformerBlock layer:
End of explanation
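As a quick optional sanity check (with arbitrary toy dimensions), the block maps a (batch, sequence, embed_dim) tensor to a tensor of the same shape:
demo_block = TransformerBlock(embed_dim=32, num_heads=2, ff_dim=32)
demo_x = tf.random.uniform((2, 10, 32))
print(demo_block(demo_x, training=False).shape)  # expected: (2, 10, 32)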
class TokenAndPositionEmbedding(layers.Layer):
def __init__(self, maxlen, vocab_size, embed_dim):
super(TokenAndPositionEmbedding, self).__init__()
self.token_emb = keras.layers.Embedding(
input_dim=vocab_size, output_dim=embed_dim
)
self.pos_emb = keras.layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
def call(self, inputs):
maxlen = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=maxlen, delta=1)
position_embeddings = self.pos_emb(positions)
token_embeddings = self.token_emb(inputs)
return token_embeddings + position_embeddings
Explanation: Next, let's define a TokenAndPositionEmbedding layer:
End of explanation
class NERModel(keras.Model):
def __init__(
self, num_tags, vocab_size, maxlen=128, embed_dim=32, num_heads=2, ff_dim=32
):
super(NERModel, self).__init__()
self.embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
self.transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
self.dropout1 = layers.Dropout(0.1)
self.ff = layers.Dense(ff_dim, activation="relu")
self.dropout2 = layers.Dropout(0.1)
self.ff_final = layers.Dense(num_tags, activation="softmax")
def call(self, inputs, training=False):
x = self.embedding_layer(inputs)
x = self.transformer_block(x)
x = self.dropout1(x, training=training)
x = self.ff(x)
x = self.dropout2(x, training=training)
x = self.ff_final(x)
return x
Explanation: Build the NER model class as a keras.Model subclass
End of explanation
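A quick optional check of the model's output shape on dummy token IDs; the numbers below are placeholders, since the real num_tags and vocab_size are computed further down:
demo_model = NERModel(num_tags=10, vocab_size=20000, embed_dim=32, num_heads=2, ff_dim=32)
demo_out = demo_model(tf.ones((1, 16), dtype=tf.int64), training=False)
print(demo_out.shape)  # expected: (1, 16, 10), one tag distribution per token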
conll_data = load_dataset("conll2003")
Explanation: Load the CoNLL 2003 dataset from the datasets library and process it
End of explanation
def export_to_file(export_file_path, data):
with open(export_file_path, "w") as f:
for record in data:
ner_tags = record["ner_tags"]
tokens = record["tokens"]
if len(tokens) > 0:
f.write(
str(len(tokens))
+ "\t"
+ "\t".join(tokens)
+ "\t"
+ "\t".join(map(str, ner_tags))
+ "\n"
)
os.mkdir("data")
export_to_file("./data/conll_train.txt", conll_data["train"])
export_to_file("./data/conll_val.txt", conll_data["validation"])
Explanation: We will export this data to a tab-separated file format which will be easy to read as a
tf.data.Dataset object.
End of explanation
def make_tag_lookup_table():
iob_labels = ["B", "I"]
ner_labels = ["PER", "ORG", "LOC", "MISC"]
all_labels = [(label1, label2) for label2 in ner_labels for label1 in iob_labels]
all_labels = ["-".join([a, b]) for a, b in all_labels]
all_labels = ["[PAD]", "O"] + all_labels
return dict(zip(range(0, len(all_labels) + 1), all_labels))
mapping = make_tag_lookup_table()
print(mapping)
Explanation: Make the NER label lookup table
NER labels are usually provided in IOB, IOB2 or IOBES formats. Check out this link for
more information:
Wikipedia
Note that we start our label numbering from 1 since 0 will be reserved for padding. We
have a total of 10 labels: 9 from the NER dataset and one for padding.
End of explanation
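For intuition, here is a small made-up example (not taken from CoNLL) of a sentence tagged in IOB2 format:
example_tokens = ["John", "lives", "in", "New", "York", "."]
example_tags = ["B-PER", "O", "O", "B-LOC", "I-LOC", "O"]  # "New York" spans two tokens
print(list(zip(example_tokens, example_tags)))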
all_tokens = sum(conll_data["train"]["tokens"], [])
all_tokens_array = np.array(list(map(str.lower, all_tokens)))
counter = Counter(all_tokens_array)
print(len(counter))
num_tags = len(mapping)
vocab_size = 20000
# We only take the (vocab_size - 2) most common words from the training data since
# the `StringLookup` class uses 2 additional tokens - one denoting an unknown
# token and another one denoting a masking token
vocabulary = [token for token, count in counter.most_common(vocab_size - 2)]
# The StringLookup class will convert tokens to token IDs
lookup_layer = keras.layers.StringLookup(
vocabulary=vocabulary
)
Explanation: Get a list of all tokens in the training dataset. This will be used to create the
vocabulary.
End of explanation
train_data = tf.data.TextLineDataset("./data/conll_train.txt")
val_data = tf.data.TextLineDataset("./data/conll_val.txt")
Explanation: Create 2 new Dataset objects from the training and validation data
End of explanation
print(list(train_data.take(1).as_numpy_iterator()))
Explanation: Print out one line to make sure it looks good. The first record in the line is the number of tokens.
After that we will have all the tokens followed by all the ner tags.
End of explanation
def map_record_to_training_data(record):
record = tf.strings.split(record, sep="\t")
length = tf.strings.to_number(record[0], out_type=tf.int32)
tokens = record[1 : length + 1]
tags = record[length + 1 :]
tags = tf.strings.to_number(tags, out_type=tf.int64)
tags += 1
return tokens, tags
def lowercase_and_convert_to_ids(tokens):
tokens = tf.strings.lower(tokens)
return lookup_layer(tokens)
# We use `padded_batch` here because each record in the dataset has a
# different length.
batch_size = 32
train_dataset = (
train_data.map(map_record_to_training_data)
.map(lambda x, y: (lowercase_and_convert_to_ids(x), y))
.padded_batch(batch_size)
)
val_dataset = (
val_data.map(map_record_to_training_data)
.map(lambda x, y: (lowercase_and_convert_to_ids(x), y))
.padded_batch(batch_size)
)
ner_model = NERModel(num_tags, vocab_size, embed_dim=32, num_heads=4, ff_dim=64)
Explanation: We will be using the following map function to transform the data in the dataset:
End of explanation
class CustomNonPaddingTokenLoss(keras.losses.Loss):
def __init__(self, name="custom_ner_loss"):
super().__init__(name=name)
def call(self, y_true, y_pred):
loss_fn = keras.losses.SparseCategoricalCrossentropy(
            from_logits=False, reduction=keras.losses.Reduction.NONE
)
loss = loss_fn(y_true, y_pred)
mask = tf.cast((y_true > 0), dtype=tf.float32)
loss = loss * mask
return tf.reduce_sum(loss) / tf.reduce_sum(mask)
loss = CustomNonPaddingTokenLoss()
Explanation: We will be using a custom loss function that will ignore the loss from padded tokens.
End of explanation
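A small optional check of the masking behaviour: positions labelled 0 ([PAD]) should contribute nothing to the average loss. The tensors below are toy values, not real data.
toy_true = tf.constant([[2, 1, 0, 0]])  # last two positions are padding
toy_pred = tf.nn.softmax(tf.random.uniform((1, 4, num_tags)), axis=-1)
print(loss(toy_true, toy_pred).numpy())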
ner_model.compile(optimizer="adam", loss=loss)
ner_model.fit(train_dataset, epochs=10)
def tokenize_and_convert_to_ids(text):
tokens = text.split()
return lowercase_and_convert_to_ids(tokens)
# Sample inference using the trained model
sample_input = tokenize_and_convert_to_ids(
"eu rejects german call to boycott british lamb"
)
sample_input = tf.reshape(sample_input, shape=[1, -1])
print(sample_input)
output = ner_model.predict(sample_input)
prediction = np.argmax(output, axis=-1)[0]
prediction = [mapping[i] for i in prediction]
# eu -> B-ORG, german -> B-MISC, british -> B-MISC
print(prediction)
Explanation: Compile and fit the model
End of explanation
def calculate_metrics(dataset):
all_true_tag_ids, all_predicted_tag_ids = [], []
for x, y in dataset:
output = ner_model.predict(x)
predictions = np.argmax(output, axis=-1)
predictions = np.reshape(predictions, [-1])
true_tag_ids = np.reshape(y, [-1])
mask = (true_tag_ids > 0) & (predictions > 0)
true_tag_ids = true_tag_ids[mask]
predicted_tag_ids = predictions[mask]
all_true_tag_ids.append(true_tag_ids)
all_predicted_tag_ids.append(predicted_tag_ids)
all_true_tag_ids = np.concatenate(all_true_tag_ids)
all_predicted_tag_ids = np.concatenate(all_predicted_tag_ids)
predicted_tags = [mapping[tag] for tag in all_predicted_tag_ids]
real_tags = [mapping[tag] for tag in all_true_tag_ids]
evaluate(real_tags, predicted_tags)
calculate_metrics(val_dataset)
Explanation: Metrics calculation
Here is a function to calculate the metrics. The function calculates F1 score for the
overall NER dataset as well as individual scores for each NER tag.
End of explanation |
12,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutor Magic in IPython
This notebook demonstrates the %%tutor magic as used in IPython notebook.
First, you'll need the following installed
Step1: Finally, put a %%tutor at the top of any cell with Python code, and watch the visualization | Python Code:
from metakernel import register_ipython_magics
register_ipython_magics()
Explanation: Tutor Magic in IPython
This notebook demonstrates the %%tutor magic as used in IPython notebook.
First, you'll need the following installed:
IPython/Jupyter
Metakernel
Next, you'll need to use the magics in IPython:
End of explanation
%%tutor
mylist = []
for i in range(10):
mylist.append(i ** 2)
Explanation: Finally, put a %%tutor at the top of any cell with Python code, and watch the visualization:
End of explanation |
12,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised learning for classification
絊äžå $x$, åä»çåé¡ïŒæåæŸåºèšç® x çåé¡çæ¹åŒ
One hot encoding
åŠææåæäžé¡çš®é¡å¥ïŒ æåå¯ä»¥äŸç·šç¢Œéäžåé¡å¥
* $(1,0,0)$
* $(0,1,0)$
* $(0,0,1)$
åé¡
çºä»éºŒäžçŽæ¥çš 1,2,3 éæš£ç線碌å¢ïŒ
Softmax Regression çæš¡åæ¯é暣ç
æåç茞å
¥ $x=\begin{pmatrix} x_0 \ x_1 \ x_2 \ x_3 \end{pmatrix} $ æ¯äžååéïŒæåçæ column vector 奜äº
è Weight
Step1: ä»»åïŒèšç®æåŸççæž¬æ©ç $q$
Hint
Step2: ç·Žç¿
èšèšäžå網路
Step3: ç·Žç¿
èšèšäžå網路
Step4: Gradient descent
èª€å·®åœæž
çºäºèŠè©æ·æåçé æž¬çå質ïŒèŠèšèšäžåè©æ·èª€å·®çæ¹åŒ
åèšèŒžå
¥åŒ $x$ å°æå°çç寊é¡å¥æ¯ $y$, 飿åå®çŸ©èª€å·®åœæž
$ loss = -\log(q_y)=- \log(Predict_{W,b}(Y=y|x)) $
éåæ¹æ³å«å Cross entropy
å
¶å¯Šæ¯èŒäžè¬äœæ¯èŒè€éäžé»çå¯«æ³æ¯
$ loss = - \sum_i p_i\log(q_i) = - p \cdot \log q$
å
¶äž $i$ æ¯ææé¡å¥ïŒ è $ p_i = \Pr(Y=i|x) $ æ¯ç寊çŒççæ©ç
äœæåç®å $x$ å°æå°çç寊é¡å¥æ¯ $y$ïŒ æä»¥çŽæ¥ $p_i = 1$
æ³èŸŠæ³æ¹é²ã
æåçšäžçš®è¢«çš±äœæ¯ gradient descent çæ¹åŒäŸæ¹åæåç誀差ã
å çºæåç¥é gradient æ¯è®åœæžäžåæå¿«çæ¹åãæä»¥æååŠææ gradient çåæ¹åèµ°äžé»é»ïŒä¹å°±æ¯äžéæå¿«çæ¹åïŒïŒé£éºŒåŸå°çåœæžåŒæè©²æå°äžé»ã
èšåŸæåçè®æžæ¯ $W$ å $b$ (è£¡é¢æäžå W_i,j b_i éäºè®æž)ïŒæä»¥æåèŠæ $loss$ å° $W$ å $b$ 裡é¢çæ¯äžååæžäŸå埮åã
é奜éåååŸ®åæ¯å¯ä»¥çšæç®åºä»ç圢åŒïŒèæåŸå埮åçåŒåä¹äžæåŸè€éã
$loss$ å±éåŸå¯ä»¥å¯«æ
$loss = -\log(q_y) = \log(\sum_j d_j) - d_i \
= \log(\sum_j e^{W_j x + b_j}) - W_i x - b_i$
泚æ $d_j = e^{W_j x + b_j}$ åªæè®æž $b_j, W_j$
å° $k \neq i$ æ, $loss$ å° $b_k$ çååŸ®åæ¯
$$ \frac{e^{W_k x + b_k}}{\sum_j e^{W_j x + b_j}} = q_k$$
å° $k = i$ æ, $loss$ å° $b_k$ çååŸ®åæ¯
$$ q_k - 1$$
å° $W$ çå埮åä¹äžé£
å° $k \neq i$ æ, $loss$ å° $W_{k,t}$ çååŸ®åæ¯
$$ \frac{e^{W_k x + b_k} x_t}{\sum_j e^{W_j x + b_j}} = q_k x_t$$
å° $k = i$ æ, $loss$ å° $W_{k,t}$ çååŸ®åæ¯
$$ q_k x_t - x_t$$
寊åéšä»œ
Step5: åé¡
W, b ç size çºä»éºŒèŠé暣èšå®ïŒ
ä»»åïŒ éšäŸ¿èšå®äžçµ x, y, æåäŸè·è·ç gradient descent
Step6: æ¥é©ïŒèšç® q
Step7: æ¥é©ïŒ èšç® loss
Step8: æ¥é©ïŒèšç®å° b ç gradient
Step9: æ¥é©ïŒèšç®å° W ç gradient
Step10: æ¥é©ïŒæŽæ° W, b åæžæ 0.5 * gradientïŒ ç¶åŸççæ°ç loss æ¯åп鲿¥äºïŒ
Step11: äžæ¬¡èšç·Žå€çµè³æ
äžé¢åªéå°äžçµ x (i=14) äŸèšç·ŽïŒåŠæäžæ¬¡å°ææ x èšç·Žå¢ïŒ
éåžžæåææçµå¥æŸåš axis-0
Step12: ä»»åïŒ å°èšç·Žåéå
Step13: å°ç
§
python
d = np.exp(W @ x + b)
q = d/d.sum()
q
Step14: å°ç
§
python
loss = -np.log(q[y])
loss
Step15: å°ç
§
python
grad_b = q - np.eye(3)[y][
Step16: å°ç
§
python
grad_W = grad_b @ x.T
Step17: ä»»åïŒå
šéšååšäžèµ·
èšå® W,b
èšå® X
èšç·Žäžå次
èšç® q å loss
èšç® grad_b å grad_W
æŽæ° W, b
ççæºç¢ºåºŠ | Python Code:
# Weight
W = Matrix([1,2],[3,4], [5,6])
W
# Bias
b = Vector(1,0,-1)
b
# 茞å
¥
x = Vector(2,-1)
x
Explanation: Supervised learning for classification
絊äžå $x$, åä»çåé¡ïŒæåæŸåºèšç® x çåé¡çæ¹åŒ
One hot encoding
åŠææåæäžé¡çš®é¡å¥ïŒ æåå¯ä»¥äŸç·šç¢Œéäžåé¡å¥
* $(1,0,0)$
* $(0,1,0)$
* $(0,0,1)$
åé¡
çºä»éºŒäžçŽæ¥çš 1,2,3 éæš£ç線碌å¢ïŒ
Softmax Regression çæš¡åæ¯é暣ç
æåç茞å
¥ $x=\begin{pmatrix} x_0 \ x_1 \ x_2 \ x_3 \end{pmatrix} $ æ¯äžååéïŒæåçæ column vector 奜äº
è Weight: $W = \begin{pmatrix} W_0 \ W_1 \ W_2 \end{pmatrix} =
\begin{pmatrix} W_{0,0} & W_{0,1} & W_{0,2} & W_{0,3}\
W_{1,0} & W_{1,1} & W_{1,2} & W_{1,3} \
W_{2,0} & W_{2,1} & W_{2,2} & W_{2,3} \end{pmatrix} $
Bias: $b=\begin{pmatrix} b_0 \ b_1 \ b_2 \end{pmatrix} $
æåå
èšç®"ç·æ§èŒžåº" $ c = \begin{pmatrix} c_0 \ c_1 \ c_2 \end{pmatrix} = Wx+b =
\begin{pmatrix} W_0 x + b_0 \ W_1 x + b_1 \ W_2 x + b_2 \end{pmatrix} $ïŒ ç¶åŸåå $exp$ (éé
å)ã æåŸåŸå°äžååéã
$d = \begin{pmatrix} d_0 \ d_1 \ d_2 \end{pmatrix} = e^{W x + b} = \begin{pmatrix} e^{c_0} \ e^{c_1} \ e^{c_2} \end{pmatrix}$
å°éäºæžåŒé€ä»¥ä»åççžœåã
絊å®èŒžå
¥ xïŒ æååžæç®åºäŸçæžå q_i æç¬Šå x çé¡å¥æ¯ i çæ©çã
$q_i = Predict_{W,b}(Y=i|x) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}} = \frac {d_i} {\sum_j d_j}$
åèµ·äŸçïŒå°±æ¯ $q = \frac {d} {\sum_j d_j} $
åé¡
çºä»éºŒèŠçš $exp$?
å
éšäŸ¿ç®äžå $\mathbb{R}^2 \rightarrow \mathbb{R}^3$ ç網路
End of explanation
# è«åšé裡èšç®
# åèçæ¡
#%load solutions/softmax_compute_q.py
%run -i solutions/softmax_compute_q.py
# 顯瀺 q
q
Explanation: ä»»åïŒèšç®æåŸççæž¬æ©ç $q$
Hint: np.exp å¯ä»¥ç® $exp$
End of explanation
# Hint äžé¢ç¢çæžå i ç 2 é²äœåé
i = 13
x = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2)
x
# è«åšé裡èšç®
# åèçæ¡
#%load solutions/softmax_mod4.py
Explanation: ç·Žç¿
èšèšäžå網路:
* 茞å
¥æ¯äºé²äœ 0 ~ 15
* 茞åºäŸç
§å°æŒ 4 ç逿žåæåé¡
Hint: å¯ä»¥åèäžé¢ W, b çèšå®æ¹åŒ
End of explanation
# è«åšé裡èšç®
# åèçæ¡
#%load solutions/softmax_mod3.py
Explanation: ç·Žç¿
èšèšäžå網路:
* 茞å
¥æ¯äºé²äœ 0 ~ 15
* 茞åºäŸç
§å°æŒ 3 ç逿žåæäžé¡
Hint: äžçšå
šéšæ£ç¢ºïŒçšççïŒäœæ£ç¢ºçèŠæ¯äºçé«ãå¯ä»¥å©çšçµ±èšççµæçççã
End of explanation
# å
ç¢çéšæ©ç W å b
W = Matrix(np.random.normal(size=(3,4)))
b = Vector(np.random.normal(size=(3,)))
W
b
Explanation: Gradient descent
èª€å·®åœæž
çºäºèŠè©æ·æåçé æž¬çå質ïŒèŠèšèšäžåè©æ·èª€å·®çæ¹åŒ
åèšèŒžå
¥åŒ $x$ å°æå°çç寊é¡å¥æ¯ $y$, 飿åå®çŸ©èª€å·®åœæž
$ loss = -\log(q_y)=- \log(Predict_{W,b}(Y=y|x)) $
éåæ¹æ³å«å Cross entropy
å
¶å¯Šæ¯èŒäžè¬äœæ¯èŒè€éäžé»çå¯«æ³æ¯
$ loss = - \sum_i p_i\log(q_i) = - p \cdot \log q$
å
¶äž $i$ æ¯ææé¡å¥ïŒ è $ p_i = \Pr(Y=i|x) $ æ¯ç寊çŒççæ©ç
äœæåç®å $x$ å°æå°çç寊é¡å¥æ¯ $y$ïŒ æä»¥çŽæ¥ $p_i = 1$
æ³èŸŠæ³æ¹é²ã
æåçšäžçš®è¢«çš±äœæ¯ gradient descent çæ¹åŒäŸæ¹åæåç誀差ã
å çºæåç¥é gradient æ¯è®åœæžäžåæå¿«çæ¹åãæä»¥æååŠææ gradient çåæ¹åèµ°äžé»é»ïŒä¹å°±æ¯äžéæå¿«çæ¹åïŒïŒé£éºŒåŸå°çåœæžåŒæè©²æå°äžé»ã
èšåŸæåçè®æžæ¯ $W$ å $b$ (è£¡é¢æäžå W_i,j b_i éäºè®æž)ïŒæä»¥æåèŠæ $loss$ å° $W$ å $b$ 裡é¢çæ¯äžååæžäŸå埮åã
é奜éåååŸ®åæ¯å¯ä»¥çšæç®åºä»ç圢åŒïŒèæåŸå埮åçåŒåä¹äžæåŸè€éã
$loss$ å±éåŸå¯ä»¥å¯«æ
$loss = -\log(q_y) = \log(\sum_j d_j) - d_i \
= \log(\sum_j e^{W_j x + b_j}) - W_i x - b_i$
泚æ $d_j = e^{W_j x + b_j}$ åªæè®æž $b_j, W_j$
å° $k \neq i$ æ, $loss$ å° $b_k$ çååŸ®åæ¯
$$ \frac{e^{W_k x + b_k}}{\sum_j e^{W_j x + b_j}} = q_k$$
å° $k = i$ æ, $loss$ å° $b_k$ çååŸ®åæ¯
$$ q_k - 1$$
å° $W$ çå埮åä¹äžé£
å° $k \neq i$ æ, $loss$ å° $W_{k,t}$ çååŸ®åæ¯
$$ \frac{e^{W_k x + b_k} x_t}{\sum_j e^{W_j x + b_j}} = q_k x_t$$
å° $k = i$ æ, $loss$ å° $W_{k,t}$ çååŸ®åæ¯
$$ q_k x_t - x_t$$
寊åéšä»œ
End of explanation
i = 14
x = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2)
y = i%3
Explanation: åé¡
W, b ç size çºä»éºŒèŠé暣èšå®ïŒ
ä»»åïŒ éšäŸ¿èšå®äžçµ x, y, æåäŸè·è·ç gradient descent
End of explanation
# è«åšé裡èšç®
# åèçæ¡(è·åé¢äžæš£)¶
#%load solutions/softmax_compute_q.py
%run -i solutions/softmax_compute_q.py
#顯瀺 q
q
Explanation: æ¥é©ïŒèšç® q
End of explanation
# è«åšé裡èšç®
# åèçæ¡(è·åé¢äžæš£)
%run -i solutions/softmax_compute_loss1.py
#顯瀺 loss
loss
Explanation: æ¥é©ïŒ èšç® loss
End of explanation
# è«åšé裡èšç® grad_b
#åèçæ¡
%run -i solutions/softmax_compute_grad_b.py
grad_b
Explanation: æ¥é©ïŒèšç®å° b ç gradient
End of explanation
# è«åšé裡èšç®
#åèçæ¡
%run -i solutions/softmax_compute_grad_W.py
grad_W
Explanation: æ¥é©ïŒèšç®å° W ç gradient
End of explanation
# è«åšé裡èšç®
# åèçæ¡
%run -i solutions/softmax_update_Wb.py
# åå
ç q
q
# åå
ç loss
loss
# çŸåšç loss
%run -i solutions/softmax_compute_q.py
%run -i solutions/softmax_compute_loss1.py
loss
q
Explanation: æ¥é©ïŒæŽæ° W, b åæžæ 0.5 * gradientïŒ ç¶åŸççæ°ç loss æ¯åп鲿¥äºïŒ
End of explanation
X = np.array([Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2) for i in range(16)])
for i in range(4):
print("i=", i)
display(X[i])
X
# å°æççµå¥
y = np.array([i%3 for i in range(16)])
y
Explanation: äžæ¬¡èšç·Žå€çµè³æ
äžé¢åªéå°äžçµ x (i=14) äŸèšç·ŽïŒåŠæäžæ¬¡å°ææ x èšç·Žå¢ïŒ
éåžžæåææçµå¥æŸåš axis-0
End of explanation
# è«åšé裡èšç®
# åèè§£çåŠåŸ
Explanation: ä»»åïŒ å°èšç·Žåéå
End of explanation
d = np.exp(W @ X + b)
q = d/d.sum(axis=(1,2), keepdims=True)
q
Explanation: å°ç
§
python
d = np.exp(W @ x + b)
q = d/d.sum()
q
End of explanation
loss = -np.log(q[range(len(y)), y])
loss
# çšå¹³åç¶ææåçæ£ç loss
loss.mean()
Explanation: å°ç
§
python
loss = -np.log(q[y])
loss
End of explanation
# fancy indexing :p
one_y = np.eye(3)[y][..., None]
grad_b_all = q - one_y
grad_b = grad_b_all.mean(axis=0)
grad_b
Explanation: å°ç
§
python
grad_b = q - np.eye(3)[y][:, None]
End of explanation
grad_W_all = grad_b_all @ X.swapaxes(1,2)
grad_W = grad_W_all.mean(axis=0)
grad_W
W -= 0.5 * grad_W
b -= 0.5 * grad_b
# ä¹åç loss
loss.mean()
d = np.exp(W @ X + b)
q = d/d.sum(axis=(1,2), keepdims=True)
loss = -np.log(q[range(len(y)), y])
loss.mean()
Explanation: å°ç
§
python
grad_W = grad_b @ x.T
End of explanation
# åšé裡èšç®
# åèçæ¡
%run -i solutions/softmax_train.py
# ç«åº loss çæ²ç·
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(loss_history);
# å°çæ¡
display((W @ X + b).argmax(axis=1).ravel())
display(y)
Explanation: ä»»åïŒå
šéšååšäžèµ·
èšå® W,b
èšå® X
èšç·Žäžå次
èšç® q å loss
èšç® grad_b å grad_W
æŽæ° W, b
ççæºç¢ºåºŠ
End of explanation |
12,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detección de anomalÃas
La detección de anomalÃas (anomaly detection, AD) es una tarea de aprendizaje automático que consiste en detectar outliers o datos fuera de rango.
An outlier is an observation in a data set which appears to be inconsistent with the remainder of that set of data.
Johnson 1992
An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.
Outlier/Anomaly
Hawkins 1980
Tipos de entornos en los que se produce la detección de anomalÃas
AD supervisada
Las etiquetas están disponibles, tanto para casos normales como para casos anómalos.
En cierto modo, similar a minerÃa de clases poco comunes o clasificación no balanceada.
AD Semi-supervisada (detección de novedades, Novelty Detection)
Durante el entrenamiento, solo tenemos datos normales.
El algoritmo aprende únicamente usando los datos normales.
AD no supervisada (detección de outliers, Outlier Detection)
No hay etiquetas y el conjunto de entrenamiento tiene datos normales y datos anómalos.
Asunción
Step1: Vamos a familiarizarnos con la detección de anomalÃas no supervisada. Para visualizar la salida de los distintos algoritmos, vamos a considerar un dataset bidimensional que consiste en una mixtura de Gaussianas.
Generando el dataset
Step2: Detección de anomalÃas con estimación de densidad
Step3: One-Class SVM
El problema de usar la estimación de densidad es que es ineficiente cuando la dimensionalidad de los datos es demasiado alta. El algoritmo one-class SVM si que puede utilizarse en estos casos.
Step4: Vectores soporte o outliers
En el one-class SVM, no todos los vectores soporte son outliers
Step5: Solo los vectores soporte sirven a la hora de calcular la función de decisión del One-Class SVM.
Ahora vamos a representar la función de decisión del One-Class SVM como hicimos con la densidad y vamos a marcar los vectores soporte.
Step6: <div class="alert alert-success">
<b>EJERCICIO</b>
Step7: <div class="alert alert-success">
<b>EJERCICIO</b>
Step8: La base de datos de dÃgitos consiste en imágenes de 8x8 valores de gris.
Step9: Para usar las imágenes como patrones de entrenamiento, tenemos que pasarlas a vector
Step10: Vamos a centrarnos en el dÃgito 5.
Step11: Vamos a usar IsolationForest para encontrar el 5% de imágenes más anómalas y representarlas
Step12: Sacamos el grado de anormalidad utilizando iforest.decision_function. Cuanto más bajo, más anómalo.
Step13: Dibujemos los 10 ejemplos más "normales" (inliers)
Step14: Ahora vamos a dibujar los outliers | Python Code:
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
Explanation: Detección de anomalÃas
La detección de anomalÃas (anomaly detection, AD) es una tarea de aprendizaje automático que consiste en detectar outliers o datos fuera de rango.
An outlier is an observation in a data set which appears to be inconsistent with the remainder of that set of data.
Johnson 1992
An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.
Outlier/Anomaly
Hawkins 1980
Tipos de entornos en los que se produce la detección de anomalÃas
AD supervisada
Las etiquetas están disponibles, tanto para casos normales como para casos anómalos.
En cierto modo, similar a minerÃa de clases poco comunes o clasificación no balanceada.
AD Semi-supervisada (detección de novedades, Novelty Detection)
Durante el entrenamiento, solo tenemos datos normales.
El algoritmo aprende únicamente usando los datos normales.
AD no supervisada (detección de outliers, Outlier Detection)
No hay etiquetas y el conjunto de entrenamiento tiene datos normales y datos anómalos.
Asunción: los datos anómalos son poco frecuentes.
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_features=2, centers=3, n_samples=500,
random_state=666)
X.shape
plt.figure()
plt.scatter(X[:, 0], X[:, 1])
plt.show()
Explanation: Vamos a familiarizarnos con la detección de anomalÃas no supervisada. Para visualizar la salida de los distintos algoritmos, vamos a considerar un dataset bidimensional que consiste en una mixtura de Gaussianas.
Generando el dataset
End of explanation
from sklearn.neighbors import KernelDensity
# Estimador de densidad Gaussiano
kde = KernelDensity(kernel='gaussian')
kde = kde.fit(X)
kde
kde_X = kde.score_samples(X)
print(kde_X.shape) # nos proporciona la verosimilitud de los datos. Cuanto más baja, más anómalo
from scipy.stats.mstats import mquantiles
alpha_set = 0.95
tau_kde = mquantiles(kde_X, 1. - alpha_set)
n_samples, n_features = X.shape
X_range = np.zeros((n_features, 2))
X_range[:, 0] = np.min(X, axis=0) - 1.
X_range[:, 1] = np.max(X, axis=0) + 1.
h = 0.1 # Tamaño de paso de la rejilla
x_min, x_max = X_range[0]
y_min, y_max = X_range[1]
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
grid = np.c_[xx.ravel(), yy.ravel()]
Z_kde = kde.score_samples(grid)
Z_kde = Z_kde.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_kde, levels=tau_kde, colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={tau_kde[0]: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.legend()
plt.show()
Explanation: Detección de anomalÃas con estimación de densidad
End of explanation
from sklearn.svm import OneClassSVM
nu = 0.05 # Resultados teóricos dicen que hay un 5% de datos anómalos
ocsvm = OneClassSVM(kernel='rbf', gamma=0.05, nu=nu)
ocsvm.fit(X)
X_outliers = X[ocsvm.predict(X) == -1]
Z_ocsvm = ocsvm.decision_function(grid)
Z_ocsvm = Z_ocsvm.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_ocsvm, levels=[0], colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={0: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.scatter(X_outliers[:, 0], X_outliers[:, 1], color='red')
plt.legend()
plt.show()
Explanation: One-Class SVM
El problema de usar la estimación de densidad es que es ineficiente cuando la dimensionalidad de los datos es demasiado alta. El algoritmo one-class SVM si que puede utilizarse en estos casos.
End of explanation
X_SV = X[ocsvm.support_]
n_SV = len(X_SV)
n_outliers = len(X_outliers)
print('{0:.2f} <= {1:.2f} <= {2:.2f}?'.format(1./n_samples*n_outliers, nu, 1./n_samples*n_SV))
Explanation: Vectores soporte o outliers
En el one-class SVM, no todos los vectores soporte son outliers:
End of explanation
plt.figure()
plt.contourf(xx, yy, Z_ocsvm, 10, cmap=plt.cm.Blues_r)
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.scatter(X_SV[:, 0], X_SV[:, 1], color='orange')
plt.show()
Explanation: Solo los vectores soporte sirven a la hora de calcular la función de decisión del One-Class SVM.
Ahora vamos a representar la función de decisión del One-Class SVM como hicimos con la densidad y vamos a marcar los vectores soporte.
End of explanation
from sklearn.ensemble import IsolationForest
iforest = IsolationForest(n_estimators=300, contamination=0.10)
iforest = iforest.fit(X)
from scipy.stats import scoreatpercentile
Z_iforest = iforest.decision_function(grid)
Z_iforest = Z_iforest.reshape(xx.shape)
threshold = -scoreatpercentile(-iforest.decision_function(X), 100. * (1. - iforest.contamination))
plt.figure()
c_0 = plt.contour(xx, yy, Z_iforest,
levels=[threshold],
colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15,
fmt={threshold: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.legend()
plt.show()
Explanation: <div class="alert alert-success">
<b>EJERCICIO</b>:
<ul>
<li>
**Cambia** el parámetro `gamma` y comprueba como afecta la función de decisión.
</li>
</ul>
</div>
Isolation Forest
El algoritmo Isolation Forest es un algoritmo de AD basado en árboles. Construye un determinado número de árboles aleatorios y su idea principal es que si un ejemplo es una anomalÃa, entonces deberÃa aparecer aislado en la hoja de un árbol tras algunas particiones. El Isolation Forest deriva una puntuación de anormalidad basada en la profundidad del árbol en la cuál términos los ejemplos anómalos.
End of explanation
from sklearn import datasets
digits = datasets.load_digits()
Explanation: <div class="alert alert-success">
<b>EJERCICIO</b>:
<ul>
<li>
Ilustra gráficamente la influencia del número de árboles en la suavidad de la función de decisión
</li>
</ul>
</div>
Aplicación al dataset de dÃgitos
Ahora vamos a aplicar el IsolationForest para intentar localizar dÃgitos que han sido escritos de modo poco convencional.
End of explanation
images = digits.images
labels = digits.target
images.shape
i = 102
plt.figure(figsize=(2, 2))
plt.title('{0}'.format(labels[i]))
plt.axis('off')
plt.imshow(images[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Explanation: La base de datos de dÃgitos consiste en imágenes de 8x8 valores de gris.
End of explanation
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
data.shape
X = data
y = digits.target
X.shape
Explanation: Para usar las imágenes como patrones de entrenamiento, tenemos que pasarlas a vector:
End of explanation
X_5 = X[y == 5]
X_5.shape
fig, axes = plt.subplots(1, 5, figsize=(10, 4))
for ax, x in zip(axes, X_5[:5]):
img = x.reshape(8, 8)
ax.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
Explanation: Vamos a centrarnos en el dÃgito 5.
End of explanation
from sklearn.ensemble import IsolationForest
iforest = IsolationForest(contamination=0.05)
iforest = iforest.fit(X_5)
Explanation: Vamos a usar IsolationForest para encontrar el 5% de imágenes más anómalas y representarlas:
End of explanation
iforest_X = iforest.decision_function(X_5)
plt.hist(iforest_X);
Explanation: Sacamos el grado de anormalidad utilizando iforest.decision_function. Cuanto más bajo, más anómalo.
End of explanation
X_strong_inliers = X_5[np.argsort(iforest_X)[-10:]]
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
for i, ax in zip(range(len(X_strong_inliers)), axes.ravel()):
ax.imshow(X_strong_inliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
Explanation: Dibujemos los 10 ejemplos más "normales" (inliers):
End of explanation
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
X_outliers = X_5[iforest.predict(X_5) == -1]
for i, ax in zip(range(len(X_outliers)), axes.ravel()):
ax.imshow(X_outliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
Explanation: Ahora vamos a dibujar los outliers:
End of explanation |
12,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The $k$-Nearest Neighbor Classification Algorithm
Notebook version
Step1: 1. The binary classification problem.
In a binary classification problem, we are given an observation vector ${\bf x}\in \mathbb{R}^N$ which is known to belong to one and only one category or class, $y$, in the set ${\mathcal Y} = {0, 1}$. The goal of a classifier system is to predict the value of $y$ based on ${\bf x}$.
To design the classifier, we are given a collection of labelled observations ${\mathcal D} = {({\bf x}k, y_k)}{k=0}^{K-1}$ where, for each observation ${\bf x}k$, the value of its true category, $y_k$, is known. All samples are outcomes of an unknown distribution $p{{\bf X},Y}({\bf x}, y)$.
2. The Iris dataset
(Iris dataset presentation is based on this <a href=http
Step2: 2.1. Train/test split
Next, we will split the data into two sets
Step3: We can use this function to get a data split. An expected ratio of 67/33 samples for train/test will be used. However, note that, because of the way samples are assigned to the train or test datasets, the exact number of samples in each partition will differ if you run the code several times.
Step4: 2.2. Versicolor vs Virginica
In the following, we will design a classifier to separate classes "Versicolor" and "Virginica" using $x_0$ and $x_1$ only. To do so, we build a training set with samples from these categories, and a bynary label $y^{(k)} = 1$ for samples in class "Virginica", and $0$ for "Versicolor" data.
Step5: A scatter plot is useful to get some insights on the difficulty of the classification problem
Step6: 3. Baseline Classifier
Step7: The maximum a priori classifier assigns any sample ${\bf x}$ to the most frequent class in the training set. Therefore, the class prediction $y$ for any sample ${\bf x}$ is
Step8: The error rate for this baseline classifier is
Step10: The error rate of the baseline classifier is a simple benchmark for classification. Since the maximum a priori decision is independent on the observation, ${\bf x}$, any classifier based on ${\bf x}$ should have a better (or, at least, not worse) performance than the baseline classifier.
4. The Nearest-Neighbour Classifier (1-NN).
The 1-NN classifier assigns any instance ${\bf x}$ to the category of the nearest neighbor in the training set.
$$
d = f({\bf x}) = y_n, {\rm~where} \
n = \arg \min_k \|{\bf x}-{\bf x}_k\|
$$
In case of ties (i.e. if there is more than one instance at minimum distance) the class of one of them, taken arbitrarily, is assigned to ${\bf x}$.
Step11: Let us apply the 1-NN classifier to the given dataset. First, we will show the decision regions of the classifier. To do so, we compute the classifier output for all points in a rectangular grid from the sample space.
Step12: Now we plot the results
Step13: We can observe that the decision boudary of the 1-NN classifier is rather intricate, and it may contain small islands covering one or few samples from one class. Actually, the extension of these small regions usually reduces as we have more training samples, though the number of them may increase.
Now we compute the error rates over the training and test sets.
Step14: The training and test error rates of the 1-NN may be significantly different. In fact, the training error may go down to zero if samples do not overlap. In the selected problem, this is not the case, because samples from different classes coincide at the same point, causing some classification errors.
4.1. Consistency of the 1-NN classifier
Despite the 1-NN usually reduces the error rate with respect to the baseline classifier, the number of errors may be too large. Errors may be attributed to diferent causes
Step16: 5. $k$-NN classifier
A simple extension of the 1-NN classifier is the $k$-NN classifier, which, for any input sample ${\bf x}$, computes the $k$ closest neighbors in the training set, and takes the majority class in the subset. To avoid ties, in the binary classification case $k$ is usually taken as an odd number.
The following method implements the $k$-NN classifiers.
Step17: Now, we can plot the decision boundaries for different values of $k$
Step18: 5.1. Influence of $k$
We can analyze the influence of parameter $k$ by observing both traning and test errors.
Step19: Exercise 2
Step20: However, using the test set to select the optimal value of the hyperparameter $k$ is not allowed. Instead, we should recur to cross validation.
5.2 Hyperparameter selection via cross-validation
An inconvenient of the application of the $k$-NN method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we noticed that the location of the minimum is not necessarily the same from the perspective of the test and training data. Ideally, we would like that the designed classification model works as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as generalization.
Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One of such approaches is known as <b>cross-validation</b>.
Since using the test labels during the training phase is not allowed (they should be kept aside to simultate the future application of the classification model on unseen patterns), we need to figure out some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so by following the following steps
Step21: 6. Scikit-learn implementation
In practice, most well-known machine learning methods are implemented and available for python. Probably, the most complete library for machine learning is <a href=http
Step22: <a href = http | Python Code:
# To visualize plots in the notebook
%matplotlib inline
# Import some libraries that will be necessary for working with data and displaying plots
import csv # To read csv files
import random
import matplotlib.pyplot as plt
import numpy as np
from scipy import spatial
from sklearn import neighbors, datasets
Explanation: The $k$-Nearest Neighbor Classification Algorithm
Notebook version: 2.2 (Oct 25, 2020)
Author: Jesús Cid Sueiro ([email protected])
Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
v.1.1 - Function loadDataset updated to work with any number of dimensions
v.2.0 - Compatible with Python 3 (backcompatible with Python 2.7)
Added solution to Exercise 3
v.2.1 - Minor corrections regarding notation
v.2.2 - Adaptation for slides conversion
End of explanation
# Taken from Jason Brownlee notebook.
with open('datasets/iris.data', 'r') as csvfile:
lines = csv.reader(csvfile)
for row in lines:
print(', '.join(row))
Explanation: 1. The binary classification problem.
In a binary classification problem, we are given an observation vector ${\bf x}\in \mathbb{R}^N$ which is known to belong to one and only one category or class, $y$, in the set ${\mathcal Y} = {0, 1}$. The goal of a classifier system is to predict the value of $y$ based on ${\bf x}$.
To design the classifier, we are given a collection of labelled observations ${\mathcal D} = {({\bf x}k, y_k)}{k=0}^{K-1}$ where, for each observation ${\bf x}k$, the value of its true category, $y_k$, is known. All samples are outcomes of an unknown distribution $p{{\bf X},Y}({\bf x}, y)$.
2. The Iris dataset
(Iris dataset presentation is based on this <a href=http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/> Tutorial </a> by <a href=http://machinelearningmastery.com/about/> Jason Brownlee</a>)
To illustrate the algorithms, we will consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository </a>. Quoted from the dataset description:
This is perhaps the best known database to be found in the pattern recognition literature. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. [...] One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
The class is the species, which is one of setosa, versicolor or virginica. Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
End of explanation
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
xTrain = []
cTrain = []
xTest = []
cTest = []
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:-1])
cTrain.append(item[-1])
else:
xTest.append(item[0:-1])
cTest.append(item[-1])
return xTrain, cTrain, xTest, cTest
Explanation: 2.1. Train/test split
Next, we will split the data into two sets:
Training set, that will be used to learn the classification model
Test set, that will be used to evaluate the classification performance
The data partition must be random, in such a way that the statistical distribution of both datasets is the same.
The code fragment below defines a function loadDataset that loads the data in a CSV with the provided filename, converts the flower measures (that were loaded as strings) into numbers and, finally, it splits the data into a training and test sets.
End of explanation
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('datasets/iris.data', 0.67)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', nTrain_all)
print('Test:', nTest_all)
Explanation: We can use this function to get a data split. An expected ratio of 67/33 samples for train/test will be used. However, note that, because of the way samples are assigned to the train or test datasets, the exact number of samples in each partition will differ if you run the code several times.
End of explanation
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [0, 1]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [c for c in cTrain_all if c==c0 or c==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [c for c in cTest_all if c==c0 or c==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
# Separate components of x into different arrays (just for the plots)
x0c0 = [X_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [X_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [X_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [X_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
Explanation: 2.2. Versicolor vs Virginica
In the following, we will design a classifier to separate classes "Versicolor" and "Virginica" using $x_0$ and $x_1$ only. To do so, we build a training set with samples from these categories, and a binary label $y^{(k)} = 1$ for samples in class "Virginica", and $0$ for "Versicolor" data.
End of explanation
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.show()
Explanation: A scatter plot is useful to get some insights on the difficulty of the classification problem
End of explanation
print(f'Class 0 ({c0}): {n_tr - sum(Y_tr)} samples')
print(f'Class 1 ({c1}): {sum(Y_tr)} samples')
Explanation: 3. Baseline Classifier: Maximum A Priori.
For the selected data set, we have two classes and a dataset with the following class proportions:
End of explanation
y = int(2*sum(Y_tr) > n_tr)
print(f'y = {y} ({c1 if y==1 else c0})')
Explanation: The maximum a priori classifier assigns any sample ${\bf x}$ to the most frequent class in the training set. Therefore, the class prediction $y$ for any sample ${\bf x}$ is
End of explanation
# Training and test error arrays
E_tr = (Y_tr != y)
E_tst = (Y_tst != y)
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('Pe(train):', pe_tr)
print('Pe(test):', pe_tst)
Explanation: The error rate for this baseline classifier is:
End of explanation
def nn_classifier(X1, Y1, X2):
    """
    Compute the 1-NN classification for the observations contained in
    the rows of X2, for the training set given by the rows in X1 and the
    class labels contained in Y1.
    """
if X1.ndim == 1:
X1 = np.asmatrix(X1).T
if X2.ndim == 1:
X2 = np.asmatrix(X2).T
distances = spatial.distance.cdist(X1,X2,'euclidean')
neighbors = np.argsort(distances, axis=0, kind='quicksort', order=None)
closest = neighbors[0,:]
y_values = np.zeros([X2.shape[0],1])
for idx in range(X2.shape[0]):
y_values[idx] = Y1[closest[idx]]
return y_values
Explanation: The error rate of the baseline classifier is a simple benchmark for classification. Since the maximum a priori decision is independent on the observation, ${\bf x}$, any classifier based on ${\bf x}$ should have a better (or, at least, not worse) performance than the baseline classifier.
4. The Nearest-Neighbour Classifier (1-NN).
The 1-NN classifier assigns any instance ${\bf x}$ to the category of the nearest neighbor in the training set.
$$
d = f({\bf x}) = y_n, {\rm~where} \
n = \arg \min_k \|{\bf x}-{\bf x}_k\|
$$
In case of ties (i.e. if there is more than one instance at minimum distance) the class of one of them, taken arbitrarily, is assigned to ${\bf x}$.
End of explanation
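As a quick check, we can classify a couple of hand-picked points (the coordinate values below are chosen arbitrarily within the range of the selected features):
demo_points = np.array([[5.0, 2.5], [7.0, 3.2]])
print(nn_classifier(X_tr, Y_tr, demo_points).ravel())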
# Create a rectangular grid.
n_points = 200
x_min, x_max = X_tr[:, 0].min(), X_tr[:, 0].max()
y_min, y_max = X_tr[:, 1].min(), X_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy / n_points
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
np.arange(y_min - 0.1 * dx, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute the classifier output for all samples in the grid.
Z = nn_classifier(X_tr, Y_tr, X_grid)
Explanation: Let us apply the 1-NN classifier to the given dataset. First, we will show the decision regions of the classifier. To do so, we compute the classifier output for all points in a rectangular grid from the sample space.
End of explanation
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.show()
Explanation: Now we plot the results
End of explanation
# Training errors
Z_tr = nn_classifier(X_tr, Y_tr, X_tr)
E_tr = Z_tr.flatten()!=Y_tr
# Test errors
Z_tst = nn_classifier(X_tr, Y_tr, X_tst)
E_tst = Z_tst.flatten()!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('Pe(train):', pe_tr)
print('Pe(test):', pe_tst)
Explanation: We can observe that the decision boundary of the 1-NN classifier is rather intricate, and it may contain small islands covering one or a few samples from one class. Actually, the extent of these small regions usually shrinks as we have more training samples, though their number may increase.
Now we compute the error rates over the training and test sets.
End of explanation
# <SOL>
from sklearn.neighbors import KNeighborsClassifier
Ntest = 10000
Ntr = [10, 20, 40, 80, 200, 1000]
nruns = 100
xtest = []
ytest = []
for k in range(Ntest):
if k<Ntest/2:
ytest.append(0)
xtest.append([2*np.random.random()])
else:
ytest.append(1)
xtest.append([1+4*np.random.random()])
#print(np.mean(np.array(ytest)))
#print(xtest)
Etest = np.zeros((len(Ntr),))
for run in range(nruns):  # repeat to average the test error over random training sets
for kk,Ntrain in enumerate(Ntr):
xtrain = []
ytrain = []
for k in range(Ntrain):
if k < Ntrain / 2:
ytrain.append(0)
xtrain.append([2*np.random.random()])
else:
ytrain.append(1)
xtrain.append([1+4*np.random.random()])
# Train the classifier and get predictions for ytest
neigh = KNeighborsClassifier(n_neighbors=1)
ytest_pred = neigh.fit(xtrain,ytrain).predict(xtest)
error_rate = np.mean(np.array(ytest)!=ytest_pred)
Etest[kk] += error_rate/nruns
print(Etest)
# </SOL>
Explanation: The training and test error rates of the 1-NN may be significantly different. In fact, the training error may go down to zero if samples do not overlap. In the selected problem, this is not the case, because samples from different classes coincide at the same point, causing some classification errors.
4.1. Consistency of the 1-NN classifier
Although the 1-NN usually reduces the error rate with respect to the baseline classifier, the number of errors may still be too large. Errors may be attributed to different causes:
The class distributions are overlapped, because the selected features have no complete information for discriminating between the classes: this would imply that, even the best possible classifier would be prone to errors.
The training sample is small, and it is not enough to obtain a good estimate of the optimal classifier.
The classifier has intrinsic limitations: even though we had an infinite number of samples, the classifier performance does not approach the optimal classifiers.
In general, a classifier is said to be consistent if it makes nearly optimal decisions as the number of training samples increases. Actually, it can be shown that this is the case of the 1-NN classifier if the classification problem is separable, i.e. if there exists a decision boundary with zero error probability. Unfortunately, in a non-separable case, the 1-NN classifier is not consistent. It can be shown that the error rate of the 1-NN classifier converges to an error rate which is not worse than twice the minimum attainable error rate (Bayes error rate) as the number of training samples goes to infinity.
Exercise 1: In this exercise we test the non-consistency of the 1-NN classifier for overlapping distributions. Generate an artificial dataset for classification as follows:
Generate $N$ binary labels at random with values '0' and '1'. Store them in vector ${\bf y}$
For every label $y_k$ in ${\bf y}$:
If the label is 0, take sample $x_k$ at random from a uniform distribution $U(0,2)$.
If the label is 1, take sample $x_k$ at random from a uniform distribution $U(1,5)$.
Take $N=1000$ for the test set. This is a large sample to get accurate error rate estimates. Also, take $N=10$, $20$, $40$, $80$,... for the training set. Compute the 1-NN classifier, and observe the test error rate as a function of $N$.
Now, compute the test error rate of the classifier making decision $1$ if $x_k>1.5$, and $0$ otherwise.
End of explanation
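For the last part of the exercise, a minimal sketch of the fixed-threshold rule, reusing the xtest and ytest arrays generated above:
# Decide 1 if x > 1.5, and 0 otherwise, and measure its test error rate.
ytest_thr = np.array([int(x[0] > 1.5) for x in xtest])
print('Pe(test) of the threshold classifier:', np.mean(np.array(ytest) != ytest_thr))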
def knn_classifier(X1,Y1,X2,k):
    """
    Compute the k-NN classification for the observations contained in
    the rows of X2, for the training set given by the rows in X1 and the
    class labels contained in Y1. k is the number of neighbours.
    """
if X1.ndim == 1:
X1 = np.asmatrix(X1).T
if X2.ndim == 1:
X2 = np.asmatrix(X2).T
distances = spatial.distance.cdist(X1,X2,'euclidean')
neighbors = np.argsort(distances, axis=0, kind='quicksort', order=None)
closest = neighbors[range(k),:]
y_values = np.zeros([X2.shape[0],1])
for idx in range(X2.shape[0]):
y_values[idx] = np.median(Y1[closest[:,idx]])
return y_values
Explanation: 5. $k$-NN classifier
A simple extension of the 1-NN classifier is the $k$-NN classifier, which, for any input sample ${\bf x}$, computes the $k$ closest neighbors in the training set, and takes the majority class in the subset. To avoid ties, in the binary classification case $k$ is usually taken as an odd number.
The following method implements the $k$-NN classifier.
End of explanation
k = 15
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
Z = knn_classifier(X_tr, Y_tr, X_grid, k)
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z)
plt.show()
Explanation: Now, we can plot the decision boundaries for different values of $k$
End of explanation
# Plot training and test error as a function of parameter k.
pe_tr = []
pe_tst = []
k_list = [2*n+1 for n in range(int(np.floor(n_tr/2)))]
for k in k_list:
# Training errors
Z_tr = knn_classifier(X_tr, Y_tr, X_tr, k)
E_tr = Z_tr.flatten()!=Y_tr
# Test errors
Z_tst = knn_classifier(X_tr, Y_tr, X_tst, k)
E_tst = Z_tst.flatten()!=Y_tst
# Error rates
pe_tr.append(float(sum(E_tr)) / n_tr)
pe_tst.append(float(sum(E_tst)) / n_tst)
# Put the result into a color plot
markerline, stemlines, baseline = plt.stem(k_list, pe_tr, 'r', markerfmt='s', label='Training',
use_line_collection=True)
plt.plot(k_list, pe_tr,'r:')
plt.setp(markerline, 'markerfacecolor', 'r', )
plt.setp(baseline, 'color','r', 'linewidth', 2)
markerline, stemlines, baseline = plt.stem(k_list, pe_tst, label='Test', use_line_collection=True)
plt.plot(k_list, pe_tst,'b:')
plt.xlabel('$k$')
plt.ylabel('Error rate')
plt.legend(loc='best')
plt.show()
Explanation: 5.1. Influence of $k$
We can analyze the influence of parameter $k$ by observing both training and test errors.
End of explanation
i = np.argmin(pe_tst)
k_opt = k_list[i]
print('k_opt:', k_opt)
Explanation: Exercise 2: Observe the train and test error for large $k$. Could you relate the error rate of the baseline classifier with that of the $k$-NN classifier for a certain value of $k$?
The figure above suggests that the optimal value of $k$ is
End of explanation
## k-nn with M-fold cross validation
# Obtain the indices for the different folds
n_tr = X_tr.shape[0]
M = n_tr
permutation = np.random.permutation(n_tr)
# Initialize sets of indices
set_indices = {n: [] for n in range(M)}
# Distribute data among M partitions
n = 0
for pos in range(n_tr):
set_indices[n].append(permutation[pos])
n = (n+1) % M
# Now, we run the cross-validation process using the k-nn method
k_max = 30
k_list = [2*j+1 for j in range(int(k_max))]
# Obtain the validation errors
pe_val = 0
for n in range(M):
i_val = set_indices[n]
i_tr = []
for kk in range(M):
if not n==kk:
i_tr += set_indices[kk]
pe_val_iter = []
for k in k_list:
y_tr_iter = knn_classifier(X_tr[i_tr], Y_tr[i_tr], X_tr[i_val], k)
pe_val_iter.append(np.mean(Y_tr[i_val] != y_tr_iter))
pe_val = pe_val + np.asarray(pe_val_iter).T
pe_val = pe_val / M
# We compute now the train and test errors curves
pe_tr = [np.mean(Y_tr != knn_classifier(X_tr, Y_tr, X_tr, k).T) for k in k_list]
k_opt = k_list[np.argmin(pe_val)]
pe_tst = np.mean(Y_tst != knn_classifier(X_tr, Y_tr, X_tst, k_opt).T)
plt.plot(k_list, pe_tr,'b--o',label='Training error')
plt.plot(k_list, pe_val.T,'g--o',label='Validation error')
plt.stem([k_opt], [pe_tst],'r-o',label='Test error', use_line_collection=True)
plt.legend(loc='best')
plt.title('The optimal value of $k$ is ' + str(k_opt))
plt.xlabel('$k$')
plt.ylabel('Error rate')
plt.show()
Explanation: However, using the test set to select the optimal value of the hyperparameter $k$ is not allowed. Instead, we should resort to cross-validation.
5.2 Hyperparameter selection via cross-validation
One inconvenience of the $k$-NN method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we noticed that the location of the minimum is not necessarily the same from the perspective of the test and training data. Ideally, we would like the designed classification model to work as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as generalization.
Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are strategies that try to guarantee a correct generalization of the model. One such approach is known as <b>cross-validation</b>.
Since using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the classification model on unseen patterns), we need some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so through the following steps:
Split the training data into several (generally non-overlapping) subsets. If we use $M$ subsets, the method is referred to as $M$-fold cross-validation. If we consider each pattern a different subset, the method is usually referred to as leave-one-out (LOO) cross-validation.
Train the system $M$ times. For each run, use a different partition as a validation set, and use the remaining partitions as the training set. Evaluate the performance for different choices of the hyperparameter (i.e., for different values of $k$ for the $k$-NN method).
Average the validation error over all partitions, and pick the hyperparameter that provided the minimum validation error.
Rerun the algorithm using all the training data, keeping the value of the parameter that came out of the cross-validation process.
End of explanation
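For reference, the same leave-one-out search can be written with scikit-learn's model-selection utilities (a sketch I added, assuming X_tr and Y_tr are defined as above; it is not part of the original notebook):
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': [2*j + 1 for j in range(15)]}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=len(X_tr))  # cv = n_tr gives leave-one-out
search.fit(X_tr, Y_tr)
print('k_opt:', search.best_params_['n_neighbors'])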
k = 5
# import some data to play with
iris = datasets.load_iris()
# Take training test
X_tr = np.array([xTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [c for c in cTrain_all if c==c0 or c==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([xTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [c for c in cTest_all if c==c0 or c==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(k, weights=weights)
clf.fit(X_tr, Y_tr)
Z = clf.predict(X_tst)
pe_tst = np.mean(Y_tst != Z)
print(f'Test error rate with {weights} weights = {pe_tst}')
Explanation: 6. Scikit-learn implementation
In practice, most well-known machine learning methods are implemented and available for Python. Probably the most complete library for machine learning is <a href=http://scikit-learn.org/stable/>Scikit-learn</a>. The following piece of code uses the method
KNeighborsClassifier
available in Scikit-learn, to compute the $k$-NN classifier using the four components of the observations in the original dataset. This routine allows us to classify a particular point using a weighted average of the targets of the neighbors:
To classify point ${\bf x}$:
Find $k$ closest points to ${\bf x}$ in the training set
Average the corresponding targets, weighting each value according to the distance of each point to ${\bf x}$, so that closer points have a larger influence in the estimation.
End of explanation
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
coords = [0, 1]
# Take the selected coordinates only
X_tr = np.array(xTrain_all)[:, coords]
X_tst = np.array(xTest_all)[:, coords]
# Take training test
ind = [i for i, c in enumerate(cTrain_all) if c==c0 or c==c1]
X_tr = X_tr[ind, :]
C_tr = np.array(cTrain_all)[ind]
Y_tr = (C_tr == c1)
# Take test set
ind = [i for i, c in enumerate(cTest_all) if c==c0 or c==c1]
X_tst = X_tst[ind, :]
C_tst = np.array(cTest_all)[ind]
Y_tst = (C_tst == c1)
#<SOL>
from sklearn import neighbors
k = 15
neigh = neighbors.KNeighborsRegressor(n_neighbors=k)
Z = neigh.fit(X_tr, Y_tr).predict(X_grid)
# Separate components of x into different arrays (just for the plots)
x0c0 = [X_tr[n][0] for n, y in enumerate(Y_tr) if y==0]
x1c0 = [X_tr[n][1] for n, y in enumerate(Y_tr) if y==0]
x0c1 = [X_tr[n][0] for n, y in enumerate(Y_tr) if y==1]
x1c1 = [X_tr[n][1] for n, y in enumerate(Y_tr) if y==1]
#</SOL>
#<SOL>
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
Z = Z.reshape(xx.shape)
CS = plt.contourf(xx, yy, Z)
CS2 = plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
plt.show()
#</SOL>
Explanation: <a href = http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html> Here</a> you can find an example of the application of KNeighborsClassifier to the complete 3-class Iris flower classification problem.
7. $k$-NN Classification and Probability Estimation.
If a sample ${\bf x}$ has $m$ neighbors from class 1 and $k-m$ neighbors from class $0$, we can estimate the posterior probability that an observation ${\bf x}$ belongs to class 1 as
$$
\hat P\{y=1|x\} = \frac{m}{k}
$$
Therefore, besides computing a decision about the class of the data, we can modify the $k$-NN algorithm to obtain posterior probability estimates.
Note that the above equation is equivalent to
$$
\hat P\{y=1|x\} = \frac{\sum_{n \in {\mathcal N}({\bf x})} y^{(n)}}{k}
$$
where ${\mathcal N}({\bf x})$ is the set of indices for the samples in the neighborhood of $\bf x$.
In other words, $\hat P\{y=1|x\}$ is the average of the neighbor labels. This is exactly what $k$-NN regression computes, so we can estimate the posterior with the sklearn regression method KNeighborsRegressor.
Exercise 3: Plot a $k$-NN posterior probability map for the Iris flower data, for $k=15$.
End of explanation |
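As a cross-check on Exercise 3 (my addition), KNeighborsClassifier exposes the same neighbor-fraction estimate directly through predict_proba, assuming X_tr, Y_tr and X_grid exist as in the solution above:
from sklearn import neighbors

clf = neighbors.KNeighborsClassifier(n_neighbors=15).fit(X_tr, Y_tr)
# Column 1 is the estimated posterior P{y=1|x}: the fraction of the
# 15 nearest neighbours belonging to class 1
P1 = clf.predict_proba(X_grid)[:, 1]
print(P1[:5])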
12,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting an OSX Keychain into Text Notes
The purpose of this notebook is to export the secured notes from an Apple keychain into separate files. It is pretty rough, but this generally only needs to be done once (at least that was the case for me), so there is no need to polish it.
IMPORTANT NOTICE
Step1: retrieve the titles of all notes, and read them into the array list_of_notes
Step2: now retrieve the text of the notes; we make some very crude regex matching on the raw output here | Python Code:
path = '/Volumes/---/keychains'
#!ls -l $path
Explanation: Converting an OSX Keychain into Text Notes
The purpose of this notebook is to export the secured notes from an Apple keychain into separate files. It is pretty rough, but this generally only needs to be done once (at least that was the case for me), so there is no need to polish it.
IMPORTANT NOTICE: Using an IPython Notebook in a browser is not particularly secure. The decrypted data might end up in the browser cache. Use Private Browsing mode or clear the cache when you are finished.
Note: path is where the keychain lives and where the notes will be written. It should be an encrypted volume.
End of explanation
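A small safeguard I added (not in the original notebook): check that the keychain file actually exists before dumping it.
import os
keychain = os.path.join(path, 'test.keychain')
assert os.path.isfile(keychain), 'keychain not found: ' + keychain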
!security dump $path/test.keychain | grep svce > $path/raw_list_of_notes.txt
with open(path+"/raw_list_of_notes.txt") as f:
content = f.readlines()
#content
import re
def f(txt):
m = re.match('.*=\"(.*)\"', txt)
return m.groups()[0]
list_of_notes = list(map (f, content))
#list_of_notes
Explanation: retrieve the titles of all notes, and read them into the array list_of_notes
End of explanation
def note_text(name):
text = !security find-generic-password -g -s "$name"
m = re.match(".*<key>NOTE</key>.*?<string>(.*)</string>", text[0])
if m == None:
m = re.match('.*"(.*)"', text[0])
if m == None:
return (name, "-not accessible-")
return (name, m.groups()[0])
def note_text_raw(name):
text = !security find-generic-password -g -s "$name"
return (name, text)
note_contents = list(map(note_text, list_of_notes))
#note_contents
import pickle
if False:
output = open(path+'/note_contents.pkl', 'wb')
pickle.dump(note_contents, output)
output.close()
if False:
pkl_file = open(path+'/note_contents.pkl', 'rb')
note_contents1 = pickle.load(pkl_file)
pkl_file.close()
#note_contents1
for n in note_contents:  # use note_contents1 instead if reloading from the pickle above
if n[1] != "-not accessible-":
fn = path+'/out/'+n[0].replace("/", "-")+".note"
output = open(fn, 'w')
text = n[1].replace('\\012', '\n')
output.write(text)
output.close()
Explanation: now retrieve the text of the notes; we make some very crude regex matching on the raw output here
End of explanation |
12,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src='https
Step1: Locally and Remote
Run locally
Connect to the cloud (e.g. AWS)
Connect to supercomputer (e.g. XSEDE Resource)
Add compute power
Step2: Plot a Histogram of x
Step3: Notebooks can be customized
Custom CSS
Custom javascript libraries
Create your own output format.
Tools and workflow
Magic Commands
Built-in useful functions
% line commands
%% cell commands
Step4: Other Languages
Step5: Keep it all together
Step6: NBconvert examples
HTML
PDF (print) - you have to have LaTex installed
Slides
Dynamic Slides
ReStructured Text (sphinx) | Python Code:
2+4
print("hello")
a=2
print("Hello world!")
Explanation: <img src='https://raw.githubusercontent.com/scientific-visualization-2016/ClassMaterials/master/Images/rc_logo.png' style="height:75px">
Data Analysis and Visualization with the IPython Notebook
<img src='https://raw.githubusercontent.com/scientific-visualization-2016/ClassMaterials/master/Images/data_overview.png' style="height:500px">
Materials from Monte Lunacek and Thomas Hauser tutorials
Objectives
Become familiar with the IPython Notebook.
Introduce the IPython landscape.
Getting started with visualization and data analysis in Python
Conducting reproducible data analysis, visualization and computing experiments
How do you currently:
wrangle data?
visualize results?
Analysis: machine learning, stats
Parallel computing
Big data
What is Python?
<blockquote>
<p>
Python is a general-purpose programming language that blends procedural, functional, and object-oriented paradigms
<p>
Mark Lutz, <a href="http://www.amazon.com/Learning-Python-Edition-Mark-Lutz/dp/1449355730">Learning Python</a>
</blockquote>
Simple, clean syntax
Easy to learn
Interpreted
Strong, dynamically typed
Runs everywhere: Linux, Mac, and Windows
Free and open
Expressive: do more with fewer lines of code
Lean: modules
Options: Procedural, object-oriented, and functional.
Abstractions
Python provides high-level abstraction
Performance can be on par with compiled code if the right approach is used
<img src="https://s3.amazonaws.com/research_computing_tutorials/matrix_multiply_compare.png" style="margin:5px auto; height:400px; display:block;">
IPython and the Jupyter Notebook
IPython
Platform for interactive computing
Shell or browser-based notebook
Project Jupyter: https://jupyter.org
Language independent notebook
Can be used with R, Julia, bash ...
Jupyter IPython Notebook
http://blog.fperez.org/2012/01/ipython-notebook-historical.html
Interactive web-based computing, data analysis, and documentation.
One document for code and output
Run locally and remote
Document process
Share results
<img src='https://raw.githubusercontent.com/scientific-visualization-2016/ClassMaterials/master/Images/traditional_python.png'>
<img src='https://raw.githubusercontent.com/scientific-visualization-2016/ClassMaterials/master/Images/ipython-notebook.png'>
Integrate Code and Documentation
Data structure output
Inline plots
Conversational-style programming (literate programming)
Telling a data story
Great for iterative programming.
Data analysis
Quick scripts
Prototyping
2 types of cells:
Markdown for documentation
Markdown can contain LaTeX for equations
Code for executing programs
Markdown
Github flavored markdown: https://help.github.com/articles/github-flavored-markdown/
Markdown basics: https://help.github.com/articles/markdown-basics/
Here is a formula:
$f(x,y) = x^2 + e^x$
Images
<img src='https://s3.amazonaws.com/research_computing_tutorials/monty-python.png' width="300">
This is an image:
<img src='https://s3.amazonaws.com/research_computing_tutorials/monty-python.png' width="300">
Code
End of explanation
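To make the "right approach" claim above concrete, here is a small comparison I added (timings are machine-dependent): a pure-Python loop versus the vectorized NumPy equivalent.
import numpy as np

a = np.random.randn(1000000)

def python_sum(values):
    total = 0.0
    for v in values:      # interpreted loop, one Python-level iteration per element
        total += v
    return total

%timeit python_sum(a)     # pure Python
%timeit np.sum(a)         # vectorized C loop, typically orders of magnitude faster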
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.random.randn(10000)
print(x)
Explanation: Locally and Remote
Run locally
Connect to the cloud (e.g. AWS)
Connect to supercomputer (e.g. XSEDE Resource)
Add compute power:
mpi4py
IPython Parallel
Spark for big distributed data
Numbapro GPU
...
Documentation and Sharing
<img src='https://raw.githubusercontent.com/scientific-visualization-2016/ClassMaterials/master/Images/ipython-notebook-sharing.png'>
Keyboard Shortcuts
<img src='https://raw.githubusercontent.com/scientific-visualization-2016/ClassMaterials/master/Images/ipython-notebook-keyboard.png'>
Embedded Plots
End of explanation
plt.hist(x, bins=50)
plt.show()
Explanation: Plot a Histogram of x
End of explanation
%lsmagic
%timeit y = np.random.randn(100000)
%ls
Explanation: Notebooks can be customized
Custom CSS
Custom javascript libraries
Create your own output format.
Tools and workflow
Magic Commands
Built-in useful functions
% line commands
%% cell commands
End of explanation
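As one more illustration (my addition), the %%-form applies a magic to the whole cell rather than a single line:
%%timeit
y = np.random.randn(100000)
y.sort()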
%%bash
ls -l
files = !ls # But glob is a better way
print(files[:5])
Explanation: Other Languages: Bash
End of explanation
%%writefile example.cpp
#include <iostream>
int main(){
std::cout << "hello from c++" << std::endl;
}
%ls example.cpp
%%bash
g++ example.cpp -o example
./example
Explanation: Keep it all together
End of explanation
!ipython nbconvert --to 'PDF' 01_introduction-IPython-notebook.ipynb
!open 01_introduction-IPython-notebook.pdf
!ipython nbconvert --to 'html' 01_introduction-IPython-notebook.ipynb
Explanation: NBconvert examples
HTML
PDF (print) - you have to have LaTex installed
Slides
Dynamic Slides
ReStructured Text (sphinx)
End of explanation |
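A slides export follows the same pattern (hedged: the exact flags depend on the nbconvert version installed):
!ipython nbconvert --to slides 01_introduction-IPython-notebook.ipynb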
12,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameter Estimation of RIG Roll Experiments
Setup and descriptions
Without ACM model
Turn on wind tunnel
Only 1DoF for RIG roll movement
Use small-amplitude aileron command of CMP as inputs (in degrees)
$$U = \delta_{a,cmp}(t)$$
Consider RIG roll angle and its derivative as States (in radians)
$$X = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Observe RIG roll angle and its derivative as Outputs (in degrees)
$$Z = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Use the output-error method based on maximum likelihood (ML) to estimate
$$ \theta = \begin{pmatrix} C_{l,\delta_a,cmp} \\ C_{lp,cmp} \end{pmatrix} $$
Startup computation engines
Step1: Data preparation
Load raw data
Step2: Check time sequence and inputs/outputs
Click 'Check data' button to show the raw data.
Click on the curves to select time points and push them into a queue; click the 'T/s' axis label to pop the last point from the queue; and click the 'Output' title to print the time-sequence table.
Step3: Input data set information and do processing
For each section,
* Select the time range and shift it to start from zero;
* Resample Time, Inputs, and Outputs at a uniform sampling interval $\Delta T$;
* Smooth the Input/Observation data if flag bit0 is set;
* Take derivatives of the Observation data if flag bit1 is set.
Step4: Define dynamic model to be estimated
$$\left\{\begin{aligned}
M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\
M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{la,cmp}\,\delta_{a,cmp} \\
M_{x,f} &= -F_c \, \mathrm{sign}(\dot{\phi}_{rig}) \\
M_{x,cg} &= -m_T g l_{zT} \sin \left ( \phi - \phi_0 \right )
\end{aligned}\right.$$
Step5: Initial guess
Input default values and ranges for parameters
Select sections for training
Adjust parameters based on simulation results
Decide start values of parameters for optimization
Step6: Optimize using ML
Step7: Show and test results | Python Code:
%run matt_startup
%run -i matt_utils
button_qtconsole()
#import other needed modules in all used engines
#with dview.sync_imports():
# import os
Explanation: Parameter Estimation of RIG Roll Experiments
Setup and descriptions
Without ACM model
Turn on wind tunnel
Only 1DoF for RIG roll movement
Use small-amplitude aileron command of CMP as inputs (in degrees)
$$U = \delta_{a,cmp}(t)$$
Consider RIG roll angle and its derivative as States (in radians)
$$X = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Observe RIG roll angle and its derivative as Outputs (in degrees)
$$Z = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Use the output-error method based on maximum likelihood (ML) to estimate
$$ \theta = \begin{pmatrix} C_{l,\delta_a,cmp} \\ C_{lp,cmp} \end{pmatrix} $$
Startup computation engines
End of explanation
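For orientation only (my addition; the real cost function lives in matt_utils and is not shown here), a single-record output-error ML cost typically reduces to the following form, with the measurement covariance R replaced by its ML estimate:
import numpy as np

def output_error_cost(params, T, U, Z, consts, obs):
    # Model response and output residuals
    Y = obs(Z, T, U, params, consts)
    V = Z - Y
    # ML estimate of the measurement covariance R
    R = np.dot(V.T, V) / V.shape[0]
    sign, logdet = np.linalg.slogdet(R)
    # Negative log-likelihood up to additive constants
    return 0.5 * V.shape[0] * logdet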
filename = 'FIWT_Exp051_20150612163239.dat.npz'
def loadData():
# Read and parse raw data
global exp_data
exp_data = np.load(filename)
# Select colums
global T_cmp, da1_cmp, da2_cmp, da3_cmp , da4_cmp
T_cmp = exp_data['data33'][:,0]
da1_cmp = exp_data['data33'][:,3]
da2_cmp = exp_data['data33'][:,5]
da3_cmp = exp_data['data33'][:,7]
da4_cmp = exp_data['data33'][:,9]
global T_rig, phi_rig
T_rig = exp_data['data44'][:,0]
phi_rig = exp_data['data44'][:,2]
loadData()
Explanation: Data preparation
Load raw data
End of explanation
def checkInputOutputData():
#check inputs/outputs
fig, ax = plt.subplots(2,1,True)
ax[0].plot(T_cmp,da1_cmp,'r', T_cmp,da2_cmp,'g',
T_cmp,da3_cmp,'b', T_cmp,da4_cmp,'m',
picker=1)
ax[1].plot(T_rig,phi_rig, 'b', picker=2)
ax[0].set_ylabel('$\delta \/ / \/ ^o$')
ax[1].set_ylabel('$\phi \/ / \/ ^o/s$')
ax[1].set_xlabel('$T \/ / \/ s$', picker=True)
ax[0].set_title('Output', picker=True)
fig.canvas.mpl_connect('pick_event', onPickTime)
fig.show()
display(fig)
button_CheckData()
Explanation: Check time sequence and inputs/outputs
Click 'Check data' button to show the raw data.
Click on the curves to select time points and push them into a queue; click the 'T/s' axis label to pop the last point from the queue; and click the 'Output' title to print the time-sequence table.
End of explanation
# Decide DT,U,Z and their processing method
process_set1 = {
# Pick up focused time ranges
'time_marks' : [
[10.244684487,71.0176928039,"ramp cmp1 u1"],
[75.590251771,136.676120954,"ramp cmp1 d1"],
[138.085246353,198.267991292,"ramp cmp2 u1"],
[202.947148504,263.947709116,"ramp cmp2 d1"],
[265.320787191,326.086083756,"ramp cmp2 d2"],
[328.651258032,387.065040394,"ramp cmp3 u1"],
[391.213644461,451.562391555,"ramp cmp3 d1"],
[454.129343981,513.129351723,"ramp cmp4 u1"],
[515.902028523,577.039776316,"ramp cmp4 d1"],
[580.1785085,679.5971965,"ramp cmp1/3 d1"],
[683.479931261,783.589410569,"ramp cmp1/3 d2"],
[785.543756555,885.058700497,"ramp cmp1/3 u1"],
[889.116441348,987.763826608,"ramp cmp2/4 d1"],
[992.08250855,1089.55268169,"ramp cmp2/4 u1"],
[1098.5355005,1216.24487648,"ramp cmpall u1"],
[1220.27886043,1340.49709824,"ramp cmpall u2"],
[1343.59303659,1464.78924788,"ramp cmpall d1"],
],
'U':[(T_cmp, da1_cmp,0),
(T_cmp, da2_cmp,0),
(T_cmp, da3_cmp,0),
(T_cmp, da4_cmp,0),],
'U_names' : ['$\delta_{a1,cmp} \, / \, ^o$',
'$\delta_{a2,cmp} \, / \, ^o$',
'$\delta_{a3,cmp} \, / \, ^o$',
'$\delta_{a4,cmp} \, / \, ^o$'],
'Z':[(T_rig, phi_rig,1),],
'Z_names' : ['$\phi_{a,rig} \, / \, ^o$'],
'cutoff_freq': 1, #Hz
'consts' : {'DT':0.1, 'id':0, 'V':30}
}
display_data_set(process_set1)
resample(process_set1, append=False);
Explanation: Input data set information and do processing
For each section,
* Select the time range and shift it to start from zero;
* Resample Time, Inputs, and Outputs at a uniform sampling interval $\Delta T$;
* Smooth the Input/Observation data if flag bit0 is set;
* Take derivatives of the Observation data if flag bit1 is set.
End of explanation
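The resample helper used above comes from matt_utils and is not reproduced here; a rough sketch of the per-channel processing the bullet points describe (an assumption on my part, not the actual implementation) is:
import numpy as np
from scipy.signal import butter, filtfilt

def resample_channel(t, y, dt, cutoff_hz=None, derivative=False):
    # Uniform time grid and linear interpolation onto it
    t_new = np.arange(t[0], t[-1], dt)
    y_new = np.interp(t_new, t, y)
    # Optional zero-phase low-pass smoothing
    if cutoff_hz is not None:
        b, a = butter(2, 2.0*cutoff_hz*dt)   # cutoff normalized to the Nyquist rate
        y_new = filtfilt(b, a, y_new)
    # Optional numerical derivative
    if derivative:
        y_new = np.gradient(y_new, dt)
    return t_new, y_new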
%%px --local
#update common const parameters in all engines
angles = range(-40,41,5)
angles[0] -= 1
angles[-1] += 1
del angles[angles.index(0)]
angles_num = len(angles)
#problem size
Nx = 0
Nu = 4
Ny = 1
Npar = 4*angles_num+1
#reference
S_c = 0.1254 #S_c(m2)
b_c = 0.7 #b_c(m)
g = 9.81 #g(m/s2)
#static measurement
m_T = 7.5588 #m_T(kg)
l_z_T = 0.0424250531303 #l_z_T(m)
#previous estimations
F_c = 0.0532285873599 #F_c(N*m)
Clda_cmp = -0.315904095782 #Clda_cmp(1/rad)
Clphi_cmp = -0.0131776778575 #Clphi_cmp(1/rad)
#for short
_m_T_l_z_T_g = -(m_T*l_z_T)*g
def obs(Z,T,U,params,consts):
DT = consts['DT']
ID = consts['id']
V = consts['V']
k1 = np.array(params[0:angles_num])
k2 = np.array(params[angles_num:angles_num*2])
k3 = np.array(params[angles_num*2:angles_num*3])
k4 = np.array(params[angles_num*3:angles_num*4])
phi0 = params[-1]
Clda1 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k1*angles,assume_sorted=True)
Clda2 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k2*angles,assume_sorted=True)
Clda3 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k3*angles,assume_sorted=True)
Clda4 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k4*angles,assume_sorted=True)
s = T.size
qbarSb = 0.5*1.225*V*V*S_c*b_c
moments_a = qbarSb*(Clda1(U[:s,0])+Clda2(U[:s,1])
+Clda3(U[:s,2])+Clda4(U[:s,3]))
phi = phi0+np.arcsin(np.clip(-moments_a/_m_T_l_z_T_g, -1, 1))
moments_f = np.copysign(F_c, phi);
moments_p = qbarSb*Clphi_cmp*phi;
phi = phi0+np.arcsin(np.clip(-(moments_a+moments_p+moments_f)/_m_T_l_z_T_g, -1, 1))
return (phi*57.3).reshape((-1,1))
display(HTML('<b>Constant Parameters</b>'))
table = ListTable()
table.append(['Name','Value','unit'])
table.append(['$S_c$',S_c,'$m^2$'])
table.append(['$b_c$',b_c,'$m$'])
table.append(['$g$',g,'$m/s^2$'])
table.append(['$m_T$',m_T,'$kg$'])
table.append(['$l_{zT}$',l_z_T,'$m$'])
table.append(['$F_c$',F_c,'$Nm$'])
table.append(['$C_{l \delta a,cmp}$',Clda_cmp,'$rad^{-1}$'])
table.append(['$C_{l \phi,cmp}$',Clphi_cmp,'$rad^{-1}$'])
display(table)
Explanation: Define dynamic model to be estimated
$$\left\{\begin{aligned}
M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\
M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{la,cmp}\,\delta_{a,cmp} \\
M_{x,f} &= -F_c \, \mathrm{sign}(\dot{\phi}_{rig}) \\
M_{x,cg} &= -m_T g l_{zT} \sin \left ( \phi - \phi_0 \right )
\end{aligned}\right.$$
End of explanation
#initial guess
param0 = [1]*(4*angles_num)+[0]
param_name = ['k_{}_{}'.format(i/angles_num+1, angles[i%angles_num]) for i in range(4*angles_num)] + ['$phi_0$']
param_unit = ['1']*(4*angles_num) + ['$rad$']
NparID = Npar
opt_idx = range(Npar)
opt_param0 = [param0[i] for i in opt_idx]
par_del = [0.001]*(4*angles_num) + [0.0001]
bounds = [(0,1.5)]*(4*angles_num) +[(-0.1, 0.1)]
display_default_params()
#select sections for training
section_idx = range(9)
del section_idx[3]
display_data_for_train()
#push parameters to engines
push_opt_param()
# select 4 section from training data
#idx = random.sample(section_idx, 4)
idx = section_idx[:]
interact_guess();
Explanation: Initial guess
Input default values and ranges for parameters
Select sections for training
Adjust parameters based on simulation results
Decide start values of parameters for optimization
End of explanation
display_preopt_params()
if False:
InfoMat = None
method = 'trust-ncg'
def hessian(opt_params, index):
global InfoMat
return InfoMat
dview['enable_infomat']=True
options={'gtol':1}
opt_bounds = None
else:
method = 'L-BFGS-B'
hessian = None
dview['enable_infomat']=False
options={'ftol':1e-6,'maxfun':400}
opt_bounds = bounds
cnt = 0
tmp_rslt = None
T0 = time.time()
print('#cnt, Time, |R|')
%time res = sp.optimize.minimize(fun=costfunc, x0=opt_param0, \
args=(opt_idx,), method=method, jac=True, hess=hessian, \
bounds=opt_bounds, options=options)
Explanation: Optimize using ML
End of explanation
display_opt_params()
# show result
idx = range(len(sections))
display_data_for_test();
update_guess();
res_params = res['x']
params = param0[:]
for i,j in enumerate(opt_idx):
params[j] = res_params[i]
k1 = np.array(params[0:angles_num])
k2 = np.array(params[angles_num:angles_num*2])
k3 = np.array(params[angles_num*2:angles_num*3])
k4 = np.array(params[angles_num*3:angles_num*4])
Clda_cmp1 = Clda_cmp*0.00436332313*k1*angles
Clda_cmp2 = Clda_cmp*0.00436332313*k2*angles
Clda_cmp3 = Clda_cmp*0.00436332313*k3*angles
Clda_cmp4 = Clda_cmp*0.00436332313*k4*angles
print('angles = ')
print(angles)
print('Clda_cmpx = ')
print(np.vstack((Clda_cmp1,Clda_cmp2,Clda_cmp3,Clda_cmp4)))
%matplotlib inline
plt.figure(figsize=(12,8),dpi=300)
plt.plot(angles, Clda_cmp1, 'r')
plt.plot(angles, Clda_cmp2, 'g')
plt.plot(angles, Clda_cmp3, 'b')
plt.plot(angles, Clda_cmp4, 'm')
plt.xlabel('$\delta_{a,cmp}$')
plt.ylabel('$C_{l \delta a,cmp}$')
plt.show()
toggle_inputs()
button_qtconsole()
(-0.05-0.05)/(80/57.3)
Clda_cmp/4
Explanation: Show and test results
End of explanation |