markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
After training, the test-set accuracy improved considerably, to roughly 90%. | model = get_model()
callbacks = [
keras.callbacks.ModelCheckpoint("binary_2gram.keras",
save_best_only=True)
]
model.fit(binary_2gram_train_ds.cache(),
validation_data=binary_2gram_val_ds.cache(),
epochs=10,
callbacks=callbacks)
model = keras.models.load_model("binary_2gram.keras")
print(f"Test acc: {model.evaluate(binary_2gram_test_ds)[1]:.3f}") | Epoch 1/10
625/625 [==============================] - 12s 18ms/step - loss: 0.3857 - accuracy: 0.8347 - val_loss: 0.2791 - val_accuracy: 0.9000
Epoch 2/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2592 - accuracy: 0.9082 - val_loss: 0.2947 - val_accuracy: 0.8988
Epoch 3/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2277 - accuracy: 0.9241 - val_loss: 0.3060 - val_accuracy: 0.8978
Epoch 4/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2074 - accuracy: 0.9333 - val_loss: 0.3417 - val_accuracy: 0.8994
Epoch 5/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2070 - accuracy: 0.9365 - val_loss: 0.3538 - val_accuracy: 0.8968
Epoch 6/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1997 - accuracy: 0.9395 - val_loss: 0.3908 - val_accuracy: 0.8946
Epoch 7/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1940 - accuracy: 0.9421 - val_loss: 0.3715 - val_accuracy: 0.8940
Epoch 8/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1902 - accuracy: 0.9427 - val_loss: 0.4054 - val_accuracy: 0.8930
Epoch 9/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1952 - accuracy: 0.9432 - val_loss: 0.3848 - val_accuracy: 0.8880
Epoch 10/10
625/625 [==============================] - 4s 6ms/step - loss: 0.1949 - accuracy: 0.9441 - val_loss: 0.4011 - val_accuracy: 0.8912
782/782 [==============================] - 9s 11ms/step - loss: 0.2788 - accuracy: 0.8953
Test acc: 0.895
| MIT | notebooks/dlp11_part01_introduction.ipynb | codingalzi/dlp |
**Approach 3: bigram TF-IDF encoding** When vectorizing N-grams, their usage counts can be stored as well, since word frequency is likely to play an important role in judging a sentence. As in the code below, this only requires the `output_mode="count"` option. | text_vectorization = TextVectorization(
ngrams=2,
max_tokens=20000,
output_mode="count"
) | _____no_output_____ | MIT | notebooks/dlp11_part01_introduction.ipynb | codingalzi/dlp |
However, this makes very common words such as "the", "a", "is", and "are" get very high counts, while informative words such as "Chollet" end up with counts close to 0. Most of the resulting vector is also filled with zeros: `max_tokens=20000` is used, while a single sentence contains at most a few dozen distinct words.

```python
inputs[0]: tf.Tensor([1. 1. 1. ... 0. 0. 0.], shape=(20000,), dtype=float32)
```

To account for this, the counts are normalized. The mean is not shifted to zero; the counts are only rescaled by the TF-IDF weighting. The reason is that mean-shifting would make most vector entries nonzero, destroying sparsity and making training far more expensive. **TF-IDF** works as follows (a small hand-rolled sketch of this weighting follows the cell below).

- `TF` (Term Frequency): how often a word is used within a single document. Higher means more important. For example, a review that uses "terrible" many times is likely negative.
- `DF` (Document Frequency): how often the word is used across the entire dataset. Lower means more informative. Words such as "the", "a", and "is" have a high document frequency but carry little meaning.
- The TF-IDF weight is the term frequency divided by the (log-scaled) document frequency, so words that are frequent everywhere are down-weighted.

The `output_mode="tf_idf"` option enables TF-IDF encoding. | text_vectorization = TextVectorization(
์ ์ฌ์ฉํ๋ฉด TF-IDF ์ธ์ฝ๋ฉ์ ์ง์ํ๋ค. | text_vectorization = TextVectorization(
ngrams=2,
max_tokens=20000,
output_mode="tf_idf",
) | _____no_output_____ | MIT | notebooks/dlp11_part01_introduction.ipynb | codingalzi/dlp |
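For intuition, here is a minimal hand-rolled sketch of the weighting just described (an illustration that follows the TF/DF definitions above; it is not the exact formula `TextVectorization` applies internally):

```python
import math

def tf_idf(term, document, dataset):
    # TF: how often the term occurs in this one document (a list of tokens)
    term_freq = document.count(term)
    # dataset-wide frequency, log-scaled; +1 keeps the log finite
    doc_freq = math.log(sum(doc.count(term) for doc in dataset) + 1)
    return term_freq / doc_freq if doc_freq else 0.0

docs = [["the", "movie", "was", "terrible"],
        ["the", "movie", "was", "great"]]
print(tf_idf("terrible", docs[0], docs))  # rarer word -> larger weight (~1.44)
print(tf_idf("the", docs[0], docs))       # common word -> smaller weight (~0.91)
```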
After training, the test-set accuracy fell back to around 89%. TF-IDF was not much help in this case, although on many text-classification problems it typically brings roughly a 1% improvement over raw counts. **Caveat**: as of TensorFlow 2.6 and 2.7, the code below only works when no GPU is used; the reason is still unknown ([see here](https://github.com/fchollet/deep-learning-with-python-notebooks/issues/190)). | text_vectorization.adapt(text_only_train_ds)
tfidf_2gram_train_ds = train_ds.map(lambda x, y: (text_vectorization(x), y))
tfidf_2gram_val_ds = val_ds.map(lambda x, y: (text_vectorization(x), y))
tfidf_2gram_test_ds = test_ds.map(lambda x, y: (text_vectorization(x), y))
model = get_model()
model.summary()
callbacks = [
keras.callbacks.ModelCheckpoint("tfidf_2gram.keras",
save_best_only=True)
]
model.fit(tfidf_2gram_train_ds.cache(),
validation_data=tfidf_2gram_val_ds.cache(),
epochs=10,
callbacks=callbacks)
model = keras.models.load_model("tfidf_2gram.keras")
print(f"Test acc: {model.evaluate(tfidf_2gram_test_ds)[1]:.3f}") | Model: "model_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 20000)] 0
dense_4 (Dense) (None, 16) 320016
dropout_2 (Dropout) (None, 16) 0
dense_5 (Dense) (None, 1) 17
=================================================================
Total params: 320,033
Trainable params: 320,033
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
625/625 [==============================] - 11s 17ms/step - loss: 0.5232 - accuracy: 0.7588 - val_loss: 0.3197 - val_accuracy: 0.8806
Epoch 2/10
625/625 [==============================] - 4s 6ms/step - loss: 0.3534 - accuracy: 0.8442 - val_loss: 0.2946 - val_accuracy: 0.8954
Epoch 3/10
625/625 [==============================] - 4s 6ms/step - loss: 0.3231 - accuracy: 0.8609 - val_loss: 0.3086 - val_accuracy: 0.8864
Epoch 4/10
625/625 [==============================] - 4s 6ms/step - loss: 0.3053 - accuracy: 0.8734 - val_loss: 0.3087 - val_accuracy: 0.8814
Epoch 5/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2781 - accuracy: 0.8845 - val_loss: 0.3225 - val_accuracy: 0.8878
Epoch 6/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2703 - accuracy: 0.8870 - val_loss: 0.3472 - val_accuracy: 0.8702
Epoch 7/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2695 - accuracy: 0.8883 - val_loss: 0.3357 - val_accuracy: 0.8682
Epoch 8/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2650 - accuracy: 0.8931 - val_loss: 0.3343 - val_accuracy: 0.8664
Epoch 9/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2606 - accuracy: 0.8901 - val_loss: 0.3546 - val_accuracy: 0.8580
Epoch 10/10
625/625 [==============================] - 4s 6ms/step - loss: 0.2575 - accuracy: 0.8924 - val_loss: 0.3318 - val_accuracy: 0.8760
782/782 [==============================] - 8s 10ms/step - loss: 0.2998 - accuracy: 0.8927
Test acc: 0.893
| MIT | notebooks/dlp11_part01_introduction.ipynb | codingalzi/dlp |
**Appendix: exporting a model that bundles the string-vectorization preprocessing** To deploy the trained model in production, the text vectorization must be exported together with the model. This is simple: just reuse the existing `TextVectorization` layer in a new end-to-end model. | inputs = keras.Input(shape=(1,), dtype="string")
# apply the text vectorization
processed_inputs = text_vectorization(inputs)
# apply the trained model
outputs = model(processed_inputs)
# final end-to-end model
inference_model = keras.Model(inputs, outputs) | _____no_output_____ | MIT | notebooks/dlp11_part01_introduction.ipynb | codingalzi/dlp |
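If desired, the exported end-to-end model can then be saved and reloaded like any other Keras model (a sketch; exact serialization support for `TextVectorization` layers varies across TensorFlow versions, so the file name here is illustrative):

```python
inference_model.save("end_to_end_model.keras")
reloaded_model = keras.models.load_model("end_to_end_model.keras")
```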
`inference_model` accepts plain text strings directly as input. For example, the review "That was an excellent movie, I loved it." is predicted to be positive with very high probability. | import tensorflow as tf
raw_text_data = tf.convert_to_tensor([
["That was an excellent movie, I loved it."],
])
predictions = inference_model(raw_text_data)
print(f"{float(predictions[0] * 100):.2f} percent positive") | 92.10 percent positive
| MIT | notebooks/dlp11_part01_introduction.ipynb | codingalzi/dlp |
Loading neurons from s3 | import numpy as np
from skimage import io
from pathlib import Path
from brainlit.utils.session import NeuroglancerSession
from brainlit.utils.Neuron_trace import NeuronTrace
import napari
from napari.utils import nbscreenshot
%gui qt | _____no_output_____ | Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
Loading entire neuron from AWS

`s3_trace = NeuronTrace(s3_path, seg_id, mip)` creates a NeuronTrace object from an s3 file path; `swc_trace = NeuronTrace(swc_path)` creates one from an swc file path.

1. `s3_trace.get_df()` to output the s3 NeuronTrace object as a pd.DataFrame
2. `swc_trace.get_df()` to output the swc NeuronTrace object as a pd.DataFrame
3. `swc_trace.generate_df_subset(list_of_voxels)` creates a smaller subset of the original dataframe with coordinates in img space
4. `swc_trace.get_df_voxel()` to output a DataFrame that converts the coordinates from spatial to voxel coordinates
5. `swc_trace.get_graph()` to output the NeuronTrace object as a networkx.DiGraph
6. `swc_trace.get_paths()` to output the NeuronTrace object as a list of paths
7. `ViewerModel.add_shapes` to add the paths as a shape layer into the napari viewer
8. `swc_trace.get_sub_neuron(bounding_box)` to output the NeuronTrace object as a graph cropped by a bounding box
9. `swc_trace.get_sub_neuron_paths(bounding_box)` to output the NeuronTrace object as paths cropped by a bounding box

1. `s3_trace.get_df()` This function outputs the s3 NeuronTrace object as a pd.DataFrame. Each row is a vertex in the swc file with the following information: `sample number`, `structure identifier`, `x coordinate`, `y coordinate`, `z coordinate`, `radius of dendrite`, `sample number of parent`. The coordinates are given in spatial units of micrometers ([swc specification](http://www.neuronland.org/NLMorphologyConverter/MorphologyFormats/SWC/Spec.html)) | """
s3_path = "s3://open-neurodata/brainlit/brain1_segments"
seg_id = 2
mip = 1
s3_trace = NeuronTrace(s3_path, seg_id, mip)
df = s3_trace.get_df()
df.head()
""" | Downloading: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 5.13it/s]
Downloading: 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 5.82it/s]
| Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
2. `swc_trace.get_df()` This function outputs the swc NeuronTrace object as a pd.DataFrame. Each row is a vertex in the swc file with the following information: `sample number`, `structure identifier`, `x coordinate`, `y coordinate`, `z coordinate`, `radius of dendrite`, `sample number of parent`. The coordinates are given in spatial units of micrometers ([swc specification](http://www.neuronland.org/NLMorphologyConverter/MorphologyFormats/SWC/Spec.html)) | """
swc_path = str(Path().resolve().parents[2] / "data" / "data_octree" / "consensus-swcs" / '2018-08-01_G-002_consensus.swc')
swc_trace = NeuronTrace(path=swc_path)
df = swc_trace.get_df()
df.head()
""" | _____no_output_____ | Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
3. `swc_trace.generate_df_subset(list_of_voxels)` This function creates a smaller subset of the original dataframe with coordinates in img space. Each row is a vertex in the swc file with the following information: `sample number`, `structure identifier`, `x coordinate`, `y coordinate`, `z coordinate`, `radius of dendrite`, `sample number of parent`. The coordinates are given in the same spatial units as the image file when using `ngl.pull_vertex_list`. | """# Choose vertices to use for the subneuron
subneuron_df = df[0:3]
vertex_list = subneuron_df['sample'].array
# Define a neuroglancer session
url = "s3://open-neurodata/brainlit/brain1"
mip = 1
ngl = NeuroglancerSession(url, mip=mip)
# Get vertices
seg_id = 2
buffer = 10
img, bounds, vox_in_img_list = ngl.pull_vertex_list(seg_id=seg_id, v_id_list=vertex_list, buffer = buffer, expand = True)
df_subneuron = swc_trace.generate_df_subset(vox_in_img_list.tolist(),subneuron_start=0,subneuron_end=3 )
print(df_subneuron)
Downloading: 100%|██████████| 1/1 [00:00<00:00, 6.08it/s]
Downloading: 100%|██████████| 1/1 [00:00<00:00, 6.95it/s]
Downloading: 100%|██████████| 1/1 [00:00<00:00, 5.02it/s]
Downloading: 0%| | 0/4 [00:01<?, ?it/s] sample structure x y z r parent
0 1 0 106 106 112 1.0 -1
1 2 0 121 80 61 1.0 1
2 3 0 61 55 49 1.0 2
| Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
4. `swc_trace.get_df_voxel()` If we want to overlay the swc file with a corresponding image, we need to make sure that they are in the same coordinate space. Because an image is an array of voxels, it makes sense to convert the vertices from spatial units into voxel units. Given the `spacing` (spatial units/voxel) and `origin` (spatial units) of the image, `swc_to_voxel` does the conversion by using the following equation: $voxel = \frac{spatial - origin}{spacing}$ (a small NumPy sketch of this conversion follows the cell below). | # spacing = np.array([0.29875923,0.3044159,0.98840415])
# origin = np.array([70093.276,15071.596,29306.737])
# df_voxel = swc_trace.get_df_voxel(spacing=spacing, origin=origin)
# df_voxel.head() | _____no_output_____ | Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
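As a quick illustration of the conversion formula, here is a NumPy sketch (it assumes the `df`, `spacing`, and `origin` from the commented cell above):

```python
# voxel = (spatial - origin) / spacing, rounded to integer voxel indices
coords = df[["x", "y", "z"]].to_numpy()
voxel_coords = np.round((coords - origin) / spacing).astype(int)
```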
5. `swc_trace.get_graph()` A neuron is a graph with no cycles (tree). While napari does not support displaying graph objects, it can display multiple paths. The DataFrame already contains all the possible edges in the neuron. Each row in the DataFrame is an edge. For example, from the above we can see that `sample 2` has `parent 1`, which represents edge `(1,2)`. `sample 1` having `parent -1` means that `sample 1` is the root of the tree. `swc_trace.get_graph()` converts the NeuronTrace object into a networkx directed graph. | # G = swc_trace.get_graph()
# print('Number of nodes:', len(G.nodes))
# print('Number of edges:', len(G.edges))
# print('\n')
# print('Sample 1 coordinates (x,y,z)')
# print(G.nodes[1]['x'],G.nodes[1]['y'],G.nodes[1]['z']) | Number of nodes: 1650
Number of edges: 1649
Sample 1 coordinates (x,y,z)
-387 1928 -1846
| Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
6. `swc_trace.get_paths()` This function returns the NeuronTrace object as a list of non-overlapping paths. The union of the paths forms the graph. The algorithm works by (a sketch of this loop appears after the cell below):

1. Find the longest path in the graph ([networkx.algorithms.dag.dag_longest_path](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.dag_longest_path.html))
2. Remove the longest path from the graph
3. Repeat steps 1 and 2 until there are no more edges left in the graph | # paths = swc_trace.get_paths()
# print(f"The graph was decomposed into {len(paths)} paths") | The graph was decomposed into 179 paths
| Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
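A compact sketch of that decomposition loop on a generic `networkx` DAG (illustrative only; this is not brainlit's implementation):

```python
import networkx as nx

def decompose_into_paths(G):
    H = G.copy()
    paths = []
    while H.number_of_edges() > 0:
        longest = nx.dag_longest_path(H)                # step 1: longest remaining path
        paths.append(longest)
        H.remove_edges_from(zip(longest, longest[1:]))  # step 2: drop its edges
    return paths                                        # repeat until no edges remain
```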
7. `ViewerModel.add_shapes` napari displays "layers". The most common layer is the image layer. In order to display the neuron, we use `path` from the [shapes](https://napari.org/tutorials/shapes) layer. | # viewer = napari.Viewer(ndisplay=3)
# viewer.add_shapes(data=paths, shape_type='path', edge_color='white', name='Skeleton 2')
# nbscreenshot(viewer) | _____no_output_____ | Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
Loading sub-neuron

The image of the entire brain has dimensions of (33792, 25600, 13312) voxels. G-002 spans a sub-image of (7386, 9932, 5383) voxels. Both are too big to load in napari and overlay the neuron. To circumvent this, we can crop out a smaller region of the neuron, load the sub-neuron, and load the corresponding sub-image. In order to get a sub-neuron, we need to specify the `bounding_box` that will be used to crop the neuron. `bounding_box` is a length-2 tuple. The first element is one corner of the bounding box (inclusive) and the second element is the opposite corner of the bounding box (exclusive). Both corners are in voxel units. `add_swc` can do all of this automatically when given `bounding_box` by following these steps:

1. `read_s3` to read the swc file into a pd.DataFrame
2. `swc_to_voxel` to convert the coordinates from spatial to voxel coordinates
3. `df_to_graph` to convert the DataFrame into a networkx.DiGraph

**3.1 `swc.get_sub_neuron` to crop the graph by `bounding_box`**

4. `graph_to_paths` to convert from a graph into a list of paths
5. `ViewerModel.add_shapes` to add the paths as a shape layer into the napari viewer

8. `swc_trace.get_sub_neuron(bounding_box)` 9. `swc_trace.get_sub_neuron_paths(bounding_box)`

These functions crop a graph by removing edges that do not intersect the bounding box. An edge that intersects the bounding box has at least one of its vertices contained by the box. The algorithm follows this principle by checking the neighborhood of vertices. For each vertex *v* in the graph:

1. Find the vertices belonging to the local neighborhood of *v*
2. If vertex *v* or any of its local-neighborhood vertices are in the bounding box, do nothing. Otherwise, remove vertex *v* and its edges from the graph

We check the neighborhood of *v* along with *v* itself because we want the sub-neuron to show all edges that pass through the bounding box, including edges that are only partially contained (a sketch of this rule appears after the cell below). `swc_trace.get_sub_neuron(bounding_box)` returns a sub-neuron in graph format; `swc_trace.get_sub_neuron_paths(bounding_box)` returns a sub-neuron in paths format. | # # Create an NGL session to get the bounding box
# url = "s3://open-neurodata/brainlit/brain1"
# mip = 1
# ngl = NeuroglancerSession(url, mip=mip)
# img, bbbox, vox = ngl.pull_chunk(2, 300, 1)
# bbox = bbbox.to_list()
# box = (bbox[:3], bbox[3:])
# print(box)
# G_sub = s3_trace.get_sub_neuron(box)
# paths_sub = s3_trace.get_sub_neuron_paths(box)
# print(len(G_sub))
# viewer = napari.Viewer(ndisplay=3)
# viewer.add_shapes(data=paths_sub, shape_type='path', edge_color='blue', name='sub-neuron')
# # overlay corresponding image (random image but correct should be G-002_15312-4400-6448_15840-4800-6656.tif' )
# image_path = str(Path().resolve().parents[2] / "data" / "data_octree" / 'default.0.tif')
# img_comp = io.imread(image_path)
# img_comp = np.swapaxes(img_comp,0,2)
# viewer.add_image(img_comp)
# nbscreenshot(viewer) | 459
| Apache-2.0 | docs/notebooks/visualization/loading.ipynb | neurodata/brainl |
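For reference, the neighborhood rule can be sketched on a generic `networkx.DiGraph` whose nodes carry `x`, `y`, `z` attributes (an illustration of the algorithm described above, not brainlit's code):

```python
def crop_graph_by_bbox(G, bbox):
    # bbox = (corner_min, corner_max): inclusive lower corner, exclusive upper corner
    (x0, y0, z0), (x1, y1, z1) = bbox
    def inside(n):
        x, y, z = G.nodes[n]["x"], G.nodes[n]["y"], G.nodes[n]["z"]
        return x0 <= x < x1 and y0 <= y < y1 and z0 <= z < z1
    H = G.copy()
    for n in list(G.nodes):
        # keep v if v itself or any vertex in its local neighborhood is in the box
        neighborhood = [n, *G.predecessors(n), *G.successors(n)]
        if not any(inside(v) for v in neighborhood):
            H.remove_node(n)
    return H
```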
Deep Convolutional GANsIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2016 and has seen impressive results in generating new images; you can read the [original paper, here](https://arxiv.org/pdf/1511.06434.pdf).You'll be training DCGAN on the [Street View House Numbers](http://ufldl.stanford.edu/housenumbers/) (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, our goal is to create a DCGAN that can generate new, realistic-looking images of house numbers. We'll go through the following steps to do this:* Load in and pre-process the house numbers dataset* Define discriminator and generator networks* Train these adversarial networks* Visualize the loss over time and some sample, generated images Deeper Convolutional NetworksSince this dataset is more complex than our MNIST data, we'll need a deeper network to accurately identify patterns in these images and be able to generate new ones. Specifically, we'll use a series of convolutional or transpose convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get these convolutional networks to train. Besides these changes in network structure, training the discriminator and generator networks should be the same as before. That is, the discriminator will alternate training on real and fake (generated) images, and the generator will aim to trick the discriminator into thinking that its generated images are real! | # import libraries
import matplotlib.pyplot as plt
import numpy as np
import pickle as pkl
%matplotlib inline | _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
Getting the dataHere you can download the SVHN dataset. It's a dataset built-in to the PyTorch datasets library. We can load in training data, transform it into Tensor datatypes, then create dataloaders to batch our data into a desired size. | import torch
from torchvision import datasets
from torchvision import transforms
# Tensor transform
transform = transforms.ToTensor()
# SVHN training datasets
svhn_train = datasets.SVHN(root='data/', split='train', download=True, transform=transform)
batch_size = 128
num_workers = 0
# build DataLoaders for SVHN dataset
train_loader = torch.utils.data.DataLoader(dataset=svhn_train,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers)
| Using downloaded and verified file: data/train_32x32.mat
| MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
Visualize the DataHere I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real, training images that we'll pass to the discriminator. Notice that each image has _one_ associated, numerical label. | # obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # .next() was removed from newer PyTorch DataLoader iterators
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
plot_size=20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size//2, idx+1, xticks=[], yticks=[])  # integer division: subplot positions must be ints
ax.imshow(np.transpose(images[idx], (1, 2, 0)))
# print out the correct label for each image
# .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item())) | _____no_output_____
| MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
Pre-processing: scaling from -1 to 1We need to do a bit of pre-processing; we know that the output of our `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.) | # current range
img = images[0]
print('Min: ', img.min())
print('Max: ', img.max())
# helper scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
range_min, range_max = feature_range
x = x * (range_max - range_min) + range_min
return x
# scaled range
scaled_img = scale(img)
print('Scaled min: ', scaled_img.min())
print('Scaled max: ', scaled_img.max()) | Scaled min: tensor(-0.4196)
Scaled max: tensor(0.2627)
| MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
--- Define the ModelA GAN is comprised of two adversarial networks, a discriminator and a generator. DiscriminatorHere you'll build the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. * The inputs to the discriminator are 32x32x3 tensor images* You'll want a few convolutional, hidden layers* Then a fully connected layer for the output; as before, we want a sigmoid output, but we'll add that in the loss function, [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.htmlbcewithlogitsloss), laterFor the depths of the convolutional layers I suggest starting with 32 filters in the first layer, then double that depth as you add layers (to 64, 128, etc.). Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpooling layers.You'll also want to use batch normalization with [nn.BatchNorm2d](https://pytorch.org/docs/stable/nn.htmlbatchnorm2d) on each layer **except** the first convolutional layer and final, linear output layer. Helper `conv` function In general, each layer should look something like convolution > batch norm > leaky ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a convolutional + an optional batch norm layer. We'll create these using PyTorch's [Sequential container](https://pytorch.org/docs/stable/nn.htmlsequential), which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.Note: It is also suggested that you use a **kernel_size of 4** and a **stride of 2** for strided convolutions. | import torch.nn as nn
import torch.nn.functional as F
# helper conv function
def conv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a convolutional layer, with optional batch normalization.
"""
layers = []
conv_layer = nn.Conv2d(in_channels, out_channels,
kernel_size, stride, padding, bias=False)
# append conv layer
layers.append(conv_layer)
if batch_norm:
# append batchnorm layer
layers.append(nn.BatchNorm2d(out_channels))
# using Sequential container
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim=32):
super(Discriminator, self).__init__()
# complete init function
self.conv1 = conv(in_channels=3, out_channels=conv_dim, kernel_size=4, stride=2, batch_norm=False)
self.conv2 = conv(in_channels=conv_dim, out_channels=conv_dim*2, kernel_size=4, stride=2)
self.conv3 = conv(in_channels=conv_dim*2, out_channels=conv_dim*4, kernel_size=4, stride=2)
# 128*4*4
self.fc = nn.Linear(in_features=128*4*4, out_features=1)
def forward(self, x):
# complete forward function
x = self.conv1(x)
x = F.leaky_relu(x, negative_slope=0.2)
x = self.conv2(x)
x = F.leaky_relu(x, negative_slope=0.2)
x = self.conv3(x)
x = F.leaky_relu(x, negative_slope=0.2)
x = x.view(-1, 128*4*4)
x = self.fc(x)
return x
| _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
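A quick sanity check of the shapes described above: each stride-2 convolution halves the spatial size, so a 32x32 input shrinks to 4x4 while the depth grows 3 -> 32 -> 64 -> 128:

```python
x = torch.randn(1, 3, 32, 32)                # one fake RGB image batch
print(Discriminator(conv_dim=32)(x).shape)   # expected: torch.Size([1, 1])
```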
GeneratorNext, you'll build the generator network. The input will be our noise vector `z`, as before. And, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.What's new here is we'll use transpose convolutional layers to create our new images. * The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x512. * Then, we use batch normalization and a leaky ReLU activation. * Next is a series of [transpose convolutional layers](https://pytorch.org/docs/stable/nn.htmlconvtranspose2d), where you typically halve the depth and double the width and height of the previous layer. * And, we'll apply batch normalization and ReLU to all but the last of these hidden layers. Where we will just apply a `tanh` activation. Helper `deconv` functionFor each of these layers, the general scheme is transpose convolution > batch norm > ReLU, and so we'll define a function to put these layers together. This function will create a sequential series of a transpose convolutional + an optional batch norm layer. We'll create these using PyTorch's Sequential container, which takes in a list of layers and creates layers according to the order that they are passed in to the Sequential constructor.Note: It is also suggested that you use a **kernel_size of 4** and a **stride of 2** for transpose convolutions. | # helper deconv function
def deconv(in_channels, out_channels, kernel_size, stride=2, padding=1, batch_norm=True):
"""Creates a transposed-convolutional layer, with optional batch normalization.
"""
## TODO: Complete this function
## create a sequence of transpose + optional batch norm layers
layers = []
deconv_layer = nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride, padding, bias=False)
layers.append(deconv_layer)
if batch_norm:
layers.append(nn.BatchNorm2d(out_channels))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim=32):
super(Generator, self).__init__()
# complete init function
self.fc = nn.Linear(z_size, 4*4*512)
self.deconv1 = deconv(conv_dim*16, conv_dim*8, kernel_size=4)
self.deconv2 = deconv(conv_dim*8, conv_dim*4, kernel_size=4)
self.deconv3 = deconv(conv_dim*4, 3, kernel_size=4, batch_norm=False)
def forward(self, x):
# complete forward function
x = self.fc(x)
x = x.view(-1, 512, 4, 4)
x = self.deconv1(x)
x = F.relu(x)
x = self.deconv2(x)
x = F.relu(x)
x = self.deconv3(x)
x = torch.tanh(x)
return x
| _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
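And the mirror-image check for the generator: the fully connected layer reshapes to a 4x4x512 block, and each transpose convolution doubles the spatial size back up to 32x32x3:

```python
z = torch.randn(1, 100)                              # z_size is assumed to be 100 here
print(Generator(z_size=100, conv_dim=32)(z).shape)   # expected: torch.Size([1, 3, 32, 32])
```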
Build complete networkDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments. | # define hyperparams
conv_dim = 32
z_size = 100
# define discriminator and generator
D = Discriminator(conv_dim)
G = Generator(z_size=z_size, conv_dim=conv_dim)
print(D)
print()
print(G) | Discriminator(
(conv1): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
(conv2): Sequential(
(0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv3): Sequential(
(0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(fc): Linear(in_features=2048, out_features=1, bias=True)
)
Generator(
(fc): Linear(in_features=100, out_features=8192, bias=True)
(deconv1): Sequential(
(0): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(deconv2): Sequential(
(0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(deconv3): Sequential(
(0): ConvTranspose2d(128, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
)
| MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
Training on GPUCheck if you can train on GPU. If you can, set this as a variable and move your models to GPU. > Later, we'll also move any inputs our models and loss functions see (real_images, z, and ground truth labels) to GPU as well. | train_on_gpu = torch.cuda.is_available()
if train_on_gpu:
# move models to GPU
G.cuda()
D.cuda()
print('GPU available for training. Models moved to GPU')
else:
print('Training on CPU.')
| GPU available for training. Models moved to GPU
| MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
--- Discriminator and Generator LossesNow we need to calculate the losses. And this will be exactly the same as before. Discriminator Losses> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.The losses will by binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.htmlbcewithlogitsloss). This combines a `sigmoid` activation function **and** and binary cross entropy loss in one function.For the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the real images with a label = 1, indicating that these are real. The discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`. Generator LossThe generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real! | def real_loss(D_out, smooth=False):
batch_size = D_out.size(0)
# label smoothing
if smooth:
# smooth, real labels = 0.9
labels = torch.ones(batch_size)*0.9
else:
labels = torch.ones(batch_size) # real labels = 1
# move labels to GPU if available
if train_on_gpu:
labels = labels.cuda()
# binary cross entropy with logits loss
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
batch_size = D_out.size(0)
labels = torch.zeros(batch_size) # fake labels = 0
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
# calculate loss
loss = criterion(D_out.squeeze(), labels)
return loss | _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
OptimizersNot much new here, but notice how I am using a small learning rate and custom parameters for the Adam optimizers, This is based on some research into DCGAN model convergence. HyperparametersGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for them. | import torch.optim as optim
# params
lr = 0.0002
beta1=0.5
beta2=0.999
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2]) | _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
--- TrainingTraining will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases. Discriminator training1. Compute the discriminator loss on real, training images 2. Generate fake images3. Compute the discriminator loss on fake, generated images 4. Add up real and fake loss5. Perform backpropagation + an optimization step to update the discriminator's weights Generator training1. Generate fake images2. Compute the discriminator loss on fake images, using **flipped** labels!3. Perform backpropagation + an optimization step to update the generator's weights Saving SamplesAs we train, we'll also print out some loss statistics and save some generated "fake" samples.**Evaluation mode**Notice that, when we call our generator to create the samples to display, we set our model to evaluation mode: `G.eval()`. That's so the batch normalization layers will use the population statistics rather than the batch statistics (as they do during training), *and* so dropout layers will operate in eval() mode; not turning off any nodes for generating samples. | import pickle as pkl
# training hyperparams
num_epochs = 30
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 300
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
for epoch in range(num_epochs):
for batch_i, (real_images, _) in enumerate(train_loader):
batch_size = real_images.size(0)
# important rescaling step
real_images = scale(real_images)
# ============================================
# TRAIN THE DISCRIMINATOR
# ============================================
d_optimizer.zero_grad()
# 1. Train with real images
# Compute the discriminator losses on real images
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
# 2. Train with fake images
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
# move x to GPU, if available
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
# add up loss and perform backprop
d_loss = d_real_loss + d_fake_loss
d_loss.backward()
d_optimizer.step()
# =========================================
# TRAIN THE GENERATOR
# =========================================
g_optimizer.zero_grad()
# 1. Train with fake images and flipped labels
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
# using flipped labels!
D_fake = D(fake_images)
g_loss = real_loss(D_fake) # use real loss to flip labels
# perform backprop
g_loss.backward()
g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, num_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# generate and save sample, fake images
G.eval() # for generating samples
if train_on_gpu:
fixed_z = fixed_z.cuda()
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f) | Epoch [ 1/ 30] | d_loss: 1.4085 | g_loss: 0.9993
Epoch [ 1/ 30] | d_loss: 0.6737 | g_loss: 1.9478
Epoch [ 2/ 30] | d_loss: 0.7026 | g_loss: 2.6182
Epoch [ 2/ 30] | d_loss: 0.4292 | g_loss: 2.3596
Epoch [ 3/ 30] | d_loss: 0.2889 | g_loss: 2.7350
Epoch [ 3/ 30] | d_loss: 0.1361 | g_loss: 4.5357
Epoch [ 4/ 30] | d_loss: 0.2069 | g_loss: 4.2325
Epoch [ 4/ 30] | d_loss: 0.2646 | g_loss: 10.0169
Epoch [ 5/ 30] | d_loss: 0.1014 | g_loss: 5.4149
Epoch [ 5/ 30] | d_loss: 0.0929 | g_loss: 4.8199
Epoch [ 6/ 30] | d_loss: 0.0590 | g_loss: 6.2550
Epoch [ 6/ 30] | d_loss: 0.3863 | g_loss: 2.4094
Epoch [ 7/ 30] | d_loss: 0.1023 | g_loss: 4.1469
Epoch [ 7/ 30] | d_loss: 0.0450 | g_loss: 5.3767
Epoch [ 8/ 30] | d_loss: 0.1133 | g_loss: 3.1710
Epoch [ 8/ 30] | d_loss: 0.3909 | g_loss: 2.8371
Epoch [ 9/ 30] | d_loss: 0.0228 | g_loss: 7.7792
Epoch [ 9/ 30] | d_loss: 0.5372 | g_loss: 3.8941
Epoch [ 10/ 30] | d_loss: 0.0888 | g_loss: 4.3109
Epoch [ 10/ 30] | d_loss: 0.4739 | g_loss: 5.8511
Epoch [ 11/ 30] | d_loss: 0.1066 | g_loss: 4.5965
Epoch [ 11/ 30] | d_loss: 0.0896 | g_loss: 8.8515
Epoch [ 12/ 30] | d_loss: 0.0152 | g_loss: 6.1287
Epoch [ 12/ 30] | d_loss: 0.0917 | g_loss: 3.1805
Epoch [ 13/ 30] | d_loss: 0.5349 | g_loss: 6.7379
Epoch [ 13/ 30] | d_loss: 0.0511 | g_loss: 7.0306
Epoch [ 14/ 30] | d_loss: 0.0228 | g_loss: 4.7947
Epoch [ 14/ 30] | d_loss: 0.0280 | g_loss: 6.1609
Epoch [ 15/ 30] | d_loss: 0.0406 | g_loss: 7.4366
Epoch [ 15/ 30] | d_loss: 0.0334 | g_loss: 5.7624
Epoch [ 16/ 30] | d_loss: 0.0413 | g_loss: 6.2405
Epoch [ 16/ 30] | d_loss: 0.2505 | g_loss: 3.0691
Epoch [ 17/ 30] | d_loss: 0.1006 | g_loss: 6.3208
Epoch [ 17/ 30] | d_loss: 0.1634 | g_loss: 3.8810
Epoch [ 18/ 30] | d_loss: 0.0337 | g_loss: 5.8343
Epoch [ 18/ 30] | d_loss: 0.2797 | g_loss: 6.6325
Epoch [ 19/ 30] | d_loss: 0.3332 | g_loss: 4.8526
Epoch [ 19/ 30] | d_loss: 0.0802 | g_loss: 3.8929
Epoch [ 20/ 30] | d_loss: 0.3042 | g_loss: 3.1036
Epoch [ 20/ 30] | d_loss: 0.1205 | g_loss: 1.6828
Epoch [ 21/ 30] | d_loss: 0.0568 | g_loss: 3.0710
Epoch [ 21/ 30] | d_loss: 0.1022 | g_loss: 5.4802
Epoch [ 22/ 30] | d_loss: 0.2595 | g_loss: 8.0963
Epoch [ 22/ 30] | d_loss: 0.1082 | g_loss: 2.2444
Epoch [ 23/ 30] | d_loss: 0.0162 | g_loss: 8.6265
Epoch [ 23/ 30] | d_loss: 0.1178 | g_loss: 5.9417
Epoch [ 24/ 30] | d_loss: 0.1680 | g_loss: 4.4331
Epoch [ 24/ 30] | d_loss: 0.5456 | g_loss: 3.4422
Epoch [ 25/ 30] | d_loss: 0.2071 | g_loss: 2.1712
Epoch [ 25/ 30] | d_loss: 0.0729 | g_loss: 5.1144
Epoch [ 26/ 30] | d_loss: 0.0537 | g_loss: 3.7469
Epoch [ 26/ 30] | d_loss: 0.3997 | g_loss: 6.2408
Epoch [ 27/ 30] | d_loss: 0.0555 | g_loss: 2.4301
Epoch [ 27/ 30] | d_loss: 0.1863 | g_loss: 3.7632
Epoch [ 28/ 30] | d_loss: 0.2211 | g_loss: 3.6232
Epoch [ 28/ 30] | d_loss: 0.1328 | g_loss: 4.7159
Epoch [ 29/ 30] | d_loss: 0.1348 | g_loss: 3.1974
Epoch [ 29/ 30] | d_loss: 0.2392 | g_loss: 2.6332
Epoch [ 30/ 30] | d_loss: 0.0215 | g_loss: 6.6233
Epoch [ 30/ 30] | d_loss: 0.2591 | g_loss: 3.3791
| MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
Training lossHere we'll plot the training losses for the generator and discriminator, recorded after each epoch. | fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend() | _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
Generator samples from trainingHere we can view samples of images from the generator. We'll look at the images we saved during training. | # helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img +1)*255 / (2)).astype(np.uint8) # rescale to pixel range (0-255)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
_ = view_samples(-1, samples) | _____no_output_____ | MIT | DCGAN_Exercise.ipynb | ng572/DCGAN_SVHN |
get names of each condition for later | pd.Categorical(luminescence_raw_df.condition)
names = luminescence_raw_df.condition.unique()
for name in names:
print(name)
#get list of promoters
pd.Categorical(luminescence_raw_df.Promoter)
prom_names = luminescence_raw_df.Promoter.unique()
for name in prom_names:
print(name) | UBQ10
NIR1
NOS
STAP4
NRP
| MIT | src/plotting/luminescence/24.11.19/luminescence_plots.ipynb | Switham1/PromoterArchitecture |
test normality | #returns test statistic, p-value
# a single pass over the conditions is enough; the original outer loop over
# prom_names repeated the same Shapiro-Wilk results once per promoter
for name in names:
print('{}: {}'.format(name, stats.shapiro(luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == name])))
| nitrate_free: (0.7033216953277588, 0.0002697518502827734)
100mM nitrate_2hrs_morning: (0.7973607182502747, 0.00463036959990859)
100mM nitrate_overnight: (0.8101227879524231, 0.004972793627530336)
| MIT | src/plotting/luminescence/24.11.19/luminescence_plots.ipynb | Switham1/PromoterArchitecture |
not normal | #test variance
stats.levene(luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[0]],
luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[1]],
luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[2]])
test = luminescence_raw_df.groupby('Promoter')['nluc/fluc'].apply(list)  # .apply was left uncalled in the original; collecting each promoter's values is one plausible intent
test | _____no_output_____ | MIT | src/plotting/luminescence/24.11.19/luminescence_plots.ipynb | Switham1/PromoterArchitecture |
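Since the Shapiro-Wilk tests above reject normality, a non-parametric comparison such as Kruskal-Wallis would be a reasonable next step (a sketch, not part of the original analysis):

```python
stats.kruskal(
    luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[0]],
    luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[1]],
    luminescence_raw_df['nluc/fluc'][luminescence_raw_df.condition == names[2]])
```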
Loading libraries and data
| !pip install simpletransformers==0.61.13
!pip uninstall transformers
!pip install transformers==4.10.0
!git clone https://github.com/GoldenRMT/WikiSearch.git
!pip install googledrivedownloader
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
nltk.download('wordnet')
stopwords = stopwords.words("english")
lemmatizer = nltk.stem.WordNetLemmatizer()
from nltk.tokenize import RegexpTokenizer, word_tokenize, sent_tokenize
main_tokenizer = RegexpTokenizer(r'\w+',)
sec_tokenizer = RegexpTokenizer(r'\S+')
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn releases
import numpy as np
from google.colab import output
import urllib
import difflib
import WikiSearch.wikipedia.wikipedia as wikipedia
import pandas as pd
from bs4 import BeautifulSoup
from google_drive_downloader import GoogleDriveDownloader as gdd
gdd.download_file_from_google_drive(file_id='13Nuwm7BV-4RXI9JqjPTDE9rcdupkKqlF',
dest_path='/Data/AIIJC/aiijc_1578_goodFromTrain_pretrained.model') | Downloading 13Nuwm7BV-4RXI9JqjPTDE9rcdupkKqlF into /Data/AIIJC/aiijc_1578_goodFromTrain_pretrained.model... Done.
| MIT | solution_AIIJC(NLP)/Notebooks/singleAnswering_Aiijc.ipynb | Makual/AIIJC_NLP |
Functions for text preprocessing | def normal_form(word): # get the normal (lowercased) form of a word
word = word.lower()
return word
def clean_html(html): # strip HTML markup from a string
soup = BeautifulSoup(BeautifulSoup(html, "lxml").text)
return str(soup.body)
def get_good_tokens(text): # extract keyword tokens (lowercased, stopwords removed)
good_tokens = []
for tokens in tokenizer(text)[1]:
for token in tokens:
token = normal_form(token)
if token not in stopwords:
good_tokens.append(token)
return good_tokens
def tokenizer(text): # tokenize text into raw tokens and cleaned tokens
raw_tokens = sec_tokenizer.tokenize(text)
clean_tokens = main_tokenizer.tokenize_sents(raw_tokens)
nClean_tokens = []
for i in range(len(clean_tokens)):
nClean_tokens.append([])
for m in range(len(clean_tokens[i])):
if normal_form(clean_tokens[i][m]) != 's':
nClean_tokens[i].append(normal_form(clean_tokens[i][m]))
return (raw_tokens, nClean_tokens)
def similarity(s1, s2): # similarity ratio between two strings
normalized1 = s1.lower()
normalized2 = s2.lower()
matcher = difflib.SequenceMatcher(None, normalized1, normalized2)
return matcher.ratio()
def part_extractor(data,question,step,part_length): # extract the most relevant fragment (text, question, step, fragment length)
good_tokens = get_good_tokens(question)
tokens = tokenizer(data)
    for i in range(step-(len(tokens[0]) % step)): # pad the token list up to a multiple of the chunk length
tokens[0].append('')
tokens[1].append('')
    match_counter = 0 # counter of exact token matches
    best_part = '' # best-scoring chunk so far
    max_match_qty = 0 # highest number of matched tokens seen
main_clrTokens = tokens[1]
main_tokens = tokens[0]
    for i in range(0,len(tokens[0])-1,part_length): # find the most relevant chunk of the text
tokens = main_tokens[i:i+part_length-1]
clrTokens = main_clrTokens[i:i+part_length-1]
for good_token in good_tokens:
if in_tokens(good_token,clrTokens):
match_counter += 1
if match_counter > max_match_qty:
max_match_qty = match_counter
best_part = tokens
match_counter = 0
    fin = '' # reassemble the chosen chunk back into a string
for i in best_part:
fin += (i+' ')
return fin
def in_tokens(token,text):
for i in text:
for m in i:
if token == m:
return True
return False | _____no_output_____ | MIT | solution_AIIJC(NLP)/Notebooks/singleAnswering_Aiijc.ipynb | Makual/AIIJC_NLP |
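For intuition, here is a toy usage of `part_extractor` (a made-up two-sentence text, not the real Wikipedia pipeline):

```python
text = "Paris is the capital of France. Berlin is the capital of Germany."
# step=4 pads the token stream; part_length=8 scores 8-token chunks
print(part_extractor(text, "capital of France", 4, 8))
# -> roughly "Paris is the capital of France. Berlin "
```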
Loading the model and the question-answering function | model = joblib.load('/Data/AIIJC/aiijc_1578_goodFromTrain_pretrained.model')
model.args.max_seq_length = 512
model.args.silent = True
def answering(question):
text = question
good_tokens = get_good_tokens(text)
try:
urls = wikipedia.search(text,results=2)
except:
link_1 = '-'
link_2 = '-'
try:
link_1 = urls[0]
except:
link_1 = '-'
try:
link_2 = urls[1]
except:
link_2 = '-'
    # download the Wikipedia articles
try:
        link_1 = link_1.replace('https://en.wikipedia.org/wiki/','') # strip the URL prefix
        link_1 = urllib.parse.unquote(link_1) # decode percent-encoded characters
        data_1 = wikipedia.page(link_1,auto_suggest=False).content # fetch the wiki page text
data_1 = data_1.replace('\n',' ')
except:
pass
try:
        link_2 = link_2.replace('https://en.wikipedia.org/wiki/','') # strip the URL prefix
        link_2 = urllib.parse.unquote(link_2) # decode percent-encoded characters
        data_2 = wikipedia.page(link_2,auto_suggest=False).content # fetch the wiki page text
data_2 = data_2.replace('\n',' ')
except:
pass
    try: # extract a relevant fragment from the most relevant article
context = part_extractor(data_1,question,16,64)
except:
pass
    try: # append a shorter relevant fragment from the second-ranked article
context += ' ' + part_extractor(data_2,question,16,32)
except:
pass
try:
        predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0] # predict the answer
except:
predict = [{'answer':['']}]
predict[0]['answer'][0] = 'empty'
if predict[0]['answer'][0] == 'empty':
try:
context = part_extractor(data_1,question,16,64)
predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0]
except:
pass
if predict[0]['answer'][0] == 'empty':
try:
context = part_extractor(data_2,question,16,64)
predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0]
except:
pass
if predict[0]['answer'][0] == 'empty':
try:
context = part_extractor(data_1,question,16,128)
predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0]
except:
pass
if predict[0]['answer'][0] == 'empty':
try:
context = part_extractor(data_2,question,16,128)
predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0]
except:
pass
if predict[0]['answer'][0] == 'empty':
try:
context = part_extractor(data_1,question,16,256)
predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0]
except:
pass
if predict[0]['answer'][0] == 'empty':
try:
context = part_extractor(data_2,question,16,256)
predict = model.predict([{'context': context,'qas': [{'id': 0, 'question': question}]}])[0]
except:
pass
return predict[0]['answer'][0] | _____no_output_____ | MIT | solution_AIIJC(NLP)/Notebooks/singleAnswering_Aiijc.ipynb | Makual/AIIJC_NLP |
Checking that the function works and timing it | import time
time_1 = time.time()
print(answering("What is the name of Trump first daughter?"))
print('Query processing time: ' + str(time.time()-time_1)) | Ivana Marie "Ivanka" Trump
Query processing time: 1.6687853336334229
| MIT | solution_AIIJC(NLP)/Notebooks/singleAnswering_Aiijc.ipynb | Makual/AIIJC_NLP |
CNTK 101: Logistic Regression and ML PrimerThis tutorial is targeted to individuals who are new to CNTK and to machine learning. In this tutorial, you will train a simple yet powerful machine learning model that is widely used in industry for a variety of applications. The model trained below scales to massive data sets in the most expeditious manner by harnessing computational scalability leveraging the computational resources you may have (one or more CPU cores, one or more GPUs, a cluster of CPUs or a cluster of GPUs), transparently via the CNTK library.The following notebook uses Python APIs. If you are looking for this example in BrainScript, please look [here](https://github.com/Microsoft/CNTK/tree/release/2.6/Tutorials/HelloWorld-LogisticRegression). Introduction**Problem**:A cancer hospital has provided data and wants us to determine if a patient has a fatal [malignant](https://en.wikipedia.org/wiki/Malignancy) cancer vs. a benign growth. This is known as a classification problem. To help classify each patient, we are given their age and the size of the tumor. Intuitively, one can imagine that younger patients and/or patients with small tumors are less likely to have a malignant cancer. The data set simulates this application: each observation is a patient represented as a dot (in the plot below), where red indicates malignant and blue indicates benign. Note: This is a toy example for learning; in real life many features from different tests/examination sources and the expertise of doctors would play into the diagnosis/treatment decision for a patient. | # Figure 1
from IPython.display import Image  # assumed import; the dump omits the notebook's original display setup
Image(url="https://www.cntk.ai/jup/cancer_data_plot.jpg", width=400, height=400) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
**Goal**:Our goal is to learn a classifier that can automatically label any patient into either the benign or malignant categories given two features (age and tumor size). In this tutorial, we will create a linear classifier, a fundamental building-block in deep networks. | # Figure 2
Image(url= "https://www.cntk.ai/jup/cancer_classify_plot.jpg", width=400, height=400) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
In the figure above, the green line represents the model learned from the data and separates the blue dots from the red dots. In this tutorial, we will walk you through the steps to learn the green line. Note: this classifier does make mistakes, where a couple of blue dots are on the wrong side of the green line. However, there are ways to fix this and we will look into some of the techniques in later tutorials.

**Approach**: Any learning algorithm typically has five stages: data reading, data preprocessing, creating a model, learning the model parameters, and evaluating the model (a.k.a. testing/prediction).

1. Data reading: We generate simulated data sets with each sample having two features (plotted below) indicative of the age and tumor size.
2. Data preprocessing: Often, the individual features such as size or age need to be scaled. Typically, one would scale the data between 0 and 1. To keep things simple, we are not doing any scaling in this tutorial (for details see [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling)).
3. Model creation: We introduce a basic linear model in this tutorial.
4. Learning the model: This is also known as training. While fitting a linear model can be done in a variety of ways (e.g., [linear regression](https://en.wikipedia.org/wiki/Linear_regression)), in CNTK we use Stochastic Gradient Descent, a.k.a. [SGD](https://en.wikipedia.org/wiki/Stochastic_gradient_descent).
5. Evaluation: This is also known as testing, where one evaluates the model on data sets with known labels (a.k.a. ground truth) that were never used for training. This allows us to assess how a model would perform in real-world (previously unseen) observations.

Logistic Regression

[Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) is a fundamental machine learning technique that uses a linear weighted combination of features and generates the probability of predicting different classes. In our case, the classifier will generate a probability in [0,1] which can then be compared to a threshold (such as 0.5) to produce a binary label (0 or 1). However, the method shown can easily be extended to multiple classes. | # Figure 3
Image(url= "https://www.cntk.ai/jup/logistic_neuron.jpg", width=300, height=200) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
In the above figure, contributions from different input features are linearly weighted and aggregated. The resulting sum is mapped to a (0, 1) range via a [sigmoid]( https://en.wikipedia.org/wiki/Sigmoid_function) function. For classifiers with more than two output labels, one can use a [softmax](https://en.wikipedia.org/wiki/Softmax_function) function. | # Import the relevant components
from __future__ import print_function
import numpy as np
import sys
import os
import cntk as C
import cntk.tests.test_utils
cntk.tests.test_utils.set_device_from_pytest_env() # (only needed for our build system)
C.cntk_py.set_fixed_random_seed(1) # fix the random seed so that LR examples are repeatable | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Data GenerationLet us generate some synthetic data emulating the cancer example using the `numpy` library. We have two input features (represented in two-dimensions) and two output classes (benign/blue or malignant/red). In our example, each observation (a single 2-tuple of features - age and size) in the training data has a label (blue or red). Because we have two output labels, we call this a binary classification task. | # Define the network
input_dim = 2
num_output_classes = 2 | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Input and LabelsIn this tutorial we are generating synthetic data using the `numpy` library. In real-world problems, one would use a [reader](https://docs.microsoft.com/en-us/cognitive-toolkit/brainscript-and-python---understanding-and-extending-readers), that would read feature values (`features`: *age* and *tumor size*) corresponding to each observation (patient). The simulated *age* variable is scaled down to have a similar range to that of the other variable. This is a key aspect of data pre-processing that we will learn more about in later tutorials. Note: in general, observations and labels can reside in higher dimensional spaces (when more features or classifications are available) and are then represented as [tensors](https://en.wikipedia.org/wiki/Tensor) in CNTK. More advanced tutorials introduce the handling of high dimensional data. | # Ensure that we always get the same results
np.random.seed(0)
# Helper function to generate a random data sample
def generate_random_data_sample(sample_size, feature_dim, num_classes):
# Create synthetic data using NumPy.
Y = np.random.randint(size=(sample_size, 1), low=0, high=num_classes)
# Make sure that the data is separable
X = (np.random.randn(sample_size, feature_dim)+3) * (Y+1)
# Specify the data type to match the input variable used later in the tutorial
# (default type is double)
X = X.astype(np.float32)
    # One-hot encode the labels: class 0 becomes the vector "1 0",
    # class 1 becomes "0 1", and so on for additional classes
class_ind = [Y==class_number for class_number in range(num_classes)]
Y = np.asarray(np.hstack(class_ind), dtype=np.float32)
return X, Y
# Create the input variables denoting the features and the label data. Note: the input
# does not need additional info on the number of observations (Samples) since CNTK creates only
# the network topology first
mysamplesize = 32
features, labels = generate_random_data_sample(mysamplesize, input_dim, num_output_classes) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Let us visualize the input data.**Note**: If the import of `matplotlib.pyplot` fails, please run `conda install matplotlib`, which will fix the `pyplot` version dependencies. If you are on a python environment different from Anaconda, then use `pip install matplotlib`. | # Plot the data
import matplotlib.pyplot as plt
%matplotlib inline
# let 0 represent malignant/red and 1 represent benign/blue
colors = ['r' if label == 0 else 'b' for label in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.xlabel("Age (scaled)")
plt.ylabel("Tumor size (in cm)")
plt.show() | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Model CreationA logistic regression (a.k.a. LR) network is a simple building block, but has powered many ML applications in the past decade. LR is a simple linear model that takes as input a vector of numbers describing the properties of what we are classifying (also known as a feature vector, $\bf{x}$, the blue nodes in the figure below) and emits the *evidence* ($z$) (output of the green node, also known as "activation"). Each feature in the input layer is connected to an output node by a corresponding weight $w$ (indicated by the black lines of varying thickness). | # Figure 4
Image(url= "https://www.cntk.ai/jup/logistic_neuron2.jpg", width=300, height=200) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
The first step is to compute the evidence for an observation. $$z = \sum_{i=1}^n w_i \times x_i + b = \textbf{w} \cdot \textbf{x} + b$$ where $\bf{w}$ is the weight vector of length $n$ and $b$ is known as the [bias](https://www.quora.com/What-does-the-bias-term-represent-in-logistic-regression) term. Note: we use **bold** notation to denote vectors. The computed evidence is mapped to a (0, 1) range using a `sigmoid` (when the outcome can be in one of two possible classes) or a `softmax` function (when the outcome can be in one of more than two possible classes).Network input and output: - **input** variable (a key CNTK concept): >An **input** variable is a user-code-facing container where user-provided code fills in different observations (a data point or sample of data points, equivalent to (age, size) tuples in our example) as inputs to the model function during model learning (a.k.a. training) and model evaluation (a.k.a. testing). Thus, the shape of the `input` must match the shape of the data that will be provided. For example, if each data point was a grayscale image of height 10 pixels and width 5 pixels, the input feature would be a vector of 50 floating-point values representing the intensity of each of the 50 pixels, and could be written as `C.input_variable(10*5, np.float32)`. Similarly, in our example the dimensions are age and tumor size, thus `input_dim` = 2. More on data and their dimensions will appear in separate tutorials. | feature = C.input_variable(input_dim, np.float32)
Network setupThe `linear_layer` function is a straightforward implementation of the equation above. We perform two operations: 1. multiply the weights ($\bf{w}$) with the features ($\bf{x}$) using the CNTK `times` operator, and 2. add the bias term ($b$). These CNTK operations are optimized for execution on the available hardware, and the implementation hides the complexity away from the user. | # Define a dictionary to store the model parameters
mydict = {}
def linear_layer(input_var, output_dim):
input_dim = input_var.shape[0]
weight_param = C.parameter(shape=(input_dim, output_dim))
bias_param = C.parameter(shape=(output_dim))
mydict['w'], mydict['b'] = weight_param, bias_param
return C.times(input_var, weight_param) + bias_param | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
`z` will be used to represent the output of the network. | output_dim = num_output_classes
z = linear_layer(feature, output_dim) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Learning model parametersNow that the network is set up, we would like to learn the parameters $\bf w$ and $b$ for our simple linear layer. To do so we convert the computed evidence ($z$) into a set of predicted probabilities ($\textbf p$) using a `softmax` function.$$ \textbf{p} = \mathrm{softmax}(z)$$ The `softmax` is an activation function that normalizes the accumulated evidence into a probability distribution over the classes (details of [softmax](https://www.cntk.ai/pythondocs/cntk.ops.html#cntk.ops.softmax)). Other choices of activation function can be found [here](https://cntk.ai/pythondocs/cntk.layers.layers.html#cntk.layers.layers.Activation). TrainingThe output of the `softmax` is the probabilities of an observation belonging to each of the respective classes. For training the classifier, we need to determine what behavior the model needs to mimic. In other words, we want the generated probabilities to be as close as possible to the observed labels. We can accomplish this by minimizing the difference between our output and the ground-truth labels. This difference is calculated by the *cost* or *loss* function.[Cross entropy](http://cntk.ai/pythondocs/cntk.ops.html#cntk.ops.cross_entropy_with_softmax) is a popular loss function. It is defined as:$$ H(p) = - \sum_{j=1}^{| \textbf y |} y_j \log (p_j) $$ where $p$ is our predicted probability from the `softmax` function and $y$ is the ground-truth label, provided with the training data. In the two-class example, the `label` variable has two dimensions (equal to the `num_output_classes` or $| \textbf y |$). Generally speaking, the label variable will have $| \textbf y |$ elements with 0 everywhere except at the index of the true class of the data point, where it will be 1. Understanding the [details](http://colah.github.io/posts/2015-09-Visual-Information/) of the cross-entropy function is highly recommended. | label = C.input_variable(num_output_classes, np.float32)
loss = C.cross_entropy_with_softmax(z, label) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
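To make the cross-entropy loss above concrete, here is a minimal NumPy sketch of softmax followed by cross entropy for a single observation; the array values are made up for illustration, and this is our sketch rather than the CNTK implementation.
```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.0, 0.5])           # hypothetical evidence for the two classes
y = np.array([1.0, 0.0])           # one-hot ground-truth label
p = softmax(z)
loss = -np.sum(y * np.log(p))      # H(p) from the formula above
print(p, loss)
```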
EvaluationIn order to evaluate the classification, we can compute the [classification_error](https://www.cntk.ai/pythondocs/cntk.metrics.html#cntk.metrics.classification_error), which is 0 if our model was correct (it assigned the true label the most probability), otherwise 1. | eval_error = C.classification_error(z, label)
Configure trainingThe trainer strives to minimize the `loss` function using an optimization technique. In this tutorial, we will use [Stochastic Gradient Descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) (`sgd`), one of the most popular techniques. Typically, one starts with random initialization of the model parameters (the weights and biases, in our case). For each observation, the `sgd` optimizer can calculate the `loss` or error between the predicted label and the corresponding ground-truth label, and apply [gradient descent](http://www.statisticsviews.com/details/feature/5722691/Getting-to-the-Bottom-of-Regression-with-Gradient-Descent.html) to generate a new set of model parameters after each observation. The aforementioned process of updating all parameters after each observation is attractive because it does not require the entire data set (all observations) to be loaded in memory and also computes the gradient over fewer data points, thus allowing for training on large data sets. However, the updates generated using a single observation at a time can vary wildly between iterations. A middle ground is to load a small set of observations into the model and use an average of the `loss` or error from that set to update the model parameters. This subset is called a *minibatch*.With minibatches we often sample observations from the larger training dataset. We repeat the process of updating the model parameters using different combinations of training samples, and over a period of time minimize the `loss` (and the error). When the incremental error rates are no longer changing significantly, or after a preset maximum number of minibatches have been processed, we claim that our model is trained.One of the key parameters of [optimization](https://en.wikipedia.org/wiki/Category:Convex_optimization) is the `learning_rate`. For now, we can think of it as a scaling factor that modulates how much we change the parameters in any iteration. We will cover more details in later tutorials. With this information, we are ready to create our trainer. | # Instantiate the trainer object to drive the model training
learning_rate = 0.5
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, eval_error), [learner]) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
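As a rough mental model of what the `sgd` learner does on each minibatch, here is a hedged NumPy sketch of the update rule (not the CNTK internals; the gradient values below are placeholders standing in for the minibatch-averaged gradients CNTK computes for us):
```python
import numpy as np

np.random.seed(0)
W = np.zeros((2, 2))               # weights: input_dim x num_output_classes
b = np.zeros(2)                    # bias
lr = 0.5                           # learning rate

# Stand-ins for the minibatch-averaged gradients of the loss
grad_W = np.random.randn(2, 2) / 25.0
grad_b = np.random.randn(2) / 25.0

# One SGD step: move the parameters against the gradient
W -= lr * grad_W
b -= lr * grad_b
print(W, b)
```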
First, let us create some helper functions that will be needed to visualize different functions associated with training. Note: these convenience functions are for understanding what goes on under the hood. | # Define a utility function to compute the moving average.
# A more efficient implementation is possible with np.cumsum() function
def moving_average(a, w=10):
if len(a) < w:
return a[:]
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
# Define a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss, eval_error = "NA", "NA"
if mb % frequency == 0:
training_loss = trainer.previous_minibatch_loss_average
eval_error = trainer.previous_minibatch_evaluation_average
if verbose:
print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}".format(mb, training_loss, eval_error))
return mb, training_loss, eval_error | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
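For reference, the more efficient `np.cumsum` variant mentioned in the comment above could look like the following sketch of ours; it produces the same trailing averages as the loop version for indices past the window:
```python
import numpy as np

def moving_average_cumsum(a, w=10):
    a = np.asarray(a, dtype=float)
    if len(a) < w:
        return a.copy()
    c = np.concatenate(([0.0], np.cumsum(a)))
    out = a.copy()
    idx = np.arange(w, len(a))
    out[idx] = (c[idx] - c[idx - w]) / w   # mean of the w values before each index
    return out
```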
Run the trainerWe are now ready to train our Logistic Regression model. We want to decide what data we need to feed into the training engine.In this example, each iteration of the optimizer will work on 25 samples (25 dots w.r.t. the plot above) a.k.a. `minibatch_size`. We would like to train on 20000 observations. If the number of samples in the data is only 10000, the trainer will make 2 passes through the data. This is represented by `num_minibatches_to_train`. Note: in a real world scenario, we would be given a certain amount of labeled data (in the context of this example, (age, size) observations and their labels (benign / malignant)). We would use a large number of observations for training, say 70%, and set aside the remainder for the evaluation of the trained model.With these parameters we can proceed with training our simple feedforward network. | # Initialize the parameters for the trainer
minibatch_size = 25
num_samples_to_train = 20000
num_minibatches_to_train = int(num_samples_to_train / minibatch_size)
from collections import defaultdict
# Run the trainer and perform model training
training_progress_output_freq = 50
plotdata = defaultdict(list)
for i in range(0, num_minibatches_to_train):
features, labels = generate_random_data_sample(minibatch_size, input_dim, num_output_classes)
# Assign the minibatch data to the input variables and train the model on the minibatch
trainer.train_minibatch({feature : features, label : labels})
batchsize, loss, error = print_training_progress(trainer, i,
training_progress_output_freq, verbose=1)
if not (loss == "NA" or error =="NA"):
plotdata["batchsize"].append(batchsize)
plotdata["loss"].append(loss)
plotdata["error"].append(error)
# Compute the moving average loss to smooth out the noise in SGD
plotdata["avgloss"] = moving_average(plotdata["loss"])
plotdata["avgerror"] = moving_average(plotdata["error"])
# Plot the training loss and the training error
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss')
plt.show()
plt.subplot(212)
plt.plot(plotdata["batchsize"], plotdata["avgerror"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error')
plt.show() | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Run evaluation / Testing Now that we have trained the network, let us evaluate the trained network on data that hasn't been used for training. This is called **testing**. Let us create some new data and evaluate the average error and loss on this set. This is done using `trainer.test_minibatch`. Note the error on this previously unseen data is comparable to the training error. This is a **key** check. Should the error be larger than the training error by a large margin, it indicates that the trained model will not perform well on data that it has not seen during training. This is known as [overfitting](https://en.wikipedia.org/wiki/Overfitting). There are several ways to address overfitting that are beyond the scope of this tutorial, but the Cognitive Toolkit provides the necessary components to address it.Note: we are testing on a single minibatch for illustrative purposes. In practice, one runs several minibatches of test data and reports the average. **Question** Why is this suggested? Try computing the test error over several sets of generated data samples and plotting it using the plotting functions used for training. Do you see a pattern? | # Run the trained model on a newly generated dataset
test_minibatch_size = 25
features, labels = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)
trainer.test_minibatch({feature : features, label : labels}) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
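Following the note above about averaging over several test minibatches, a small sketch (reusing the variables defined earlier in this notebook) could be:
```python
# Average the test error over several freshly generated minibatches;
# test_minibatch returns the average evaluation error for the batch
num_test_batches = 10
errors = []
for _ in range(num_test_batches):
    f, l = generate_random_data_sample(test_minibatch_size, input_dim, num_output_classes)
    errors.append(trainer.test_minibatch({feature: f, label: l}))
print("Average test error: {:.2f}".format(sum(errors) / len(errors)))
```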
Checking prediction / evaluation For evaluation, we apply `softmax` to the output of the network to obtain a probability distribution over the two classes: the probability of each observation being malignant or benign. | out = C.softmax(z)
result = out.eval({feature : features}) | _____no_output_____ | MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Let us compare the ground-truth label with the predictions. They should be in agreement.**Question:** - How many predictions were mislabeled? Can you change the code below to identify which observations were misclassified? | print("Label :", [np.argmax(label) for label in labels])
print("Predicted:", [np.argmax(x) for x in result]) | Label : [1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1]
Predicted: [1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1]
| MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
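One possible answer to the question above, using the `labels` and `result` arrays from the previous cells (our sketch, not part of the original tutorial):
```python
import numpy as np

truth = np.array([np.argmax(l) for l in labels])
pred = np.array([np.argmax(r) for r in result])
misclassified = np.where(truth != pred)[0]
print("Number mislabeled:", len(misclassified))
print("Misclassified observation indices:", misclassified)
```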
VisualizationIt is desirable to visualize the results. In this example, the data can be conveniently plotted using two spatial dimensions for the input (patient age on the x-axis and tumor size on the y-axis), and a color dimension for the output (red for malignant and blue for benign). For data with higher dimensions, visualization can be challenging. There are advanced dimensionality reduction techniques, such as [t-sne](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) that allow for such visualizations. | # Model parameters
print(mydict['b'].value)
bias_vector = mydict['b'].value
weight_matrix = mydict['w'].value
# Plot the data
import matplotlib.pyplot as plt
# let 0 represent malignant/red, and 1 represent benign/blue
colors = ['r' if label == 0 else 'b' for label in labels[:,0]]
plt.scatter(features[:,0], features[:,1], c=colors)
plt.plot([0, bias_vector[0]/weight_matrix[0][1]],
[ bias_vector[1]/weight_matrix[0][0], 0], c = 'g', lw = 3)
plt.xlabel("Patient age (scaled)")
plt.ylabel("Tumor size (in cm)")
plt.show() | [ 8.00007153 -8.00006485]
| MIT | Tutorials/CNTK_101_LogisticRegression.ipynb | shyamalschandra/CNTK |
Chapter 3: Getting Started with Machine Learning | # Install the required libraries
!pip install japanize_matplotlib | tail -n 1
!pip install torchviz | tail -n 1
# Import the required libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
#import japanize_matplotlib
from IPython.display import display
# PyTorch-related libraries
import torch
from torchviz import make_dot
# Change the default font size
plt.rcParams['font.size'] = 14
# Change the default figure size
plt.rcParams['figure.figsize'] = (6,6)
# Show grid lines by default
plt.rcParams['axes.grid'] = True
# Display precision for NumPy floating-point values
np.set_printoptions(suppress=True, precision=4)
# Turn off warning messages
import warnings
warnings.simplefilter('ignore') | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
3.4 How to implement gradient descent | def L(u, v):
return 3 * u**2 + 3 * v**2 - u*v + 7*u - 7*v + 10
def Lu(u, v):
return 6* u - v + 7
def Lv(u, v):
return 6* v - u - 7
u = np.linspace(-5, 5, 501)
v = np.linspace(-5, 5, 501)
U, V = np.meshgrid(u, v)
Z = L(U, V)
# Simulation of gradient descent
W = np.array([4.0, 4.0])
W1 = [W[0]]
W2 = [W[1]]
N = 21
alpha = 0.05
for i in range(N):
W = W - alpha *np.array([Lu(W[0], W[1]), Lv(W[0], W[1])])
W1.append(W[0])
W2.append(W[1])
n_loop=11
WW1 = np.array(W1[:n_loop])
WW2 = np.array(W2[:n_loop])
ZZ = L(WW1, WW2)
fig = plt.figure(figsize=(8,8))
ax = plt.axes(projection='3d')
ax.set_zlim(0,250)
ax.set_xlabel('W')
ax.set_ylabel('B')
ax.set_zlabel('loss')
ax.view_init(50, 240)
ax.xaxis._axinfo["grid"]['linewidth'] = 2.
ax.yaxis._axinfo["grid"]['linewidth'] = 2.
ax.zaxis._axinfo["grid"]['linewidth'] = 2.
ax.contour3D(U, V, Z, 100, cmap='Blues', alpha=0.7)
ax.plot3D(WW1, WW2, ZZ, 'o-', c='k', alpha=1, markersize=7)
plt.show()
fig.savefig('fig03-06.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
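Because L(u, v) is quadratic, the minimizer found by the simulation can be cross-checked in closed form by solving the linear system given by the zero gradient, i.e. 6u - v = -7 and -u + 6v = 7 (our verification sketch, not code from the book):
```python
import numpy as np

A = np.array([[6.0, -1.0], [-1.0, 6.0]])
b = np.array([-7.0, 7.0])
u_opt, v_opt = np.linalg.solve(A, b)
print(u_opt, v_opt, L(u_opt, v_opt))   # expect u = -1, v = 1, minimum value 3
```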
3.5 Data preprocessing. Using height and weight data for five people, the goal is to find the optimal straight line for predicting weight from height with a linear function. | # Declare the sample data
sampleData1 = np.array([
[166, 58.7],
[176.0, 75.7],
[171.0, 62.1],
[173.0, 70.4],
[169.0,60.1]
])
print(sampleData1)
# Extract the heights into variable x and the weights into variable y
# so they are easier to handle in the machine-learning model
x = sampleData1[:,0]
y = sampleData1[:,1]
import matplotlib
# Set the font to 'Malgun Gothic'
matplotlib.rcParams['font.family'] = 'Malgun Gothic'
# Prevent the minus (-) glyph from rendering incorrectly with Korean fonts
matplotlib.rcParams['axes.unicode_minus'] = False
# Check the data with a scatter plot
fig1 = plt.gcf()
plt.scatter(x, y, c='k', s=50)
plt.xlabel('$x$: height (cm)')
plt.ylabel('$y$: weight (kg)')
plt.title('Relationship between height and weight')
plt.show()
plt.draw()
fig1.savefig('ex03-03.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Coordinate transformation. In machine-learning models, it is desirable for the data to take values close to 0. Therefore, shift both x and y so that their means become 0, and call the new coordinate system X and Y. | X = x - x.mean()
Y = y - y.mean()
# Check the result with a scatter plot
fig1 = plt.gcf()
plt.scatter(X, Y, c='k', s=50)
plt.xlabel('$X$')
plt.ylabel('$Y$')
plt.title('Relationship between height and weight after preprocessing')
plt.show()
plt.draw()
fig1.savefig('ex03-04.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
3.6 Computing predictions | # Convert X and Y to tensor variables
X = torch.tensor(X).float()
Y = torch.tensor(Y).float()
# Check the results
print(X)
print(Y)
# Define the weight variables
# W and B require gradient computation, so set requires_grad=True
W = torch.tensor(1.0, requires_grad=True).float()
B = torch.tensor(1.0, requires_grad=True).float()
# The prediction function is linear
def pred(X):
    return W * X + B
# Compute the predictions
Yp = pred(X)
# Display the result
print(Yp)
# Visualize the computation graph of the predictions
params = {'W': W, 'B': B}
g = make_dot(Yp, params=params)
display(g)
g.render('ex03-08', format='tif')
!dot -Ttif -Gdpi=300 ex03-08 -o ex03-08_large.tif | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
3.7 Computing the loss | # The loss function is mean squared error
def mse(Yp, Y):
loss = ((Yp - Y) ** 2).mean()
return loss
# Compute the loss
loss = mse(Yp, Y)
# Display the result
print(loss)
# Visualize the computation graph of the loss
params = {'W': W, 'B': B}
g = make_dot(loss, params=params)
display(g)
g.render('ex03-11', format='tif')
!dot -Ttif -Gdpi=300 ex03-11 -o ex03-11_large.tif | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
3.8 Computing the gradients | # Compute the gradients
loss.backward()
# Check the gradient values
print(W.grad)
print(B.grad) | tensor(-19.0400)
tensor(2.0000)
| Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
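The autograd values above can be verified by hand: for mean squared error, the gradient with respect to W is mean(2 * (W*X + B - Y) * X) and with respect to B is mean(2 * (W*X + B - Y)). A small check of our own, not from the book:
```python
# Manual gradient check against autograd (should print -19.04 and 2.00)
with torch.no_grad():
    residual = W * X + B - Y
    print((2 * residual * X).mean())   # matches W.grad
    print((2 * residual).mean())       # matches B.grad
```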
3.9 Updating the parameters | # Define the learning rate
lr = 0.001
# Update the parameters based on the gradients
W -= lr * W.grad
B -= lr * B.grad | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Since W and B have already been used in a computation, their values cannot be updated in this state. It must be written as follows. | # Update the parameters based on the gradients
# with torch.no_grad() must be attached
with torch.no_grad():
W -= lr * W.grad
B -= lr * B.grad
# Reset the computed gradient values
W.grad.zero_()
B.grad.zero_()
# Check the parameters and gradient values
print(W)
print(B)
print(W.grad)
print(B.grad) | tensor(1.0190, requires_grad=True)
tensor(0.9980, requires_grad=True)
tensor(0.)
tensor(0.)
| Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Both original values were 1.0, so we can see that W increased slightly and B decreased slightly. Repeating this computation to find the optimal W and B is gradient descent. 3.10 Iterative computation | # Initialization
# Treat W and B as variables
W = torch.tensor(1.0, requires_grad=True).float()
B = torch.tensor(1.0, requires_grad=True).float()
# Number of iterations
num_epochs = 500
# Learning rate
lr = 0.001
# Initialize the array used for recording
history = np.zeros((0, 2))
# Loop processing
for epoch in range(num_epochs):
    # Compute the predictions
    Yp = pred(X)
    # Compute the loss
    loss = mse(Yp, Y)
    # Compute the gradients
    loss.backward()
    with torch.no_grad():
        # Update the parameters
        W -= lr * W.grad
        B -= lr * B.grad
        # Reset the gradient values
        W.grad.zero_()
        B.grad.zero_()
    # Record the loss
if (epoch %10 == 0):
item = np.array([epoch, loss.item()])
history = np.vstack((history, item))
print(f'epoch = {epoch} loss = {loss:.4f}')
| epoch = 0 loss = 13.3520
epoch = 10 loss = 10.3855
epoch = 20 loss = 8.5173
epoch = 30 loss = 7.3364
epoch = 40 loss = 6.5858
epoch = 50 loss = 6.1047
epoch = 60 loss = 5.7927
epoch = 70 loss = 5.5868
epoch = 80 loss = 5.4476
epoch = 90 loss = 5.3507
epoch = 100 loss = 5.2805
epoch = 110 loss = 5.2275
epoch = 120 loss = 5.1855
epoch = 130 loss = 5.1507
epoch = 140 loss = 5.1208
epoch = 150 loss = 5.0943
epoch = 160 loss = 5.0703
epoch = 170 loss = 5.0480
epoch = 180 loss = 5.0271
epoch = 190 loss = 5.0074
epoch = 200 loss = 4.9887
epoch = 210 loss = 4.9708
epoch = 220 loss = 4.9537
epoch = 230 loss = 4.9373
epoch = 240 loss = 4.9217
epoch = 250 loss = 4.9066
epoch = 260 loss = 4.8922
epoch = 270 loss = 4.8783
epoch = 280 loss = 4.8650
epoch = 290 loss = 4.8522
epoch = 300 loss = 4.8399
epoch = 310 loss = 4.8281
epoch = 320 loss = 4.8167
epoch = 330 loss = 4.8058
epoch = 340 loss = 4.7953
epoch = 350 loss = 4.7853
epoch = 360 loss = 4.7756
epoch = 370 loss = 4.7663
epoch = 380 loss = 4.7574
epoch = 390 loss = 4.7488
epoch = 400 loss = 4.7406
epoch = 410 loss = 4.7327
epoch = 420 loss = 4.7251
epoch = 430 loss = 4.7178
epoch = 440 loss = 4.7108
epoch = 450 loss = 4.7040
epoch = 460 loss = 4.6976
epoch = 470 loss = 4.6913
epoch = 480 loss = 4.6854
epoch = 490 loss = 4.6796
| Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
3.11 Checking the results | # Final parameter values
print('W = ', W.data.numpy())
print('B = ', B.data.numpy())
# Check the loss
print(f'Initial state: loss: {history[0,1]:.4f}')
print(f'Final state: loss: {history[-1,1]:.4f}')
# Plot the learning curve (loss)
fig1 = plt.gcf()
plt.plot(history[:,0], history[:,1], 'b')
plt.xlabel('Number of iterations')
plt.ylabel('Loss')
plt.title('Learning curve (loss)')
plt.show()
plt.draw()
fig1.savefig('ex03-19.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Overlay the regression line on the scatter plot. | # Find the range of x (X_range)
X_max = X.max()
X_min = X.min()
X_range = np.array((X_min, X_max))
X_range = torch.from_numpy(X_range).float()
print(X_range)
# Compute the corresponding predicted y values
Y_range = pred(X_range)
print(Y_range.data)
# Draw the graph
fig1 = plt.gcf()
plt.scatter(X, Y, c='k', s=50)
plt.xlabel('$X$')
plt.ylabel('$Y$')
plt.plot(X_range.data, Y_range.data, lw=2, c='b')
plt.title('Height-weight regression line (after preprocessing)')
plt.show()
plt.draw()
fig1.savefig('ex03-20.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Drawing the regression line on the data before preprocessing | # Compute the y- and x-coordinate values
x_range = X_range + x.mean()
yp_range = Y_range + y.mean()
# Draw the graph
fig1 = plt.gcf()
plt.scatter(x, y, c='k', s=50)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.plot(x_range, yp_range.data, lw=2, c='b')
plt.title('Height-weight regression line (before preprocessing)')
plt.show()
plt.draw()
fig1.savefig('ex03-21.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
3.12 Using an optimizer and the step function | # Initialization
# Treat W and B as variables
W = torch.tensor(1.0, requires_grad=True).float()
B = torch.tensor(1.0, requires_grad=True).float()
# Number of iterations
num_epochs = 500
# Learning rate
lr = 0.001
# Specify SGD (stochastic gradient descent) as the optimizer
import torch.optim as optim
optimizer = optim.SGD([W, B], lr=lr)
# Initialize the array used for recording
history = np.zeros((0, 2))
# Loop processing
for epoch in range(num_epochs):
    # Compute the predictions
    Yp = pred(X)
    # Compute the loss
    loss = mse(Yp, Y)
    # Compute the gradients
    loss.backward()
    # Update the parameters
    optimizer.step()
    # Reset the gradient values
    optimizer.zero_grad()
    # Record the loss value
if (epoch %10 == 0):
item = np.array([epoch, loss.item()])
history = np.vstack((history, item))
print(f'epoch = {epoch} loss = {loss:.4f}')
# Final parameter values
print('W = ', W.data.numpy())
print('B = ', B.data.numpy())
# Check the loss
print(f'Initial state: loss: {history[0,1]:.4f}')
print(f'Final state: loss: {history[-1,1]:.4f}')
# Plot the learning curve (loss)
plt.plot(history[:,0], history[:,1], 'b')
plt.xlabel('Number of iterations')
plt.ylabel('Loss')
plt.title('Learning curve (loss)')
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Comparing with the result in section 3.7, it is exactly the same. In other words, what the step function does is equivalent to the following code: ```py3 with torch.no_grad(): # parameter update (the step function does this when using the framework) W -= lr * W.grad B -= lr * B.grad``` Tuning the optimizer | # Initialization
# Treat W and B as variables
W = torch.tensor(1.0, requires_grad=True).float()
B = torch.tensor(1.0, requires_grad=True).float()
# Number of iterations
num_epochs = 500
# Learning rate
lr = 0.001
# Specify SGD (stochastic gradient descent) as the optimizer
import torch.optim as optim
optimizer = optim.SGD([W, B], lr=lr, momentum=0.9)
# Initialize the array used for recording
history2 = np.zeros((0, 2))
# Loop processing
for epoch in range(num_epochs):
    # Compute the predictions
    Yp = pred(X)
    # Compute the loss
    loss = mse(Yp, Y)
    # Compute the gradients
    loss.backward()
    # Update the parameters
    optimizer.step()
    # Reset the gradient values
    optimizer.zero_grad()
    # Record the loss value
if (epoch %10 == 0):
item = np.array([epoch, loss.item()])
history2 = np.vstack((history2, item))
print(f'epoch = {epoch} loss = {loss:.4f}')
# Plot the learning curves (loss)
fig1 = plt.gcf()
plt.plot(history[:,0], history[:,1], 'b', label='default settings')
plt.plot(history2[:,0], history2[:,1], 'k', label='momentum=0.9')
plt.xlabel('Number of iterations')
plt.ylabel('Loss')
plt.legend()
plt.title('Learning curve (loss)')
plt.show()
plt.draw()
fig1.savefig('ex03-27.tif', format='tif', dpi=300) | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
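The momentum=0.9 run above keeps a running velocity per parameter. One step of SGD with momentum, in PyTorch's convention, is roughly the following sketch (illustrative values of our own, not torch internals):
```python
# One SGD-with-momentum step for a single scalar parameter
lr, mu = 0.001, 0.9
w, v = 1.0, 0.0                        # parameter and its velocity
for grad in [-19.0, -15.0, -12.0]:     # made-up gradient sequence
    v = mu * v + grad                  # accumulate velocity
    w = w - lr * v                     # parameter step
    print(w, v)
```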
Column: local optima | def f(x):
return x * (x+1) * (x+2) * (x-2)
x = np.arange(-3, 2.7, 0.05)
y = f(x)
plt.plot(x, y)
plt.axis('off')
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/ch03_first_ml.ipynb | ychoi-kr/pytorch_book_info |
Assignment 2: **Machine learning with tree based models** In this assignment, you will work on the **Titanic** dataset and use machine learning to create a model that predicts which passengers survived the **Titanic** shipwreck. --- About the dataset: --- * The column named `Survived` is the label and the remaining columns are features. * The features can be described as given below: Pclass = ticket class; SibSp = number of siblings/spouses aboard the Titanic; Parch = number of parents/children aboard the Titanic; Ticket = ticket number; Embarked = port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton). --- Instructions --- * Apply suitable data pre-processing techniques, if needed. * Implement a few classifiers to create your model and compare their performance metrics by plotting curves such as ROC-AUC and the confusion matrix. | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.impute import SimpleImputer
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split,cross_val_score,GridSearchCV
from sklearn.linear_model import LinearRegression,LogisticRegression
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.ensemble import RandomForestClassifier,VotingClassifier,BaggingClassifier,AdaBoostClassifier,GradientBoostingClassifier
from sklearn.metrics import accuracy_score,mean_squared_error as MSE,roc_auc_score,confusion_matrix,classification_report,roc_curve
from xgboost import XGBClassifier
import xgboost as xgb
SEED=1
titanic_data = pd.read_csv('https://raw.githubusercontent.com/shala2020/shala2020.github.io/master/Lecture_Materials/Assignments/MachineLearning/L2/titanic.csv')
titanic_data.head()
titanic_data.shape
print(titanic_data.isna().sum())
titanic_data.dtypes
titanic_data.describe()
titanic_data.info()
titanic_data = titanic_data.drop(['PassengerId','Name','Cabin','Ticket'], axis=1)
titanic_data
imp =SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit(titanic_data[['Age']])
titanic_data['Age'] = imp.transform(titanic_data[['Age']])
titanic_data['Embarked'].describe()
common_value='S'
titanic_data['Embarked'] = titanic_data['Embarked'].fillna(common_value)
titanic_data['Sex'] = titanic_data['Sex'].apply(lambda x: 0 if x=="male" else 1)
titanic_data
print(titanic_data.isna().sum())
ports = {"C":0,"Q":1,"S":2}
titanic_data['Embarked'] = titanic_data['Embarked'].map(ports)
titanic_data
titanic_data['Age']=titanic_data['Age'].astype(int)
titanic_data['Fare']=titanic_data['Fare'].astype(int)
titanic_data
pd.qcut(titanic_data['Fare'],4)
titanic_data.loc[ titanic_data['Age'] <=19, 'Age'] = 0
titanic_data.loc[(titanic_data['Age'] > 19 )& (titanic_data['Age'] <= 25), 'Age'] = 1
titanic_data.loc[(titanic_data['Age'] > 25) & (titanic_data['Age'] <= 29), 'Age'] = 2
titanic_data.loc[(titanic_data['Age'] > 29) & (titanic_data['Age'] <= 31), 'Age'] = 3
titanic_data.loc[(titanic_data['Age'] > 31) & (titanic_data['Age'] <= 40), 'Age'] = 4
titanic_data.loc[(titanic_data['Age'] > 40) & (titanic_data['Age'] <= 80), 'Age'] = 5
titanic_data['Age'].value_counts()
titanic_data.loc[ titanic_data['Fare'] <=7, 'Fare'] = 0
titanic_data.loc[(titanic_data['Fare'] > 7 )& (titanic_data['Fare'] <= 14), 'Fare'] = 1
titanic_data.loc[(titanic_data['Fare'] > 14) & (titanic_data['Fare'] <= 31), 'Fare'] = 2
titanic_data.loc[(titanic_data['Fare'] > 31) & (titanic_data['Fare'] <= 512), 'Fare'] = 3
titanic_data.loc[(titanic_data['Fare'] > 512), 'Fare'] = 4
titanic_data['Fare'].value_counts()
titanic_data['Relatives']=titanic_data['SibSp']+titanic_data['Parch']
titanic_data
titanic_data['Fare_Per_Person'] = titanic_data['Fare']/(titanic_data['Relatives']+1)
titanic_data['Fare_Per_Person'] = titanic_data['Fare_Per_Person'].astype(int)
titanic_data
titanic_data['Age_Class']= titanic_data['Age']* titanic_data['Pclass']
titanic_data
y=titanic_data['Survived']
X=titanic_data.drop(['Survived','Parch','Fare_Per_Person'],axis=1)
# Split data into 70% train and 30% test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.3, random_state= SEED)
# Instantiate individual classifiers
lr = LogisticRegression(random_state=SEED)
knn = KNN()
dt = DecisionTreeClassifier(random_state=SEED)
rf = RandomForestClassifier(n_estimators=300,random_state=SEED)
bc = BaggingClassifier(base_estimator=dt, n_estimators=300, n_jobs=-1,random_state=SEED,oob_score=True)
adb = AdaBoostClassifier(base_estimator=dt, n_estimators=100,random_state=SEED)
gb= GradientBoostingClassifier(n_estimators=300, max_depth=1, random_state=SEED,subsample=0.8,max_features=0.2)
# Use the directly imported XGBClassifier so the xgboost module (imported as xgb) is not shadowed
xgb_clf = XGBClassifier(learning_rate=0.01)
# Define a list called classifiers that contains the tuples (classifier_name, classifier)
classifiers = [('Logistic Regression', lr),('K Nearest Neighbours', knn),
('Classification Tree', dt),('Random Forest',rf),
('Bagging Classifier',bc),('Adaboost',adb),('Gradient Boosting',gb),('Xtreme GB',xgb_clf)]
import warnings
warnings.filterwarnings("ignore")
# Iterate over the defined list of tuples containing the classifiers
for clf_name, clf in classifiers:
#fit clf to the training set
clf.fit(X_train, y_train)
# Predict the labels of the test set
y_pred = clf.predict(X_test)
# Evaluate the accuracy of clf on the test set
print('{:s} : {:.3f}'.format(clf_name, accuracy_score(y_test, y_pred)))
print(confusion_matrix(y_test,y_pred))
y_pred_proba = clf.predict_proba(X_test)[:,1]
clf_roc_auc_score = roc_auc_score(y_test, y_pred_proba)
print('ROC AUC score: {:.2f}'.format(clf_roc_auc_score))
fpr, tpr, thresholds = roc_curve(y_test, y_pred_proba)
plt.plot([0, 1], [0, 1], 'k--')
    plt.plot(fpr, tpr, label=clf_name)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show();
print(classification_report(y_test, y_pred))
print("="*60)
oob_accuracy = bc.oob_score_
print('OOB accuracy of bagging classifier: {:.3f}'.format(oob_accuracy))
# Instantiate a VotingClassifier 'vc'
vc = VotingClassifier(estimators=classifiers)
# Fit 'vc' to the traing set and predict test set labels
vc.fit(X_train, y_train)
y_pred = vc.predict(X_test)
# Evaluate the test-set accuracy of 'vc'
print('Voting Classifier: {:.3f}'.format(accuracy_score(y_test, y_pred)))
classifiers = [('Logistic Regression', lr),('K Nearest Neighbours', knn),
('Classification Tree', dt),('Random Forest',rf),
('Bagging Classifier',bc),('Adaboost',adb),('Gradient Boosting',gb)]
lr = LogisticRegression(random_state=SEED)
knn = KNN()
dt = DecisionTreeClassifier(random_state=SEED)
rf = RandomForestClassifier(n_estimators=300,random_state=SEED)
bc = BaggingClassifier(base_estimator=dt, n_estimators=300, n_jobs=-1,random_state=SEED,oob_score=True)
adb = AdaBoostClassifier(base_estimator=dt, n_estimators=100,random_state=SEED)
gb= GradientBoostingClassifier(n_estimators=300, max_depth=1, random_state=SEED,subsample=0.8,max_features=0.2)
for clf_name, clf in classifiers:
scores = cross_val_score(clf, X_train, y_train, cv=10, scoring = "accuracy")
print('{:s} '.format(clf_name))
print("Scores:", scores)
print("Mean:", scores.mean())
print("Standard Deviation:", scores.std())
feature_imp = pd.Series(rf.feature_importances_,index=list(X.columns.values.tolist())).sort_values(ascending=False)
feature_imp
plt.figure(figsize=(10,10))
sns.barplot(x=feature_imp, y=feature_imp.index)
# Add labels to your graph
plt.xlabel('Feature Importance Score')
plt.ylabel('Features')
plt.title("Visualizing Important Features")
plt.legend()
plt.show()
param_grid = { "criterion" : ["gini", "entropy"], "min_samples_leaf" : [1, 5, 10, 25, 50, 70],
"min_samples_split" : [2, 4, 10, 12, 16, 18, 25, 35],
"n_estimators": [100, 400, 700, 1000, 1500]}
raf = RandomForestClassifier(random_state=SEED)
clfa = GridSearchCV(estimator=raf, param_grid=param_grid, n_jobs=-1)
clfa.fit(X_train, y_train)
clfa.best_params_
So the bagging classifier is the classifier with the highest accuracy (78%) and OOB score (80.4%) among all the classifiers, and it will be used to train our model. | _____no_output_____ | MIT | Assignment_06/Assignment_ML_L2_Sankalp_Jain_ipynb_txt.ipynb | Sankalp679/SHALA |
Read the CSV and Perform Basic Data Cleaning | df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
df.describe() | _____no_output_____ | MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
Select your features (columns) | # Set features. This will also be used as your x values.
target = df["koi_disposition"]
data = df.drop("koi_disposition", axis=1)
feature_names = data.columns
data.head() | _____no_output_____ | MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
Create a Train Test SplitUse `koi_disposition` for the y values | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42)
X_train.head() | _____no_output_____ | MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
Pre-processingScale the data using the MinMaxScaler and perform some feature selection | # Scale your data
from sklearn.preprocessing import MinMaxScaler
X_minmax = MinMaxScaler().fit(X_train)
X_train_minmax = X_minmax.transform(X_train)
X_test_minmax = X_minmax.transform(X_test)
from sklearn.svm import SVC
model = SVC(kernel='linear')
model.fit(X_train_minmax, y_train) | _____no_output_____ | MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
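The pre-processing heading above also mentions feature selection, which this notebook does not actually perform; one hedged sketch of how it could be added is a univariate filter such as `SelectKBest` on the scaled features (k = 20 is an arbitrary illustrative choice, not a tuned value):
```python
from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(score_func=f_classif, k=20)
X_train_selected = selector.fit_transform(X_train_minmax, y_train)
X_test_selected = selector.transform(X_test_minmax)
print(X_train_selected.shape, X_test_selected.shape)
```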
Train the Model | print(f"Training Data Score: {model.score(X_train_minmax, y_train)}")
print(f"Testing Data Score: {model.score(X_test_minmax, y_test)}") | Training Data Score: 0.8455082967766546
Testing Data Score: 0.8415331807780321
| MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
Hyperparameter TuningUse `GridSearchCV` to tune the model's parameters | # Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1, 5, 10, 50],
'gamma': [0.0001, 0.0005, 0.001, 0.005]}
grid = GridSearchCV(model, param_grid, verbose=3)
# Train the model with GridSearch
grid.fit(X_train_minmax, y_train)
print(grid.best_params_)
print(grid.best_score_)
# Model Accuracy (use the scaled test data, since the model was trained on scaled inputs)
print('Test Acc: %.3f' % model.score(X_test_minmax, y_test))
# Make prediction and save to variable for report.
predictions = grid.predict(X_test_minmax)
# Print Classification Report.
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions)) | precision recall f1-score support
CANDIDATE 0.81 0.67 0.73 411
CONFIRMED 0.76 0.85 0.80 484
FALSE POSITIVE 0.98 1.00 0.99 853
accuracy 0.88 1748
macro avg 0.85 0.84 0.84 1748
weighted avg 0.88 0.88 0.88 1748
| MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
Save the Model | # save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'models/bridgette_svm.sav'
joblib.dump(model, filename) | _____no_output_____ | MIT | exoplanet1.ipynb | bshub6/machine-learning-challenge |
Watershed Distance Transform for 3D Data --- Implementation of the papers [Deep Watershed Transform for Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/papers/Bai_Deep_Watershed_Transform_CVPR_2017_paper.pdf) and [Learn to segment single cells with deep distance estimator and deep cell detector](https://arxiv.org/abs/1803.10829) | import os
import errno
import datetime
import numpy as np
import deepcell | Using TensorFlow backend.
| Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Load the Training Data | # Download the data (saves to ~/.keras/datasets)
filename = 'mousebrain.npz'
test_size = 0.1 # fraction of data held out as the test set
seed = 0 # seed for random train-test split
(X_train, y_train), (X_test, y_test) = deepcell.datasets.mousebrain.load_data(filename, test_size=test_size, seed=seed)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape)) | Downloading data from https://deepcell-data.s3.amazonaws.com/nuclei/mousebrain.npz
1730158592/1730150850 [==============================] - 106s 0us/step
X.shape: (176, 15, 256, 256, 1)
y.shape: (176, 15, 256, 256, 1)
| Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Set up filepath constants | # the path to the data file is currently required for `train_model_()` functions
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Set up training parameters | from tensorflow.keras.optimizers import SGD
from deepcell.utils.train_utils import rate_scheduler
fgbg_model_name = 'conv_fgbg_3d_model'
conv_model_name = 'conv_watershed_3d_model'
n_epoch = 10 # Number of training epochs
norm_method = 'whole_image' # data normalization - `whole_image` for 3d conv
receptive_field = 61 # should be adjusted for the scale of the data
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
lr_sched = rate_scheduler(lr=0.01, decay=0.99)
# FC training settings
n_skips = 3 # number of skip-connections (only for FC training)
batch_size = 1 # FC training uses 1 image per batch
# Transformation settings
transform = 'watershed'
distance_bins = 4 # number of distance classes
erosion_width = 1 # erode edges, improves segmentation when cells are close
# 3D Settings
frames_per_batch = 3 | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
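For intuition about `distance_bins`: the watershed transform turns each label mask into a banded inner-distance image that the network then predicts as `distance_bins` classes. A rough standalone sketch of that binning, as our illustration rather than the deepcell implementation:
```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary mask containing one square "cell"
mask = np.zeros((16, 16), dtype=int)
mask[4:12, 4:12] = 1

distance = distance_transform_edt(mask)            # distance to the background
distance = distance / (distance.max() + 1e-8)      # normalize to [0, 1]
bins = np.linspace(0, 1, num=4)                    # 4 distance classes, as above
banded = np.digitize(distance, bins, right=True)   # per-pixel class index
print(np.unique(banded))
```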
First, create a foreground/background separation model Instantiate the fgbg model | from deepcell import model_zoo
fgbg_model = model_zoo.bn_feature_net_skip_3D(
receptive_field=receptive_field,
n_features=2, # segmentation mask (is_cell, is_not_cell)
n_frames=frames_per_batch,
n_skips=n_skips,
n_conv_filters=32,
n_dense_filters=128,
input_shape=tuple([frames_per_batch] + list(X_train.shape[2:])),
multires=False,
last_only=False,
norm_method='whole_image') | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Train the fgbg model | from deepcell.training import train_model_conv
fgbg_model = train_model_conv(
model=fgbg_model,
dataset=DATA_FILE, # full path to npz file
model_name=fgbg_model_name,
test_size=test_size,
seed=seed,
transform='fgbg',
optimizer=optimizer,
batch_size=batch_size,
frames_per_batch=frames_per_batch,
n_epoch=n_epoch,
model_dir=MODEL_DIR,
lr_sched=rate_scheduler(lr=0.01, decay=0.95),
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2)) | X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 3, 256, 256, 2)
Number of Classes: 2
Training on 1 GPUs
Epoch 1/10
197/198 [============================>.] - ETA: 0s - loss: 0.8965 - model_loss: 0.2152 - model_1_loss: 0.2171 - model_2_loss: 0.2121 - model_3_loss: 0.2163 - model_acc: 0.9120 - model_1_acc: 0.9044 - model_2_acc: 0.9113 - model_3_acc: 0.9067
Epoch 00001: val_loss improved from inf to 1.07529, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 133s 673ms/step - loss: 0.8945 - model_loss: 0.2147 - model_1_loss: 0.2166 - model_2_loss: 0.2116 - model_3_loss: 0.2158 - model_acc: 0.9122 - model_1_acc: 0.9046 - model_2_acc: 0.9115 - model_3_acc: 0.9069 - val_loss: 1.0753 - val_model_loss: 0.3084 - val_model_1_loss: 0.2489 - val_model_2_loss: 0.2391 - val_model_3_loss: 0.2430 - val_model_acc: 0.9378 - val_model_1_acc: 0.9172 - val_model_2_acc: 0.9246 - val_model_3_acc: 0.9233
Epoch 2/10
197/198 [============================>.] - ETA: 0s - loss: 0.7316 - model_loss: 0.1725 - model_1_loss: 0.1739 - model_2_loss: 0.1755 - model_3_loss: 0.1739 - model_acc: 0.9246 - model_1_acc: 0.9222 - model_2_acc: 0.9219 - model_3_acc: 0.9232
Epoch 00002: val_loss improved from 1.07529 to 1.02578, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 108s 547ms/step - loss: 0.7372 - model_loss: 0.1742 - model_1_loss: 0.1755 - model_2_loss: 0.1767 - model_3_loss: 0.1750 - model_acc: 0.9246 - model_1_acc: 0.9222 - model_2_acc: 0.9219 - model_3_acc: 0.9232 - val_loss: 1.0258 - val_model_loss: 0.2438 - val_model_1_loss: 0.2449 - val_model_2_loss: 0.2461 - val_model_3_loss: 0.2551 - val_model_acc: 0.9296 - val_model_1_acc: 0.9310 - val_model_2_acc: 0.9389 - val_model_3_acc: 0.9429
Epoch 3/10
197/198 [============================>.] - ETA: 0s - loss: 0.6888 - model_loss: 0.1632 - model_1_loss: 0.1636 - model_2_loss: 0.1634 - model_3_loss: 0.1626 - model_acc: 0.9282 - model_1_acc: 0.9265 - model_2_acc: 0.9277 - model_3_acc: 0.9277
Epoch 00003: val_loss improved from 1.02578 to 0.99577, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 108s 548ms/step - loss: 0.6880 - model_loss: 0.1630 - model_1_loss: 0.1634 - model_2_loss: 0.1633 - model_3_loss: 0.1624 - model_acc: 0.9282 - model_1_acc: 0.9264 - model_2_acc: 0.9276 - model_3_acc: 0.9277 - val_loss: 0.9958 - val_model_loss: 0.2410 - val_model_1_loss: 0.2488 - val_model_2_loss: 0.2339 - val_model_3_loss: 0.2361 - val_model_acc: 0.9151 - val_model_1_acc: 0.9145 - val_model_2_acc: 0.9202 - val_model_3_acc: 0.9169
Epoch 4/10
197/198 [============================>.] - ETA: 0s - loss: 0.6923 - model_loss: 0.1636 - model_1_loss: 0.1646 - model_2_loss: 0.1648 - model_3_loss: 0.1634 - model_acc: 0.9286 - model_1_acc: 0.9271 - model_2_acc: 0.9284 - model_3_acc: 0.9292
Epoch 00004: val_loss improved from 0.99577 to 0.96332, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 108s 547ms/step - loss: 0.6920 - model_loss: 0.1635 - model_1_loss: 0.1646 - model_2_loss: 0.1648 - model_3_loss: 0.1633 - model_acc: 0.9287 - model_1_acc: 0.9271 - model_2_acc: 0.9285 - model_3_acc: 0.9293 - val_loss: 0.9633 - val_model_loss: 0.2329 - val_model_1_loss: 0.2375 - val_model_2_loss: 0.2301 - val_model_3_loss: 0.2270 - val_model_acc: 0.8982 - val_model_1_acc: 0.8952 - val_model_2_acc: 0.8998 - val_model_3_acc: 0.9054
Epoch 5/10
197/198 [============================>.] - ETA: 0s - loss: 0.6872 - model_loss: 0.1633 - model_1_loss: 0.1625 - model_2_loss: 0.1638 - model_3_loss: 0.1618 - model_acc: 0.9274 - model_1_acc: 0.9267 - model_2_acc: 0.9262 - model_3_acc: 0.9282
Epoch 00005: val_loss improved from 0.96332 to 0.96122, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 108s 546ms/step - loss: 0.6896 - model_loss: 0.1638 - model_1_loss: 0.1631 - model_2_loss: 0.1645 - model_3_loss: 0.1624 - model_acc: 0.9273 - model_1_acc: 0.9265 - model_2_acc: 0.9260 - model_3_acc: 0.9281 - val_loss: 0.9612 - val_model_loss: 0.2280 - val_model_1_loss: 0.2326 - val_model_2_loss: 0.2325 - val_model_3_loss: 0.2323 - val_model_acc: 0.9260 - val_model_1_acc: 0.9179 - val_model_2_acc: 0.9242 - val_model_3_acc: 0.9140
Epoch 6/10
197/198 [============================>.] - ETA: 0s - loss: 0.6726 - model_loss: 0.1590 - model_1_loss: 0.1591 - model_2_loss: 0.1603 - model_3_loss: 0.1583 - model_acc: 0.9290 - model_1_acc: 0.9277 - model_2_acc: 0.9273 - model_3_acc: 0.9286
Epoch 00006: val_loss did not improve from 0.96122
198/198 [==============================] - 108s 546ms/step - loss: 0.6717 - model_loss: 0.1588 - model_1_loss: 0.1589 - model_2_loss: 0.1601 - model_3_loss: 0.1581 - model_acc: 0.9290 - model_1_acc: 0.9277 - model_2_acc: 0.9274 - model_3_acc: 0.9286 - val_loss: 1.0302 - val_model_loss: 0.2523 - val_model_1_loss: 0.2546 - val_model_2_loss: 0.2410 - val_model_3_loss: 0.2465 - val_model_acc: 0.8991 - val_model_1_acc: 0.8924 - val_model_2_acc: 0.9154 - val_model_3_acc: 0.9130
Epoch 7/10
197/198 [============================>.] - ETA: 0s - loss: 0.6620 - model_loss: 0.1565 - model_1_loss: 0.1566 - model_2_loss: 0.1574 - model_3_loss: 0.1557 - model_acc: 0.9301 - model_1_acc: 0.9281 - model_2_acc: 0.9290 - model_3_acc: 0.9297
Epoch 00007: val_loss improved from 0.96122 to 0.92732, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 108s 547ms/step - loss: 0.6616 - model_loss: 0.1564 - model_1_loss: 0.1565 - model_2_loss: 0.1573 - model_3_loss: 0.1556 - model_acc: 0.9300 - model_1_acc: 0.9281 - model_2_acc: 0.9290 - model_3_acc: 0.9296 - val_loss: 0.9273 - val_model_loss: 0.2280 - val_model_1_loss: 0.2261 - val_model_2_loss: 0.2177 - val_model_3_loss: 0.2197 - val_model_acc: 0.9086 - val_model_1_acc: 0.9049 - val_model_2_acc: 0.9144 - val_model_3_acc: 0.9117
Epoch 8/10
197/198 [============================>.] - ETA: 0s - loss: 0.6602 - model_loss: 0.1563 - model_1_loss: 0.1562 - model_2_loss: 0.1564 - model_3_loss: 0.1555 - model_acc: 0.9312 - model_1_acc: 0.9294 - model_2_acc: 0.9296 - model_3_acc: 0.9298
Epoch 00008: val_loss did not improve from 0.92732
198/198 [==============================] - 108s 545ms/step - loss: 0.6601 - model_loss: 0.1563 - model_1_loss: 0.1562 - model_2_loss: 0.1564 - model_3_loss: 0.1554 - model_acc: 0.9313 - model_1_acc: 0.9295 - model_2_acc: 0.9297 - model_3_acc: 0.9299 - val_loss: 0.9669 - val_model_loss: 0.2298 - val_model_1_loss: 0.2335 - val_model_2_loss: 0.2339 - val_model_3_loss: 0.2338 - val_model_acc: 0.9224 - val_model_1_acc: 0.9255 - val_model_2_acc: 0.9318 - val_model_3_acc: 0.9229
Epoch 9/10
197/198 [============================>.] - ETA: 0s - loss: 0.6534 - model_loss: 0.1554 - model_1_loss: 0.1542 - model_2_loss: 0.1548 - model_3_loss: 0.1532 - model_acc: 0.9312 - model_1_acc: 0.9312 - model_2_acc: 0.9315 - model_3_acc: 0.9314
Epoch 00009: val_loss improved from 0.92732 to 0.88550, saving model to /data/models/conv_fgbg_3d_model.h5
198/198 [==============================] - 108s 547ms/step - loss: 0.6536 - model_loss: 0.1554 - model_1_loss: 0.1542 - model_2_loss: 0.1549 - model_3_loss: 0.1533 - model_acc: 0.9310 - model_1_acc: 0.9310 - model_2_acc: 0.9313 - model_3_acc: 0.9312 - val_loss: 0.8855 - val_model_loss: 0.2115 - val_model_1_loss: 0.2154 - val_model_2_loss: 0.2107 - val_model_3_loss: 0.2121 - val_model_acc: 0.9330 - val_model_1_acc: 0.9328 - val_model_2_acc: 0.9316 - val_model_3_acc: 0.9308
Epoch 10/10
197/198 [============================>.] - ETA: 0s - loss: 0.6626 - model_loss: 0.1569 - model_1_loss: 0.1567 - model_2_loss: 0.1572 - model_3_loss: 0.1560 - model_acc: 0.9306 - model_1_acc: 0.9295 - model_2_acc: 0.9292 - model_3_acc: 0.9297
Epoch 00010: val_loss did not improve from 0.88550
198/198 [==============================] - 108s 545ms/step - loss: 0.6622 - model_loss: 0.1568 - model_1_loss: 0.1566 - model_2_loss: 0.1571 - model_3_loss: 0.1559 - model_acc: 0.9304 - model_1_acc: 0.9293 - model_2_acc: 0.9290 - model_3_acc: 0.9296 - val_loss: 0.9433 - val_model_loss: 0.2337 - val_model_1_loss: 0.2267 - val_model_2_loss: 0.2240 - val_model_3_loss: 0.2230 - val_model_acc: 0.9096 - val_model_1_acc: 0.9157 - val_model_2_acc: 0.9234 - val_model_3_acc: 0.9264
| Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Next, Create a model for the watershed energy transform Instantiate the distance transform model | from deepcell import model_zoo
watershed_model = model_zoo.bn_feature_net_skip_3D(
fgbg_model=fgbg_model,
receptive_field=receptive_field,
n_skips=n_skips,
n_features=distance_bins,
n_frames=frames_per_batch,
n_conv_filters=32,
n_dense_filters=128,
multires=False,
last_only=False,
input_shape=tuple([frames_per_batch] + list(X_train.shape[2:])),
norm_method='whole_image') | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Train the model | from deepcell.training import train_model_conv
watershed_model = train_model_conv(
model=watershed_model,
dataset=DATA_FILE, # full path to npz file
model_name=conv_model_name,
test_size=test_size,
seed=seed,
transform=transform,
distance_bins=distance_bins,
erosion_width=erosion_width,
optimizer=optimizer,
batch_size=batch_size,
n_epoch=n_epoch,
frames_per_batch=frames_per_batch,
model_dir=MODEL_DIR,
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2)) | X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 3, 256, 256, 4)
Number of Classes: 4
Training on 1 GPUs
Epoch 1/10
197/198 [============================>.] - ETA: 0s - loss: 3.8927 - model_5_loss: 0.9546 - model_6_loss: 0.9520 - model_7_loss: 0.9633 - model_8_loss: 0.9501 - model_5_acc: 0.8515 - model_6_acc: 0.8609 - model_7_acc: 0.8556 - model_8_acc: 0.8664
Epoch 00001: val_loss improved from inf to 3.58243, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 171s 862ms/step - loss: 3.8903 - model_5_loss: 0.9541 - model_6_loss: 0.9513 - model_7_loss: 0.9626 - model_8_loss: 0.9497 - model_5_acc: 0.8516 - model_6_acc: 0.8611 - model_7_acc: 0.8557 - model_8_acc: 0.8664 - val_loss: 3.5824 - val_model_5_loss: 0.9999 - val_model_6_loss: 0.8246 - val_model_7_loss: 0.8505 - val_model_8_loss: 0.8347 - val_model_5_acc: 0.8552 - val_model_6_acc: 0.8806 - val_model_7_acc: 0.8899 - val_model_8_acc: 0.8820
Epoch 2/10
197/198 [============================>.] - ETA: 0s - loss: 3.2688 - model_5_loss: 0.7954 - model_6_loss: 0.8017 - model_7_loss: 0.8018 - model_8_loss: 0.7971 - model_5_acc: 0.8935 - model_6_acc: 0.8904 - model_7_acc: 0.8868 - model_8_acc: 0.8929
Epoch 00002: val_loss improved from 3.58243 to 3.26194, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 144s 727ms/step - loss: 3.2674 - model_5_loss: 0.7951 - model_6_loss: 0.8014 - model_7_loss: 0.8014 - model_8_loss: 0.7967 - model_5_acc: 0.8936 - model_6_acc: 0.8904 - model_7_acc: 0.8869 - model_8_acc: 0.8930 - val_loss: 3.2619 - val_model_5_loss: 0.8245 - val_model_6_loss: 0.8164 - val_model_7_loss: 0.7766 - val_model_8_loss: 0.7716 - val_model_5_acc: 0.9008 - val_model_6_acc: 0.9094 - val_model_7_acc: 0.9056 - val_model_8_acc: 0.9073
Epoch 3/10
197/198 [============================>.] - ETA: 0s - loss: 3.2074 - model_5_loss: 0.7824 - model_6_loss: 0.7851 - model_7_loss: 0.7882 - model_8_loss: 0.7788 - model_5_acc: 0.8969 - model_6_acc: 0.8971 - model_7_acc: 0.8925 - model_8_acc: 0.8956
Epoch 00003: val_loss improved from 3.26194 to 3.08470, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 143s 725ms/step - loss: 3.2081 - model_5_loss: 0.7827 - model_6_loss: 0.7852 - model_7_loss: 0.7883 - model_8_loss: 0.7790 - model_5_acc: 0.8967 - model_6_acc: 0.8970 - model_7_acc: 0.8924 - model_8_acc: 0.8955 - val_loss: 3.0847 - val_model_5_loss: 0.7503 - val_model_6_loss: 0.7541 - val_model_7_loss: 0.7527 - val_model_8_loss: 0.7546 - val_model_5_acc: 0.9105 - val_model_6_acc: 0.9094 - val_model_7_acc: 0.8988 - val_model_8_acc: 0.9046
Epoch 4/10
197/198 [============================>.] - ETA: 0s - loss: 3.2189 - model_5_loss: 0.7860 - model_6_loss: 0.8026 - model_7_loss: 0.7803 - model_8_loss: 0.7770 - model_5_acc: 0.8933 - model_6_acc: 0.8874 - model_7_acc: 0.8916 - model_8_acc: 0.8949
Epoch 00004: val_loss did not improve from 3.08470
198/198 [==============================] - 143s 723ms/step - loss: 3.2174 - model_5_loss: 0.7857 - model_6_loss: 0.8021 - model_7_loss: 0.7800 - model_8_loss: 0.7767 - model_5_acc: 0.8933 - model_6_acc: 0.8874 - model_7_acc: 0.8916 - model_8_acc: 0.8950 - val_loss: 3.0984 - val_model_5_loss: 0.7471 - val_model_6_loss: 0.7796 - val_model_7_loss: 0.7491 - val_model_8_loss: 0.7495 - val_model_5_acc: 0.9032 - val_model_6_acc: 0.9134 - val_model_7_acc: 0.8882 - val_model_8_acc: 0.9122
Epoch 5/10
197/198 [============================>.] - ETA: 0s - loss: 3.1495 - model_5_loss: 0.7716 - model_6_loss: 0.7765 - model_7_loss: 0.7652 - model_8_loss: 0.7632 - model_5_acc: 0.9012 - model_6_acc: 0.8977 - model_7_acc: 0.8981 - model_8_acc: 0.9031
Epoch 00005: val_loss improved from 3.08470 to 3.01958, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 144s 726ms/step - loss: 3.1469 - model_5_loss: 0.7710 - model_6_loss: 0.7758 - model_7_loss: 0.7644 - model_8_loss: 0.7626 - model_5_acc: 0.9012 - model_6_acc: 0.8977 - model_7_acc: 0.8981 - model_8_acc: 0.9031 - val_loss: 3.0196 - val_model_5_loss: 0.7375 - val_model_6_loss: 0.7557 - val_model_7_loss: 0.7238 - val_model_8_loss: 0.7295 - val_model_5_acc: 0.8905 - val_model_6_acc: 0.9011 - val_model_7_acc: 0.8767 - val_model_8_acc: 0.8719
Epoch 6/10
197/198 [============================>.] - ETA: 0s - loss: 3.0814 - model_5_loss: 0.7565 - model_6_loss: 0.7578 - model_7_loss: 0.7482 - model_8_loss: 0.7457 - model_5_acc: 0.8979 - model_6_acc: 0.8955 - model_7_acc: 0.8965 - model_8_acc: 0.9008
Epoch 00006: val_loss improved from 3.01958 to 2.92890, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 144s 725ms/step - loss: 3.0772 - model_5_loss: 0.7555 - model_6_loss: 0.7567 - model_7_loss: 0.7472 - model_8_loss: 0.7447 - model_5_acc: 0.8981 - model_6_acc: 0.8957 - model_7_acc: 0.8967 - model_8_acc: 0.9009 - val_loss: 2.9289 - val_model_5_loss: 0.7209 - val_model_6_loss: 0.7166 - val_model_7_loss: 0.7114 - val_model_8_loss: 0.7069 - val_model_5_acc: 0.9053 - val_model_6_acc: 0.9081 - val_model_7_acc: 0.8920 - val_model_8_acc: 0.9041
Epoch 7/10
197/198 [============================>.] - ETA: 0s - loss: 3.0843 - model_5_loss: 0.7582 - model_6_loss: 0.7587 - model_7_loss: 0.7492 - model_8_loss: 0.7450 - model_5_acc: 0.9002 - model_6_acc: 0.8977 - model_7_acc: 0.8987 - model_8_acc: 0.9020
Epoch 00007: val_loss did not improve from 2.92890
198/198 [==============================] - 143s 722ms/step - loss: 3.0841 - model_5_loss: 0.7582 - model_6_loss: 0.7586 - model_7_loss: 0.7492 - model_8_loss: 0.7449 - model_5_acc: 0.9003 - model_6_acc: 0.8979 - model_7_acc: 0.8989 - model_8_acc: 0.9022 - val_loss: 3.0507 - val_model_5_loss: 0.7429 - val_model_6_loss: 0.7490 - val_model_7_loss: 0.7422 - val_model_8_loss: 0.7434 - val_model_5_acc: 0.8986 - val_model_6_acc: 0.8998 - val_model_7_acc: 0.8944 - val_model_8_acc: 0.9109
Epoch 8/10
197/198 [============================>.] - ETA: 0s - loss: 3.0380 - model_5_loss: 0.7474 - model_6_loss: 0.7455 - model_7_loss: 0.7375 - model_8_loss: 0.7344 - model_5_acc: 0.8997 - model_6_acc: 0.8984 - model_7_acc: 0.8992 - model_8_acc: 0.9030
Epoch 00008: val_loss did not improve from 2.92890
198/198 [==============================] - 143s 724ms/step - loss: 3.0375 - model_5_loss: 0.7472 - model_6_loss: 0.7455 - model_7_loss: 0.7374 - model_8_loss: 0.7342 - model_5_acc: 0.8996 - model_6_acc: 0.8983 - model_7_acc: 0.8991 - model_8_acc: 0.9030 - val_loss: 3.0694 - val_model_5_loss: 0.7353 - val_model_6_loss: 0.7731 - val_model_7_loss: 0.7532 - val_model_8_loss: 0.7347 - val_model_5_acc: 0.8916 - val_model_6_acc: 0.8562 - val_model_7_acc: 0.8855 - val_model_8_acc: 0.8808
Epoch 9/10
197/198 [============================>.] - ETA: 0s - loss: 3.0477 - model_5_loss: 0.7486 - model_6_loss: 0.7477 - model_7_loss: 0.7391 - model_8_loss: 0.7390 - model_5_acc: 0.9000 - model_6_acc: 0.8975 - model_7_acc: 0.8999 - model_8_acc: 0.9026
Epoch 00009: val_loss improved from 2.92890 to 2.91570, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 144s 726ms/step - loss: 3.0471 - model_5_loss: 0.7485 - model_6_loss: 0.7474 - model_7_loss: 0.7390 - model_8_loss: 0.7390 - model_5_acc: 0.9001 - model_6_acc: 0.8975 - model_7_acc: 0.9000 - model_8_acc: 0.9027 - val_loss: 2.9157 - val_model_5_loss: 0.7177 - val_model_6_loss: 0.7192 - val_model_7_loss: 0.7003 - val_model_8_loss: 0.7052 - val_model_5_acc: 0.8953 - val_model_6_acc: 0.9110 - val_model_7_acc: 0.9087 - val_model_8_acc: 0.8886
Epoch 10/10
197/198 [============================>.] - ETA: 0s - loss: 3.0652 - model_5_loss: 0.7553 - model_6_loss: 0.7531 - model_7_loss: 0.7425 - model_8_loss: 0.7411 - model_5_acc: 0.9027 - model_6_acc: 0.9017 - model_7_acc: 0.9013 - model_8_acc: 0.9044
Epoch 00010: val_loss improved from 2.91570 to 2.90629, saving model to /data/models/conv_watershed_3d_model.h5
198/198 [==============================] - 144s 726ms/step - loss: 3.0634 - model_5_loss: 0.7548 - model_6_loss: 0.7526 - model_7_loss: 0.7420 - model_8_loss: 0.7407 - model_5_acc: 0.9026 - model_6_acc: 0.9016 - model_7_acc: 0.9012 - model_8_acc: 0.9043 - val_loss: 2.9063 - val_model_5_loss: 0.7080 - val_model_6_loss: 0.7071 - val_model_7_loss: 0.7149 - val_model_8_loss: 0.7030 - val_model_5_acc: 0.9048 - val_model_6_acc: 0.9090 - val_model_7_acc: 0.9118 - val_model_8_acc: 0.9021
| Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Run the model. The model was trained on only `frames_per_batch` frames at a time. To run it on a full set of frames, a new model must be instantiated, which will load the trained weights. Save weights of trained models | fgbg_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(fgbg_model_name))
fgbg_model.save_weights(fgbg_weights_file)
watershed_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(conv_model_name))
watershed_model.save_weights(watershed_weights_file) | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Initialize the new models | from deepcell import model_zoo
# All training parameters should match except for the `input_shape`
run_fgbg_model = model_zoo.bn_feature_net_skip_3D(
receptive_field=receptive_field,
n_features=2,
n_frames=frames_per_batch,
n_skips=n_skips,
n_conv_filters=32,
n_dense_filters=128,
input_shape=tuple(X_test.shape[1:]),
multires=False,
last_only=False,
norm_method=norm_method)
run_fgbg_model.load_weights(fgbg_weights_file)
run_watershed_model = model_zoo.bn_feature_net_skip_3D(
fgbg_model=run_fgbg_model,
receptive_field=receptive_field,
n_skips=n_skips,
n_features=distance_bins,
n_frames=frames_per_batch,
n_conv_filters=32,
n_dense_filters=128,
multires=False,
last_only=False,
input_shape=tuple(X_test.shape[1:]),
norm_method=norm_method)
run_watershed_model.load_weights(watershed_weights_file)
# too many batches at once causes OOM
X_test, y_test = X_test[:4], y_test[:4]
print(X_test.shape) | (4, 15, 256, 256, 1)
| Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Make predictions on test data | test_images = run_watershed_model.predict(X_test)[-1]
test_images_fgbg = run_fgbg_model.predict(X_test)[-1]
print('watershed transform shape:', test_images.shape)
print('segmentation mask shape:', test_images_fgbg.shape) | watershed transform shape: (4, 15, 256, 256, 4)
segmentation mask shape: (4, 15, 256, 256, 2)
| Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
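The last axis of the watershed prediction holds the per-pixel class scores over the `distance_bins` levels (assuming, as is typical for these deepcell feature nets, that the model ends in a softmax — an assumption, not confirmed here). A quick sanity check:

```python
import numpy as np

# if the head is a softmax, the scores across the 4 distance bins sum to ~1 per pixel
print(np.allclose(test_images.sum(axis=-1), 1.0, atol=1e-3))
```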
Watershed post-processing | argmax_images = []
for i in range(test_images.shape[0]):
max_image = np.argmax(test_images[i], axis=-1)
argmax_images.append(max_image)
argmax_images = np.array(argmax_images)
argmax_images = np.expand_dims(argmax_images, axis=-1)
print('watershed argmax shape:', argmax_images.shape)
# threshold the foreground/background
# and remove background from the watershed transform
threshold = 0.5
fg_thresh = test_images_fgbg[..., 1] > threshold
fg_thresh = np.expand_dims(fg_thresh, axis=-1)
argmax_images_post_fgbg = argmax_images * fg_thresh
# Apply watershed method with the distance transform as seed
from skimage.measure import label
from skimage.morphology import watershed
from skimage.feature import peak_local_max
watershed_images = []
for i in range(argmax_images_post_fgbg.shape[0]):
image = fg_thresh[i, ..., 0]
distance = argmax_images_post_fgbg[i, ..., 0]
local_maxi = peak_local_max(
test_images[i, ..., -1],
min_distance=10,
threshold_abs=0.05,
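# indices=False returns a boolean peak mask rather than coordinates (this flag was removed in newer scikit-image releases)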
indices=False,
labels=image,
exclude_border=False)
markers = label(local_maxi)
segments = watershed(-distance, markers, mask=image)
watershed_images.append(segments)
watershed_images = np.array(watershed_images)
watershed_images = np.expand_dims(watershed_images, axis=-1) | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
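As a quick sanity check on the segmentation (a sketch; it assumes label `0` is background, hence the `- 1`):

```python
# number of detected instances per frame for each movie in the batch
for b in range(watershed_images.shape[0]):
    counts = [len(np.unique(watershed_images[b, f, ..., 0])) - 1
              for f in range(watershed_images.shape[1])]
    print('batch', b, 'cells per frame:', counts)
```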
Plot the results | import matplotlib.pyplot as plt
import matplotlib.animation as animation
index = np.random.randint(low=0, high=watershed_images.shape[0])
frame = np.random.randint(low=0, high=watershed_images.shape[1])
print('Image:', index)
print('Frame:', frame)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(15, 15), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(X_test[index, frame, ..., 0])
ax[0].set_title('Source Image')
ax[1].imshow(test_images_fgbg[index, frame, ..., 1])
ax[1].set_title('FGBG Prediction')
ax[2].imshow(fg_thresh[index, frame, ..., 0], cmap='jet')
ax[2].set_title('FGBG {}% Threshold'.format(int(threshold * 100)))
ax[3].imshow(argmax_images[index, frame, ..., 0], cmap='jet')
ax[3].set_title('Distance Transform')
ax[4].imshow(argmax_images_post_fgbg[index, frame, ..., 0], cmap='jet')
ax[4].set_title('Distance Transform w/o Background')
ax[5].imshow(watershed_images[index, frame, ..., 0], cmap='jet')
ax[5].set_title('Watershed Segmentation')
fig.tight_layout()
plt.show()
# Can also export as a video
# But this does not render well on GitHub
from IPython.display import HTML
from deepcell.utils.plot_utils import get_js_video
HTML(get_js_video(watershed_images[..., [-1]], batch=index)) | _____no_output_____ | Apache-2.0 | scripts/watershed/Watershed Transform 3D Fully Convolutional.ipynb | esgomezm/deepcell-tf |
Tutorial 6 - Handle Missing Data: the replace function | import pandas as pd
import numpy as np
df = pd.read_csv('sample_data_tutorial_06.csv')
df
newdf = df.replace(-99999,np.NaN)
newdf
newdf = df.replace([-99999, -88888],np.NaN)
newdf
newdf = df.replace({
'temperature': -99999,
'windspeed': [-99999, -88888],
'event': 'No event'
}, np.NaN)
newdf
# We can build a single map of all the replacements we want to make:
newdf = df.replace({
-99999: np.NaN,
-88888: np.NaN,
'No event': 'Sunny'
})
newdf
# Importing another csv with some units that need to be cleaned up!
df = pd.read_csv('sample_data_tutorial_06a.csv')
df
# ร necessรกrio usar o 'regex' (regular expression)
# No caso abaixo estamos substituindo todas as letras (de A a Z - maiรบscula e minรบscula) por vazio (='')
newdf = df.replace('[A-Za-z]', '', regex=True)
newdf
# Note that the previous call removed what we asked for, but it also wiped out the entire 'event' column
# To apply the replacements only to specific columns, we need to pass a dictionary:
newdf = df.replace({
'temperature': '[A-Za-z]',
'windspeed': '[A-Za-z]'
}, '', regex=True)
newdf | _____no_output_____ | MIT | Python Pandas Tutorials 06.ipynb | HenriqueArgentieri/Tutoriais |
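Once the sentinels have been replaced with `NaN`, a typical next step is to decide how to handle the missing values — fill them or drop them. A short illustrative follow-up (the column names come from the sample CSV above; mean-filling is just one of several options):

```python
# the cleaned columns still hold strings ('32', '6'), so convert to numbers first;
# errors='coerce' turns any leftover empty strings into NaN
newdf['temperature'] = pd.to_numeric(newdf['temperature'], errors='coerce')
newdf['windspeed'] = pd.to_numeric(newdf['windspeed'], errors='coerce')

filled = newdf.fillna({'temperature': newdf['temperature'].mean()})
dropped = newdf.dropna()  # or simply discard incomplete rows
```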
PARSE SINGLE ABSTRACT WITH NON-INDEXED AUTHORS LIST | abstract = text_dict['P123']
#abstract
abstract_info = re.findall(r"\w+[A-Z\w+]\w+.*(?=TNF\stherapy.*)", abstract)
abstract_head = str(abstract_info[0])
abstract_head
authors_info = re.findall(r"\w+[^A-Z\d)\W]\s\w.*(?=TNF\stherapy.*)", abstract)
authors = str(authors_info[0])
authors
author_name = re.findall(r"\w.+(?=Spherix)", authors)
author_name
author_list = [x for x in author_name[0].split(',')]
author_list
author_location = re.findall(r"Spherix[^*]+", authors)
author_location
pattern = re.compile(r"\w+[^A-Z\d\W]\s\w.*")
abstract_title = [re.sub(pattern, "", i) for i in abstract_info]
abstract_title
abstract_text = re.findall(r"(TNF\stherapy.*)", abstract)
#abstract_text
import pandas as pd
df = pd.DataFrame({"About the person": 'Name (incl. titles if any mentioned)',
"Unnamed: 1": 'Affiliation(s) Name(s)',
"Unnamed: 2": "Person's Location",
"About the session/topic": "Session Name",
"Unnamed: 4": 'Topic Title',
"Unnamed: 5": 'Presentation Abstract'}, index=[0])
df1 = pd.DataFrame({"About the person": author_list[2],
"Unnamed: 1": author_location,
"Unnamed: 2": "",
"About the session/topic": "P123",
"Unnamed: 4": abstract_title,
"Unnamed: 5": abstract_text})
df1 | _____no_output_____ | MIT | file_parse.ipynb | ivanlohvyn/beetroot_parse_pdf |
PARSE SINGLE ABSTRACT WITH INDEXED AUTHORS LIST | abstract = text_dict['P120']
#abstract
abstract_info = re.findall(r"\w+[A-Z\w+]\w+.*(?=Introduction.*)", abstract)
abstract_head = str(abstract_info[0])
abstract_head
authors_info = re.findall(r"\w+[^A-Z\d)\W]\s\w.*(?=Introduction.*)", abstract)
authors = str(authors_info[0])
authors
author_name = re.findall(r"(\w+.\s[a-zA-Z\s-]+\d)", authors)  # each author name ends with its affiliation index digit
author_name
author_location = re.findall(r"(\d\w+\W[a-zA-Z-'\s&,\s.,(A-Z)]+)", authors)  # each location starts with its index digit
author_location
import string
from collections import namedtuple
DigitGroup = namedtuple('DigitGroup', ['keys', 'values'])
def combine(all_keys, all_values):
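# group affiliation strings (keys) and author names (values) that share the same index digit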
by_digit = {}
for word in all_keys:
for char in word:
if char in string.digits:
group = by_digit.get(char)
if not group:
group = DigitGroup(word, [])
by_digit[char] = group
break
for word in all_values:
for char in word:
if char in string.digits:
group = by_digit[char]
group.values.append(word)
break
return dict(by_digit.values())
combined_dict = combine(author_location, author_name)
combined_dict
list_of_dict = [{k: v} for k, v in combined_dict.items()]
list_of_dict
import itertools
i = list_of_dict[6]
get_key = i.keys()
names = []
for key, value in (
itertools.chain.from_iterable(
[itertools.product((k, ), v) for k, v in i.items()])):
names.append(value)
names
name = [re.sub(r'[0-9]', '', i) for i in names]
print(name)
location = [re.sub(r'[0-9]', '', i) for i in get_key]
print(location)
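# A more direct equivalent of the itertools product/chain dance above
# (assumes each dict in list_of_dict maps one location string to a list of author names):
location_raw, names_raw = next(iter(i.items()))
print([re.sub(r'[0-9]', '', n) for n in names_raw])  # same result as `name`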
pattern = re.compile(r"\w+[^A-Z\d\W]\s\w.*")
abstract_title = [re.sub(pattern, "", i) for i in abstract_info]
abstract_title
abstract_text = re.findall(r"(Introduction.*)", abstract)
#abstract_text
import pandas as pd
df2 = pd.DataFrame({"About the person": name[0],
"Unnamed: 1": location,
"Unnamed: 2": "",
"About the session/topic": "P120",
"Unnamed: 4": abstract_title,
"Unnamed: 5": abstract_text})
df2
df = pd.read_excel('/home/azashiro/Desktop/Datas.xlsx')
df73 = df.append(df2, ignore_index=True)  # append the new row to the data just read back in (DataFrame.append was removed in pandas 2.x; pd.concat([df, df2], ignore_index=True) is the modern equivalent)
df73
'''Export the pandas DataFrame to an Excel file'''
excel_file = df73.to_excel("/home/azashiro/Desktop/beetroot_task/Datas.xlsx", index=False)
excel_file | _____no_output_____ | MIT | file_parse.ipynb | ivanlohvyn/beetroot_parse_pdf |
 6.3.2 Self Check **2. _(IPython Session)_** Given the sets `{10, 20, 30}` and `{5, 10, 15, 20}` use the mathematical set operators to produce the following results:**a.** `{30}` **b.** `{5, 15, 30}` **c.** `{5, 10, 15, 20, 30}` **d.** `{10, 20}`**Answer:** | {10, 20, 30} - {5, 10, 15, 20}
{10, 20, 30} ^ {5, 10, 15, 20}
{10, 20, 30} | {5, 10, 15, 20}
{10, 20, 30} & {5, 10, 15, 20}
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
| _____no_output_____ | Apache-2.0 | examples/ch06/snippets_ipynb/06.03.02selfcheck.ipynb | germanngc/PythonFundamentals |
All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/03_Python_Flow_Control)**

**Python Nested `if` statement**

We can have a nested-**[if-else](https://github.com/milaan9/03_Python_Flow_Control/blob/main/002_Python_if_else_statement.ipynb)** or nested-**[if-elif-else](https://github.com/milaan9/03_Python_Flow_Control/blob/main/003_Python_if_elif_else_statement%20.ipynb)** statement inside another **`if-else`** statement. This is called **nesting** in computer programming. Nested if statements are useful when we want to make a series of decisions.

Any number of these statements can be nested inside one another. Indentation is the only way to figure out the level of nesting. They can get confusing, so they must be avoided unless necessary.

We can use nested if statements for situations where we want to check for a **secondary condition** if the first condition executes as **`True`**.

Syntax:

Example 1:
```python
if condition_outer:
    if condition_inner:
        statement of nested if
    else:
        statement of nested if-else
    statement of outer if
else:
    outer else
statement outside if block
```

Example 2:
```python
if expression1:
    statement(s)
    if expression2:
        statement(s)
    elif expression3:
        statement(s)
    elif expression4:
        statement(s)
    else:
        statement(s)
else:
    statement(s)
``` | # Example 1:
a=10
if a>=20: # Condition FALSE
print ("Condition is True")
else: # Code will go to ELSE body
if a>=15: # Condition FALSE
print ("Checking second value")
else: # Code will go to ELSE body
print ("All Conditions are false")
# Example 2:
x = 10
y = 12
if x > y:
print( "x>y")
elif x < y:
print( "x<y")
if x==10:
print ("x=10")
else:
print ("invalid")
else:
print ("x=y")
# Example 3:
num1 = 0
if (num1 != 0): # For zero condition is FALSE
if(num1 > 0):
print("num1 is a positive number")
else:
print("num1 is a negative number")
else: # For zero condition is TRUE
print("num1 is neither positive nor negative")
# Example 4:
'''In this program, we input a number check if the number is
positive or negative or zero and display an appropriate message.
This time we use nested if statement'''
num = float(input("Enter a number: "))
if num >= 0:
if num == 0:
print("Zero")
else:
print("Positive number")
else:
print("Negative number")
# Example 5:
def number_arithmetic(num1, num2):
if num1 >= num2:
if num1 == num2:
print(f'{num1} and {num2} are equal')
else:
print(f'{num1} is greater than {num2}')
else:
print(f'{num1} is smaller than {num2}')
number_arithmetic(96, 66)
# Output 96 is greater than 66
number_arithmetic(96, 96)
# Output 96 and 96 are equal | 96 is greater than 66
96 and 96 are equal
| MIT | 004_Python_Nested_if_statement.ipynb | chen181016/03_Python_Flow_Control |
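As the introduction warns, deep nesting gets confusing quickly. Where only one branch runs, the same logic can often be flattened with `elif` — a small illustrative rewrite of Example 4 (not part of the original notebook):

```python
# Example 6 (sketch): the nested check from Example 4, flattened
num = float(input("Enter a number: "))

if num == 0:
    print("Zero")
elif num > 0:
    print("Positive number")
else:
    print("Negative number")
```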
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF | _____no_output_____ | MIT | notebooks/Original_U-Net_PyTorch.ipynb | jimmiemunyi/fastai-experiments |
|
The Original U-Net Architecture. Defining the double convolution block: | def conv_block(ni, nf):
return nn.Sequential(
nn.Conv2d(ni, nf, kernel_size=3, stride=1),
nn.ReLU(inplace=True),
nn.Conv2d(nf, nf, kernel_size=3, stride=1),
nn.ReLU(inplace=True)
) | _____no_output_____ | MIT | notebooks/Original_U-Net_PyTorch.ipynb | jimmiemunyi/fastai-experiments |
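Because the 3×3 convolutions use no padding (as in the original paper), each one trims 2 pixels from every spatial dimension. A quick check of the block (a sketch, not from the original notebook):

```python
block = conv_block(1, 64)
x = torch.randn(1, 1, 572, 572)  # the paper's input size
print(block(x).shape)            # torch.Size([1, 64, 568, 568]): 572 -> 570 -> 568
```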
Implementing the original architecture: | class UNET(nn.Module):
def __init__(self, in_channels=1, out_channels=1,
features = [64, 128, 256, 512]):
super(UNET, self).__init__()
self.encoder = nn.ModuleList()
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
# create the contracting path (encoder + bottleneck)
for feature in features:
self.encoder.append(conv_block(in_channels, feature))
in_channels = feature
self.bottleneck = conv_block(features[-1], features[-1]*2)
# create the expansive path
self.decoder = nn.ModuleList()
# reversed because we want to create from last to first
for feature in reversed(features):
self.decoder.append(
nn.Sequential(
nn.ConvTranspose2d(feature*2, feature, kernel_size=2, stride=2),
conv_block(feature*2, feature)
)
)
self.final_conv = nn.Conv2d(features[0], out_channels, kernel_size=1)
def forward(self, x):
activations = []
# forward pass on the Encoder
for module in self.encoder:
x = module(x)
activations.append(x)
x = self.pool(x)
x = self.bottleneck(x)
# reverse the order of activations for easier usage
activations.reverse()
# forward pass on the decoder
for idx in range(len(self.decoder)):
# up scale first
x = self.decoder[idx][0](x)
# resize the skip activation to match x (the original paper center-crops instead)
activation = TF.resize(activations[idx], size=x.shape[2:])
# concat
x = torch.cat([activation, x], dim=1)
# double conv
x = self.decoder[idx][1](x)
return self.final_conv(x) | _____no_output_____ | MIT | notebooks/Original_U-Net_PyTorch.ipynb | jimmiemunyi/fastai-experiments |
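A quick shape test (a sketch; with the paper's 572×572 input and the unpadded convolutions, the output comes out at 388×388):

```python
def test():
    x = torch.randn(1, 1, 572, 572)
    model = UNET(in_channels=1, out_channels=1)
    preds = model(x)
    print(x.shape, '->', preds.shape)  # expect torch.Size([1, 1, 388, 388])

if __name__ == "__main__":
    test()
```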