docs_src/models.rnn.ipynb
###Markdown
models.rnn Type an introduction of the package here. Links to relevant papers: [recurrent neural network](http://www.pnas.org/content/79/8/2554) [AWD LSTM](https://arxiv.org/pdf/1708.02182.pdf)
###Code
from fastai.gen_doc.nbdoc import *
from fastai.models.rnn import *
###Output
_____no_output_____
###Markdown
Global Variable Definitions:
###Code
show_doc(EmbeddingDropout)
###Output
_____no_output_____
###Markdown
[`EmbeddingDropout`](/models.rnn.html#EmbeddingDropout)
###Code
show_doc(EmbeddingDropout.forward)
###Output
_____no_output_____
###Markdown
`EmbeddingDropout.forward`
###Code
show_doc(LinearDecoder)
###Output
_____no_output_____
###Markdown
[`LinearDecoder`](/models.rnn.html#LinearDecoder)
###Code
show_doc(LinearDecoder.forward)
###Output
_____no_output_____
###Markdown
`LinearDecoder.forward`
###Code
show_doc(MultiBatchRNNCore)
###Output
_____no_output_____
###Markdown
[`MultiBatchRNNCore`](/models.rnn.html#MultiBatchRNNCore)
###Code
show_doc(MultiBatchRNNCore.concat)
###Output
_____no_output_____
###Markdown
`MultiBatchRNNCore.concat`
###Code
show_doc(MultiBatchRNNCore.forward)
###Output
_____no_output_____
###Markdown
`MultiBatchRNNCore.forward`
###Code
show_doc(PoolingLinearClassifier)
###Output
_____no_output_____
###Markdown
[`PoolingLinearClassifier`](/models.rnn.html#PoolingLinearClassifier)
###Code
show_doc(PoolingLinearClassifier.forward)
###Output
_____no_output_____
###Markdown
`PoolingLinearClassifier.forward`
###Code
show_doc(PoolingLinearClassifier.pool)
###Output
_____no_output_____
###Markdown
`PoolingLinearClassifier.pool`
###Code
show_doc(RNNCore)
###Output
_____no_output_____
###Markdown
[`RNNCore`](/models.rnn.html#RNNCore)
###Code
show_doc(RNNCore.forward)
###Output
_____no_output_____
###Markdown
`RNNCore.forward`
###Code
show_doc(RNNCore.one_hidden)
###Output
_____no_output_____
###Markdown
`RNNCore.one_hidden`
###Code
show_doc(RNNCore.reset)
###Output
_____no_output_____
###Markdown
`RNNCore.reset`
###Code
show_doc(RNNDropout)
###Output
_____no_output_____
###Markdown
[`RNNDropout`](/models.rnn.html#RNNDropout)
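A minimal usage sketch (the constructor argument and the input layout are assumptions; check the signature shown by `show_doc` above):
###Code
# Hypothetical example: RNNDropout is assumed to apply one consistent dropout
# mask across every timestep of a sequence batch
import torch
rnn_dp = RNNDropout(0.3)
x = torch.randn(2, 5, 10)  # assumed layout: (batch, sequence length, features)
rnn_dp(x)
###Output
_____no_output_____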
###Code
show_doc(RNNDropout.forward)
###Output
_____no_output_____
###Markdown
`RNNDropout.forward`
###Code
show_doc(SequentialRNN)
###Output
_____no_output_____
###Markdown
[`SequentialRNN`](/models.rnn.html#SequentialRNN)
###Code
show_doc(SequentialRNN.reset)
###Output
_____no_output_____
###Markdown
`SequentialRNN.reset`
###Code
show_doc(WeightDropout)
###Output
_____no_output_____
###Markdown
[`WeightDropout`](/models.rnn.html#WeightDropout)
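A minimal usage sketch (argument names are assumptions; check the signature shown by `show_doc` above):
###Code
# Hypothetical example: wrap an LSTM so that dropout is applied to its
# hidden-to-hidden weight matrix on each forward pass (per the AWD LSTM paper)
import torch.nn as nn
lstm = nn.LSTM(input_size=20, hidden_size=20, num_layers=1)
wd_lstm = WeightDropout(lstm, 0.5)
###Output
_____no_output_____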
###Code
show_doc(WeightDropout._setweights)
###Output
_____no_output_____
###Markdown
`WeightDropout._setweights`
###Code
show_doc(WeightDropout.forward)
###Output
_____no_output_____
###Markdown
`WeightDropout.forward`
###Code
show_doc(WeightDropout.reset)
###Output
_____no_output_____
###Markdown
`WeightDropout.reset`
###Code
show_doc(dropout_mask)
###Output
_____no_output_____
###Markdown
[`dropout_mask`](/models.rnn.html#dropout_mask)
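A minimal usage sketch (the argument order is an assumption; check the signature shown by `show_doc` above):
###Code
# Hypothetical example: build a (3, 4) mask from a reference tensor x.
# Each element is zeroed with probability p, and kept elements are rescaled
# by 1/(1-p) so the expected activation is unchanged.
import torch
x = torch.randn(3, 4)
mask = dropout_mask(x, (3, 4), 0.5)
mask
###Output
_____no_output_____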
###Code
show_doc(get_language_model)
###Output
_____no_output_____
###Markdown
[`get_language_model`](/models.rnn.html#get_language_model)
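A hypothetical construction sketch (every parameter and its position here is an assumption; check the signature shown by `show_doc` above):
###Code
# Hypothetical example: build an AWD LSTM language model for a 10,000-token
# vocabulary, using the embedding/hidden sizes from the AWD LSTM paper
lm = get_language_model(10000, 400, 1150, 3, 1)  # vocab size, embedding size, hidden size, layers, pad token (order assumed)
###Output
_____no_output_____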
###Code
show_doc(get_rnn_classifier)
###Output
_____no_output_____
###Markdown
[`get_rnn_classifier`](/models.rnn.html#get_rnn_classifier)
###Code
show_doc(repackage_var)
###Output
_____no_output_____
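###Markdown
[`repackage_var`](/models.rnn.html#repackage_var) detaches a hidden state (or a nested tuple of hidden states) from its computation history, which is how truncated backpropagation through time is typically implemented. A minimal usage sketch (shapes are illustrative assumptions):
###Code
# Hypothetical example: detach an LSTM hidden state between BPTT windows
import torch
h = (torch.zeros(1, 2, 8), torch.zeros(1, 2, 8))  # (hidden state, cell state)
h = repackage_var(h)  # same values, but no longer part of the old graph
###Output
_____no_output_____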
plot_try_7.ipynb
###Markdown
Following are the solutions found for run 7. Seed = 1092841564 (randomly chosen, not farmed). PR439
###Code
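# Note: ProblemInstance, getPoints, plotSolution and compute_length are assumed to be
# defined in this project's own helper modules; their imports are not shown in this notebook.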
x =[339,286,285,284,279,231,230,224,188,187,186,229,280,281,228,282,227,226,225,185,148,84,83,82,51,1,2,0,3,4,5,6,7,8,9,10,11,12,13,14,15,16,32,33,34,35,36,37,38,39,40,42,41,43,44,45,48,46,47,49,50,52,53,54,55,81,80,79,78,56,57,77,85,86,76,58,59,60,75,74,61,31,30,29,27,25,26,28,62,72,73,63,24,22,20,21,23,64,65,66,19,18,17,67,68,69,70,71,87,88,89,90,103,104,105,106,107,108,109,110,111,145,143,146,147,144,184,189,223,222,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,209,210,211,212,213,214,215,216,217,218,219,220,221,190,183,191,182,192,193,194,195,196,197,198,199,200,201,175,174,160,159,158,176,177,178,179,180,181,153,154,155,156,157,137,138,139,140,141,151,152,150,149,142,112,113,115,117,118,120,121,100,119,116,114,102,101,91,92,93,99,94,95,96,97,98,127,132,131,130,129,128,126,125,124,122,123,136,135,134,133,161,162,163,164,166,165,168,167,169,170,171,172,173,203,202,204,205,206,207,208,250,252,251,254,253,315,314,316,317,318,319,320,321,322,323,324,325,326,327,328,330,331,329,302,301,303,304,305,306,307,308,309,310,311,312,313,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,300,272,273,274,275,276,277,278,293,294,295,296,297,298,299,332,333,334,335,336,337,338,359,369,360,361,362,363,364,365,366,367,368,377,384,385,386,387,400,399,401,403,404,416,417,427,428,430,415,414,418,419,420,413,405,406,412,411,422,421,431,432,429,426,423,425,424,409,410,408,407,402,433,437,436,435,434,438,283,343,342,341,340,372,370,371,382,381,383,393,394,392,396,395,397,398,391,390,380,374,373,379,389,388,378,376,375,344,345,347,346,348,349,350,352,351,353,354,355,357,356,358,292,291,290,289,288,287]
P = ProblemInstance('./problems_cleaned/pr439.tsp')
points = getPoints('./problems_cleaned/pr439.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
PCB442
###Code
x = [43,75,74,73,72,71,70,69,37,4,3,36,68,67,66,34,35,2,1,0,441,33,65,101,102,113,125,135,148,160,171,184,399,404,226,233,237,238,265,268,272,275,278,280,281,427,340,341,345,346,347,432,348,349,350,351,342,352,353,354,433,355,356,357,434,358,359,360,429,322,296,297,298,323,428,343,361,362,430,363,364,365,344,366,367,368,369,431,370,371,372,373,374,337,336,426,335,334,306,333,332,331,330,305,304,329,328,327,326,325,324,299,300,301,302,303,421,418,259,258,257,256,255,254,253,252,417,416,277,294,295,321,320,319,293,292,318,317,291,290,316,315,314,288,289,424,420,423,286,287,313,312,339,311,285,284,310,338,309,283,282,308,307,439,279,425,276,273,270,269,266,239,234,227,405,400,185,172,161,149,136,126,385,440,103,114,386,388,115,137,391,151,150,173,395,398,186,174,392,162,152,138,139,127,116,104,105,106,117,128,140,153,164,163,176,188,396,175,187,199,212,221,229,228,220,211,210,402,406,240,241,242,243,244,245,246,247,248,414,249,250,251,230,222,213,401,200,201,189,190,202,214,223,231,232,224,215,203,192,191,179,166,155,142,130,387,389,393,394,178,397,177,165,154,141,129,118,107,438,82,50,51,83,84,52,53,54,55,87,377,86,85,381,380,108,119,384,120,109,121,131,390,143,144,156,167,180,193,194,195,196,181,168,157,145,132,122,110,111,123,133,146,158,169,182,197,208,218,217,207,206,205,204,216,403,408,407,411,412,260,261,235,262,263,415,267,419,271,437,422,274,436,264,236,413,409,410,225,219,209,198,183,170,159,147,134,124,112,382,383,97,96,379,95,378,94,62,63,64,32,376,375,31,30,29,28,61,93,435,100,92,91,90,89,88,56,57,58,59,60,27,26,25,24,23,22,21,20,19,18,17,16,49,81,80,99,79,47,48,15,14,13,46,78,77,98,76,44,45,12,11,10,9,8,7,6,5,38,39,40,41,42]
P = ProblemInstance('./problems/pcb442.tsp')
points = getPoints('./problems/pcb442.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
LIN318
###Code
x =[294,293,292,291,287,280,277,276,273,281,286,288,295,289,285,282,290,284,283,278,279,272,271,314,268,164,165,169,170,167,166,209,163,59,60,64,65,87,86,189,188,187,186,182,175,172,171,168,176,181,183,190,184,180,177,173,174,178,179,185,193,194,203,202,197,195,196,200,201,205,206,99,93,94,98,97,89,88,92,101,100,96,95,91,90,84,83,82,81,77,70,67,66,63,71,76,78,85,79,75,72,80,74,73,68,69,62,61,104,58,55,54,49,47,44,48,316,315,39,103,43,46,50,53,56,57,52,51,45,42,40,41,36,35,25,24,17,16,15,26,23,18,11,19,22,27,32,31,30,29,28,21,20,102,14,10,9,6,5,1,0,2,7,8,4,3,105,106,110,111,114,115,12,13,119,207,125,126,133,134,135,136,33,34,38,37,144,153,160,159,154,152,149,208,148,151,155,158,161,162,157,156,150,147,145,146,141,140,130,129,122,121,120,123,128,131,137,132,127,124,116,107,112,113,109,108,210,211,215,216,219,220,117,118,224,312,230,231,238,239,240,241,138,139,143,142,249,258,265,264,259,257,254,313,253,256,260,263,266,267,262,261,255,252,250,251,246,245,235,234,227,226,225,228,233,236,242,237,232,229,221,212,217,218,214,213,222,223,243,244,247,248,317,269,270,274,275,296,297,303,304,309,308,307,299,298,302,300,301,305,306,310,311,204,199,198,192,191]
P = ProblemInstance('./problems/lin318.tsp')
points = getPoints('./problems/lin318.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
FL1577
###Code
x =[1448,1401,1449,1402,1450,1492,1534,1576,1575,1438,1437,1522,1574,1521,1480,1479,1520,1573,1572,1571,1436,1519,1570,1569,1518,1435,1478,1434,1433,1477,1568,1567,1476,1432,1475,1566,1517,1431,1474,1516,1565,1564,1515,1430,1429,1428,1473,1427,1472,1514,1563,1562,1403,1451,1493,1535,1494,1536,1495,1537,1496,1538,1497,1539,1498,1540,1499,1541,1500,1542,1501,1543,1502,1544,1503,1545,1504,1546,1561,1560,1513,1559,1512,1558,1511,1557,1556,1510,1555,1509,1554,1553,1552,1551,1508,1507,1550,1506,1549,1548,1547,1505,1463,1415,1464,1416,1417,1418,1465,1419,1466,1420,1467,1421,1468,1422,1469,1423,1424,1470,1471,1425,1426,1462,1414,1461,1413,1460,1412,1459,1411,1458,1410,1457,1409,1456,1408,1455,1407,1454,1406,1453,1405,1452,1404,1102,1103,1104,1105,1106,1107,1108,1109,1110,1111,1112,1113,1114,1115,1116,1117,1118,1119,1120,1121,1122,1123,1124,1125,1126,1127,1128,1129,1130,1131,1132,1133,1134,1135,1204,1203,1202,1201,1138,1137,1136,1091,1092,1093,1094,1095,1096,1066,1065,1064,1018,1017,1016,1015,1014,1067,1084,1085,1083,1068,1013,1012,1011,1048,1010,1082,1081,1069,1062,1063,993,994,1009,995,1008,1071,1061,1070,1080,1086,1087,1072,1073,1090,1074,1075,1079,1078,1077,1060,1052,1051,998,997,1050,1049,1006,1007,996,986,972,987,971,988,970,949,950,969,951,968,952,967,953,966,989,990,965,991,964,992,963,954,955,962,956,961,957,960,958,959,1019,1020,1021,1022,1023,1024,1025,1026,1027,1028,1029,1030,1031,1032,1033,1034,1035,1036,1037,1038,1039,1040,1041,1042,1043,1044,1045,1046,1047,874,873,872,871,870,869,868,867,866,865,864,863,862,861,860,859,858,857,856,855,854,853,852,851,850,849,848,847,846,845,844,843,842,841,840,839,838,837,836,835,834,833,832,831,830,829,828,827,826,825,824,823,822,821,820,819,818,817,816,815,814,813,812,811,810,809,808,875,807,745,556,557,558,559,560,561,562,563,564,565,566,555,554,553,552,551,550,549,548,547,546,545,544,543,542,541,540,539,538,537,536,535,534,533,532,531,530,529,528,527,526,594,628,595,627,596,626,597,625,598,624,599,623,600,622,601,621,602,620,603,619,604,618,605,617,606,616,607,615,608,614,613,612,611,610,609,506,507,508,509,510,511,512,513,514,515,516,517,518,519,520,521,522,523,524,525,501,500,499,498,497,496,495,494,493,492,491,490,502,489,503,504,505,488,487,486,485,484,483,429,482,430,481,431,480,432,479,433,478,434,477,435,476,436,475,437,474,438,473,439,472,440,471,470,469,468,467,466,465,464,463,462,461,460,459,458,457,456,455,454,453,452,451,450,449,448,441,442,443,444,445,446,447,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,336,337,338,339,340,341,342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,309,310,162,163,164,165,166,167,168,169,170,308,307,306,305,304,303,302,301,300,299,298,297,296,295,294,293,292,291,290,289,288,287,286,285,284,283,282,281,280,279,278,277,276,275,274,273,272,271,270,269,268,267,266,265,264,263,262,261,260,259,258,257,256,255,254,253,252,251,250,249,248,247,246,245,244,243,242,241,240,239,238,237,236,235,234,233,232,231,230,229,228,227,226,225,224,223,222,221,220,219,218,217,216,215,214,213,212,211,210,209,208,207,206,205,204,203,202,152,201,153,200,154,199,155,198,156,197,196,195,194,193,192,191,190,189,188,187,186,185,184,183,182,181,180,179,178,177,176,175,174,173,172,171,150,151,141,140,139,138,137,136,135,1341,1333,1334,1340,1335,1339,1338,1337,1336,1253,1252,1254,1251,1250,
1249,1255,1248,1256,1247,1257,1342,1343,1258,1246,1245,1259,1244,1332,1344,1345,1331,1243,1346,1347,1260,1330,1261,1348,1329,1349,1328,1242,1241,1350,1351,1352,1262,1240,1239,1264,1263,1327,1353,1326,1354,1325,1355,1324,1265,1238,1237,1266,1236,1235,1267,1323,1356,1357,1322,1358,1359,1321,1268,1234,1269,1233,1270,1232,1231,1271,1319,1320,1360,1361,1362,1318,1272,1230,1229,1273,1317,1363,1316,30,29,28,31,27,32,26,33,68,85,86,87,67,66,65,88,89,64,63,62,90,104,103,102,101,124,123,122,121,120,119,118,125,117,126,116,127,115,128,129,130,131,132,133,98,92,93,80,73,54,53,72,71,81,91,99,100,82,83,84,69,70,52,21,22,23,24,25,46,58,59,51,47,50,60,61,48,35,49,34,11,12,13,14,0,1,2,3,4,5,6,10,36,45,37,44,38,43,39,42,40,41,76,75,77,74,78,79,94,114,105,113,106,112,107,111,108,110,109,97,96,95,57,56,55,20,19,18,17,16,15,7,8,9,1364,1365,1315,1274,1228,1275,1227,1226,1225,1314,1366,1367,1313,1276,1224,1312,1368,1277,1223,1278,1369,1370,1279,1222,1221,1280,1220,1311,1371,1372,1310,1219,1281,1282,1218,1217,1283,1216,1284,1215,1285,1214,1286,1213,1287,1212,1288,1211,1289,1210,1290,1209,1208,1291,1298,1292,1207,1293,1206,1294,1205,1295,1390,1296,1389,1297,1388,1387,1299,1386,1300,1385,1301,1384,1302,1383,1303,1382,1304,1381,1305,1380,1306,1379,1378,1377,1307,1376,1308,1375,1309,1374,1373,134,142,143,144,145,146,147,149,148,161,160,159,158,157,335,334,333,332,331,330,329,328,327,326,325,324,323,322,321,320,319,318,317,316,315,314,313,312,311,704,705,706,707,708,709,710,703,711,702,712,701,713,700,714,699,715,698,716,697,717,696,718,695,719,694,720,693,721,692,722,691,723,690,724,689,725,688,726,687,727,686,728,685,729,684,629,593,630,592,631,591,632,590,633,589,634,588,635,587,636,586,637,585,638,584,639,583,640,582,641,581,642,580,643,579,644,578,645,577,646,576,647,575,648,574,649,573,650,572,651,571,652,570,653,569,654,568,655,567,656,657,658,659,660,661,662,663,664,665,666,667,668,669,744,670,743,671,742,672,741,673,740,674,739,675,738,676,737,677,736,678,735,679,734,680,733,681,732,682,731,683,730,778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,796,797,927,926,925,924,923,922,921,920,919,918,917,916,915,914,913,912,911,910,909,908,777,776,907,775,906,905,774,773,904,903,772,771,902,901,770,769,900,899,768,767,898,897,766,765,896,895,764,763,894,893,762,761,892,760,891,890,759,758,757,889,888,887,756,755,886,885,798,754,753,799,884,883,752,800,882,801,751,750,749,748,804,747,805,746,806,876,877,878,879,803,880,802,881,932,931,930,929,933,928,934,935,948,973,947,985,974,946,975,945,944,943,999,1000,984,1001,983,976,942,977,941,978,940,979,980,939,938,937,936,1059,1058,1057,1003,1056,1004,981,982,1002,1005,1055,1054,1053,1076,1089,1088,1101,1100,1097,1098,1099,1139,1140,1141,1142,1143,1144,1145,1146,1147,1148,1149,1150,1151,1152,1153,1154,1155,1156,1157,1158,1159,1160,1161,1162,1163,1164,1165,1166,1167,1168,1169,1170,1171,1172,1173,1174,1175,1176,1177,1178,1179,1180,1181,1182,1183,1184,1185,1186,1187,1188,1189,1190,1191,1192,1193,1194,1195,1196,1197,1198,1199,1200,1391,1481,1523,1482,1439,1392,1440,1393,1394,1441,1483,1524,1525,1484,1442,1395,1396,1443,1485,1526,1527,1486,1528,1487,1444,1397,1445,1488,1529,1530,1531,1489,1446,1398,1399,1447,1400,1490,1532,1533,1491]
P = ProblemInstance('./problems_cleaned/fl1577.tsp')
points = getPoints('./problems_cleaned/fl1577.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
RAT783
###Code
x=[188,190,191,177,157,165,173,167,169,176,180,193,198,192,170,148,134,133,113,108,105,100,78,83,86,74,65,42,37,39,5,12,10,3,33,27,23,26,25,20,15,31,13,21,34,30,35,19,7,29,18,4,2,9,28,32,8,11,17,6,1,22,16,0,14,24,36,44,56,51,48,55,87,96,99,79,69,80,66,40,45,46,52,57,64,82,93,106,110,119,135,150,155,161,152,126,114,109,122,116,124,145,138,146,154,159,174,183,195,194,204,185,179,171,184,214,215,235,238,241,234,265,269,288,304,312,301,293,276,275,254,245,229,230,221,226,248,247,272,277,273,285,298,322,318,326,334,316,327,343,357,365,370,364,352,345,338,324,331,349,346,344,332,320,321,302,291,294,310,329,335,325,317,311,300,286,281,282,287,309,307,319,297,308,323,333,330,341,361,360,348,336,337,339,347,350,353,359,374,379,385,380,363,382,372,367,384,387,402,414,413,399,401,415,431,445,458,436,433,447,440,434,430,438,456,455,448,459,472,494,517,518,512,514,528,551,556,546,539,530,538,560,568,578,561,553,550,533,529,505,516,511,507,491,485,462,454,437,442,424,421,432,425,429,423,419,407,408,409,418,416,417,410,390,371,369,376,392,375,404,391,388,396,406,400,389,386,381,355,358,342,351,362,383,394,422,428,435,444,451,452,457,465,484,483,481,471,498,510,525,532,547,559,567,575,595,623,639,637,625,610,593,579,585,596,597,589,598,587,577,565,562,563,555,534,535,540,541,520,504,506,500,499,490,482,476,466,468,469,463,475,493,495,496,486,473,479,488,492,487,474,464,449,439,446,450,453,467,477,497,501,508,524,527,543,536,519,513,523,521,526,542,557,564,569,576,583,590,599,600,613,604,617,609,588,574,570,571,552,558,572,580,603,614,628,631,641,638,630,632,629,635,621,608,602,586,606,620,627,616,618,612,611,626,633,640,643,659,654,644,647,651,658,664,671,682,696,703,724,737,733,746,768,777,760,771,764,759,757,775,758,756,767,778,776,772,773,765,761,748,749,762,769,779,780,763,755,751,741,723,727,750,781,782,774,770,766,752,743,728,711,710,713,716,714,702,695,693,701,707,715,722,740,745,730,729,726,692,688,700,699,709,685,694,698,718,719,720,725,739,747,754,753,736,732,712,706,691,690,697,721,738,742,731,735,744,734,717,704,708,687,677,670,678,684,705,689,679,674,680,656,648,660,683,668,665,650,645,652,653,661,649,672,663,666,667,673,675,669,662,686,681,676,657,655,634,642,646,636,615,605,592,591,581,566,573,594,607,624,622,619,601,584,582,554,549,537,545,548,544,531,509,502,522,515,503,489,478,480,470,460,461,443,441,427,420,426,411,397,377,378,368,373,395,398,412,405,403,393,366,356,354,340,328,314,313,303,296,295,283,278,260,246,237,263,264,279,284,253,233,217,213,206,222,244,240,251,271,270,262,259,243,228,227,219,202,196,187,182,200,209,212,220,225,236,256,274,261,250,252,218,208,210,211,224,232,231,239,257,258,249,268,280,289,299,306,315,292,305,290,267,266,255,242,223,207,201,203,186,160,151,142,136,132,115,139,127,111,107,104,101,76,73,88,94,97,75,62,38,47,63,70,84,98,85,81,67,58,60,49,61,53,41,50,71,91,102,92,89,77,68,43,54,59,72,95,90,103,112,118,121,141,147,144,130,131,140,129,128,123,120,125,117,137,156,168,162,175,178,197,199,216,205,189,181,172,164,163,149,143,158,153,166]
P = ProblemInstance('./problems/rat783.tsp')
points = getPoints('./problems/rat783.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
U1060
###Code
x =[115,119,120,117,118,116,110,108,109,889,890,892,893,895,897,898,896,894,900,899,901,902,905,903,904,906,908,909,910,907,911,912,913,914,915,916,917,918,920,860,861,862,865,864,863,219,858,859,857,856,854,855,224,225,226,853,852,228,227,229,230,236,237,241,240,839,242,243,245,244,313,312,314,307,308,303,302,300,294,293,292,289,290,291,288,287,286,311,310,309,247,246,239,238,235,234,249,250,248,251,252,254,253,285,284,283,282,281,280,279,278,277,276,268,267,271,270,269,272,274,275,388,389,273,390,391,392,393,394,395,396,403,404,387,405,406,407,408,409,410,411,420,421,419,412,414,413,418,417,416,415,401,400,402,399,398,397,428,427,429,430,431,432,434,433,435,436,610,609,605,606,608,607,670,682,683,684,685,686,687,688,689,681,680,679,675,674,673,671,672,669,667,663,666,664,657,662,661,643,641,642,668,634,633,632,631,627,625,621,620,619,618,612,611,613,614,615,616,617,623,622,624,626,628,629,630,635,636,637,638,640,639,645,646,644,660,659,652,648,647,649,650,651,653,658,654,656,665,655,718,717,716,715,677,676,678,714,713,712,711,710,709,708,719,720,721,707,706,705,701,702,704,703,690,691,692,694,693,591,592,594,593,595,596,597,598,599,603,604,602,601,600,437,438,439,442,443,444,445,450,451,449,456,455,458,457,448,446,447,441,440,426,425,424,423,422,463,462,461,460,459,553,554,555,556,561,560,559,557,558,454,452,453,570,569,568,587,584,579,580,581,582,583,585,586,588,589,590,696,695,697,700,698,699,722,723,724,725,726,727,728,730,729,731,732,733,734,735,577,576,578,575,574,573,572,571,567,566,565,562,563,564,552,551,550,549,548,539,540,541,542,543,546,547,545,544,736,737,738,739,740,741,742,743,744,745,746,747,748,537,536,535,538,533,534,749,750,751,752,753,754,755,756,757,758,759,760,761,762,763,764,765,766,768,769,767,776,777,775,770,771,772,773,774,778,779,780,781,782,783,784,785,786,787,788,789,790,791,792,793,794,795,797,796,798,799,800,801,802,803,804,342,341,343,346,345,347,508,507,506,509,510,505,512,511,513,504,500,499,498,497,501,502,503,517,518,519,515,514,516,520,521,522,523,524,531,532,530,529,528,527,526,525,478,480,479,477,476,468,467,466,464,465,469,470,472,473,471,475,474,481,482,496,495,494,493,492,490,491,483,484,485,486,487,488,489,367,368,366,365,358,359,360,361,357,356,355,354,348,344,327,328,329,338,339,340,337,336,335,334,332,330,331,333,805,806,812,324,323,322,321,320,325,326,350,349,351,353,352,362,363,364,369,371,375,376,377,378,379,380,381,382,383,386,385,384,373,374,372,370,295,296,298,297,299,301,319,318,317,305,304,306,315,316,815,816,814,813,811,810,809,807,808,824,825,830,829,831,826,828,827,941,940,939,934,935,936,937,938,837,835,836,834,833,832,823,822,821,817,820,819,818,838,840,841,842,843,844,845,851,846,847,848,849,850,921,919,922,923,924,925,926,927,929,930,928,931,932,933,946,945,944,942,943,966,967,968,965,947,948,953,952,951,950,954,955,949,963,964,962,961,960,959,956,958,957,79,78,77,81,80,76,75,74,73,72,71,70,69,68,67,66,992,991,990,989,985,983,979,978,977,976,975,974,970,969,971,972,973,981,980,982,984,986,987,988,993,994,995,996,997,1003,1005,1004,1002,1007,1008,1001,998,999,1000,1013,1014,1012,1015,1011,1010,1009,1021,1006,1022,1023,1024,1025,1026,1020,1019,1016,1017,1018,1027,1029,1028,63,64,65,62,61,59,60,54,58,57,55,56,1033,1034,1032,1031,1030,1035,1036,1037,1038,36,1039,1040,1044,1045,1046,1047,1048,1049,1042,1043,1041,33,32,35,34,40,41,39,37,38,53,52,51,50,42,43,44,49,48,45,47,46,82,88,89,90,91,87,86,85,84,83,99,100,101,97,98,93,92,891,94,95,96,107,102,103,106,105,104,14,15,17,16,18,19,13,12,20,21,22,23,24,25,26,27,28,31,30,29,11,10,9,1052,
1053,1050,1051,1054,1055,1056,1057,1058,1059,3,0,2,1,4,5,6,7,8,121,122,123,128,127,130,129,124,125,126,136,135,134,133,132,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,156,157,155,164,165,159,158,160,161,162,163,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,198,199,197,196,195,266,265,263,264,255,256,257,262,261,258,259,260,233,231,232,223,222,221,220,218,217,216,208,207,206,200,201,202,203,204,205,209,210,211,212,213,215,214,866,867,868,870,871,869,874,875,876,878,877,873,872,879,880,881,888,882,883,885,886,884,131,113,887,111,112,114]
P = ProblemInstance('./problems_cleaned/u1060.tsp')
points = getPoints('./problems_cleaned/u1060.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
CH130
###Code
x =[77,120,125,20,32,22,39,46,21,36,92,43,41,50,59,54,121,95,12,66,13,9,101,5,90,71,48,57,119,52,105,37,91,72,98,73,74,51,64,55,8,56,81,100,122,110,118,83,35,31,112,24,47,62,67,97,109,88,93,76,102,80,11,86,78,94,115,23,28,14,99,18,26,30,16,33,42,103,126,106,69,96,6,25,87,85,68,63,123,128,60,108,75,10,4,44,15,127,104,61,27,114,111,116,38,70,40,0,129,49,1,53,34,3,19,117,79,45,17,7,107,113,2,82,29,58,89,124,84,65]
P = ProblemInstance('./problems/ch130.tsp')
points = getPoints('./problems/ch130.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
EIL76
###Code
x =[44,28,47,29,1,73,27,60,20,46,35,68,70,59,69,19,36,4,14,56,12,53,18,7,6,34,52,13,58,10,65,64,37,9,30,54,24,49,17,23,48,22,55,40,42,41,63,21,61,0,72,32,62,15,2,43,31,8,38,71,57,11,39,16,50,5,67,3,74,75,25,66,33,45,51,26]
P = ProblemInstance('./problems/eil76.tsp')
points = getPoints('./problems/eil76.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
D198
###Code
x = [154,153,138,122,119,118,117,116,115,120,121,114,103,102,101,100,92,93,94,95,89,88,96,97,87,82,69,66,59,54,43,55,58,67,71,72,83,86,98,99,85,84,70,68,57,56,41,42,40,12,11,10,9,8,7,4,3,2,5,6,1,0,39,13,14,15,16,17,18,19,20,23,24,25,26,21,22,28,29,27,33,32,35,36,31,30,37,44,53,60,65,73,81,74,61,52,45,51,64,80,75,76,77,63,49,50,46,38,34,47,48,62,78,79,90,91,104,113,112,111,105,106,110,109,108,107,166,167,181,182,180,176,175,172,173,174,177,179,183,184,178,193,194,197,196,195,185,192,191,186,190,187,188,189,170,171,165,164,163,162,150,149,144,143,136,128,127,126,169,125,124,168,123,137,139,133,131,130,129,132,135,134,140,142,141,146,145,148,147,152,151,161,160,159,158,157,156,155]
P = ProblemInstance('./problems_cleaned/d198.tsp')
points = getPoints('./problems_cleaned/d198.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
###Output
_____no_output_____
###Markdown
KROA100
###Code
x =[92,27,66,57,60,50,86,24,80,68,63,39,53,1,43,49,72,67,84,81,94,12,75,32,36,4,51,77,95,38,29,47,99,40,70,13,2,42,45,28,33,82,54,6,8,56,19,11,26,85,34,61,59,76,22,97,90,44,31,10,14,16,58,73,20,71,9,83,35,98,37,23,17,78,52,87,15,93,21,69,65,25,64,3,96,55,79,30,88,41,7,91,74,18,89,48,5,62,0,46]
P = ProblemInstance('./problems_cleaned/kroA100.tsp')
points = getPoints('./problems_cleaned/kroA100.tsp')
plotSolution(x, points)
found = compute_length(x, P.dist_matrix)
print("GAP IS:", P.GAP(found))
print("Solution is valid: ", P.ValidateSolution(x))
import pandas as pd  # needed for the results DataFrame below

d = {'problem': ['ch130', 'd198', 'eil76', 'fl1577', 'kroa100', 'lin318', 'pcb442', 'pr439', 'rat783', 'u1060'], \
'best_known': [6110, 15780, 538, 22249, 21282, 42029, 50778, 107217, 8806,224094],\
'best_found': [6110, 15780, 538, 22368, 21282, 42143, 51094, 107294, 8888, 227814], \
'iterations': [1269, 57813, 3630, 890, 189, 13660, 14252, 18586, 4280, 2333]}
df = pd.DataFrame(data=d)
df['gap'] = ((df['best_found'] - df['best_known'])/df['best_known']) * 100
df
###Output
_____no_output_____
###Markdown
Mean gap found = 0.40%. Note that the number of iterations is either when the best known solution was found or when 3 minutes had passed. Note also that for the smaller problems an almost optimal solution was found a lot earlier (in terms of iterations), but to arrive at the optimal one many more solutions had to be evaluated.
###Code
df['gap'].mean()
###Output
_____no_output_____
docs/00_tensorflow_fundamentals.ipynb
###Markdown
Getting started with TensorFlow tutorial: A guide to the fundamentals What is TensorFlow?[TensorFlow](https://www.tensorflow.org/) is an open-source end-to-end machine learning library for preprocessing data, modelling data and serving models (getting them into the hands of others). Why use TensorFlow?Rather than building machine learning and deep learning models from scratch, it's more likely you'll use a library such as TensorFlow. This is because it contains many of the most common machine learning functions you'll want to use. What we're going to coverTensorFlow is vast. But the main premise is simple: turn data into numbers (tensors) and build machine learning algorithms to find patterns in them.In this notebook we cover some of the most fundamental TensorFlow operations, more specifically:* Introduction to tensors (creating tensors)* Getting information from tensors (tensor attributes)* Manipulating tensors (tensor operations)* Tensors and NumPy* Using @tf.function (a way to speed up your regular Python functions)* Using GPUs with TensorFlow* Exercises to tryThings to note:* Many of the conventions here will happen automatically behind the scenes (when you build a model) but it's worth knowing so if you see any of these things, you know what's happening.* For any TensorFlow function you see, it's important to be able to check it out in the documentation, for example, going to the Python API docs for all functions and searching for what you need: https://www.tensorflow.org/api_docs/python/ (don't worry if this seems overwhelming at first, with enough practice, you'll get used to navigating the documentation). Introduction to TensorsIf you've ever used NumPy, [tensors](https://www.tensorflow.org/guide/tensor) are kind of like NumPy arrays (we'll see more on this later).For the sake of this notebook and going forward, you can think of a tensor as a multi-dimensional numerical representation (also referred to as n-dimensional, where n can be any number) of something. Where something can be almost anything you can imagine: * It could be numbers themselves (using tensors to represent the price of houses). * It could be an image (using tensors to represent the pixels of an image).* It could be text (using tensors to represent words).* Or it could be some other form of information (or data) you want to represent with numbers.The main difference between tensors and NumPy arrays (also an n-dimensional array of numbers) is that tensors can be used on [GPUs (graphical processing units)](https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/) and [TPUs (tensor processing units)](https://en.wikipedia.org/wiki/Tensor_processing_unit). The benefit of being able to run on GPUs and TPUs is faster computation; this means, if we want to find patterns in the numerical representations of our data, we can generally find them faster using GPUs and TPUs.Okay, we've been talking enough about tensors, let's see them.The first thing we'll do is import TensorFlow under the common alias `tf`.
###Code
# Import TensorFlow
import tensorflow as tf
print(tf.__version__) # find the version number (should be 2.x+)
###Output
2.3.0
###Markdown
Creating Tensors with `tf.constant()`As mentioned before, in general, you usually won't create tensors yourself. This is because TensorFlow has modules built-in (such as [`tf.io`](https://www.tensorflow.org/api_docs/python/tf/io) and [`tf.data`](https://www.tensorflow.org/guide/data)) which are able to read your data sources and automatically convert them to tensors and then later on, neural network models will process these for us.But for now, because we're getting familiar with tensors themselves and how to manipulate them, we'll see how we can create them ourselves.We'll begin by using [`tf.constant()`](https://www.tensorflow.org/api_docs/python/tf/constant).
###Code
# Create a scalar (rank 0 tensor)
scalar = tf.constant(7)
scalar
###Output
_____no_output_____
###Markdown
A scalar is known as a rank 0 tensor, because it has no dimensions (it's just a number).> **Note:** For now, you don't need to know too much about the different ranks of tensors (but we will see more on this later). The important point is knowing tensors can have an unlimited range of dimensions (the exact number will depend on what data you're representing).
###Code
# Check the number of dimensions of a tensor (ndim stands for number of dimensions)
scalar.ndim
# Create a vector (more than 0 dimensions)
vector = tf.constant([10, 10])
vector
# Check the number of dimensions of our vector tensor
vector.ndim
# Create a matrix (more than 1 dimension)
matrix = tf.constant([[10, 7],
[7, 10]])
matrix
matrix.ndim
###Output
_____no_output_____
###Markdown
By default, TensorFlow creates tensors with either an `int32` or `float32` datatype.This is known as [32-bit precision](https://en.wikipedia.org/wiki/Precision_(computer_science)) (the higher the number, the more precise the number, the more space it takes up on your computer).
###Code
# Create another matrix and define the datatype
another_matrix = tf.constant([[10., 7.],
[3., 2.],
[8., 9.]], dtype=tf.float16) # specify the datatype with 'dtype'
another_matrix
# Even though another_matrix contains more numbers, its dimensions stay the same
another_matrix.ndim
# How about a tensor? (more than 2 dimensions, although, all of the above items are also technically tensors)
tensor = tf.constant([[[1, 2, 3],
[4, 5, 6]],
[[7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
tensor
tensor.ndim
###Output
_____no_output_____
###Markdown
This is known as a rank 3 tensor (3-dimensions), however a tensor can have an arbitrary (unlimited) number of dimensions.For example, you might turn a series of images into tensors with shape (224, 224, 3, 32) (demonstrated in the cell below), where:* 224, 224 (the first 2 dimensions) are the height and width of the images in pixels.* 3 is the number of colour channels of the image (red, green, blue).* 32 is the batch size (the number of images a neural network sees at any one time).All of the above variables we've created are actually tensors. But you may also hear them referred to as their different names (the ones we gave them):* **scalar**: a single number.* **vector**: a number with direction (e.g. wind speed with direction).* **matrix**: a 2-dimensional array of numbers.* **tensor**: an n-dimensional array of numbers (where n can be any number, a 0-dimension tensor is a scalar, a 1-dimension tensor is a vector). To add to the confusion, the terms matrix and tensor are often used interchangeably.Going forward since we're using TensorFlow, everything we refer to and use will be tensors.For more on the mathematical difference between scalars, vectors and matrices see the [visual algebra post by Math is Fun](https://www.mathsisfun.com/algebra/scalar-vector-matrix.html).
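As a quick illustration of the image-batch shape described above:
###Code
# A batch of 32 "images", each 224x224 pixels with 3 colour channels,
# represented as a single rank 4 tensor of zeros
images = tf.zeros(shape=(224, 224, 3, 32))
images.shape, images.ndim
###Output
_____no_output_____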
###Code
# Create the same tensor with tf.Variable() and tf.constant()
changeable_tensor = tf.Variable([10, 7])
unchangeable_tensor = tf.constant([10, 7])
changeable_tensor, unchangeable_tensor
###Output
_____no_output_____
###Markdown
Now let's try to change one of the elements of the changeable tensor.
###Code
# Will error (requires the .assign() method)
changeable_tensor[0] = 7
changeable_tensor
###Output
_____no_output_____
###Markdown
To change an element of a `tf.Variable()` tensor requires the `assign()` method.
###Code
# Won't error
changeable_tensor[0].assign(7)
changeable_tensor
###Output
_____no_output_____
###Markdown
Now let's try to change a value in a `tf.constant()` tensor.
###Code
# Will error (can't change tf.constant())
unchangeable_tensor[0].assign(7)
unchangeable_tensor
###Output
_____no_output_____
###Markdown
Which one should you use? `tf.constant()` or `tf.Variable()`?It will depend on what your problem requires. However, most of the time, TensorFlow will automatically choose for you (when loading data or modelling data). Creating random tensorsRandom tensors are tensors of some arbitrary size which contain random numbers.Why would you want to create random tensors? This is what neural networks use to initialize their weights (patterns) that they're trying to learn in the data.For example, the process of a neural network learning often involves taking a random n-dimensional array of numbers and refining them until they represent some kind of pattern (a compressed way to represent the original data).**How a network learns:** *A network learns by starting with random patterns (1) then going through demonstrative examples of data (2) whilst trying to update its random patterns to represent the examples (3).*We can create random tensors by using the [`tf.random.Generator`](https://www.tensorflow.org/guide/random_numbers#the_tfrandomgenerator_class) class.
###Code
# Create two random (but the same) tensors
random_1 = tf.random.Generator.from_seed(42) # set the seed for reproducibility
random_1 = random_1.normal(shape=(3, 2)) # create tensor from a normal distribution
random_2 = tf.random.Generator.from_seed(42)
random_2 = random_2.normal(shape=(3, 2))
# Are they equal?
random_1, random_2, random_1 == random_2
###Output
_____no_output_____
###Markdown
The random tensors we've made are actually [pseudorandom numbers](https://www.computerhope.com/jargon/p/pseudo-random.htm) (they appear as random, but really aren't).If we set a seed we'll get the same random numbers (if you've ever used NumPy, this is similar to `np.random.seed(42)`). Setting the seed says, "hey, create some random numbers, but flavour them with X" (X is the seed).What do you think will happen when we change the seed?
###Code
# Create two random (and different) tensors
random_3 = tf.random.Generator.from_seed(42)
random_3 = random_3.normal(shape=(3, 2))
random_4 = tf.random.Generator.from_seed(11)
random_4 = random_4.normal(shape=(3, 2))
# Check the tensors and see if they are equal
random_3, random_4, random_1 == random_3, random_3 == random_4
###Output
_____no_output_____
###Markdown
What if you wanted to shuffle the order of a tensor?Wait, why would you want to do that?Let's say you're working with 15,000 images of cats and dogs, where the first 10,000 images were of cats and the next 5,000 were of dogs. This order could affect how a neural network learns (it may overfit by learning the order of the data); instead, it might be a good idea to move your data around.
###Code
# Shuffle a tensor (valuable for when you want to shuffle your data)
not_shuffled = tf.constant([[10, 7],
[3, 4],
[2, 5]])
# Gets different results each time
tf.random.shuffle(not_shuffled)
# Shuffle in the same order every time using the seed parameter (won't actually be the same)
tf.random.shuffle(not_shuffled, seed=42)
###Output
_____no_output_____
###Markdown
Wait... why didn't the numbers come out the same?It's due to rule 4 of the [`tf.random.set_seed()`](https://www.tensorflow.org/api_docs/python/tf/random/set_seed) documentation.> "4. If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence."`tf.random.set_seed(42)` sets the global seed, and the `seed` parameter in `tf.random.shuffle(seed=42)` sets the operation seed. This is because, as the documentation puts it, "Operations that rely on a random seed actually derive it from two seeds: the global and operation-level seeds."
###Code
# Shuffle in the same order every time
# Set the global random seed
tf.random.set_seed(42)
# Set the operation random seed
tf.random.shuffle(not_shuffled, seed=42)
# Set the global random seed
tf.random.set_seed(42) # if you comment this out you'll get different results
# Set the operation random seed
tf.random.shuffle(not_shuffled)
###Output
_____no_output_____
###Markdown
Other ways to make tensorsThough you might rarely use these (remember, many tensor operations are done behind the scenes for you), you can use [`tf.ones()`](https://www.tensorflow.org/api_docs/python/tf/ones) to create a tensor of all ones and [`tf.zeros()`](https://www.tensorflow.org/api_docs/python/tf/zeros) to create a tensor of all zeros.
###Code
# Make a tensor of all ones
tf.ones(shape=(3, 2))
# Make a tensor of all zeros
tf.zeros(shape=(3, 2))
###Output
_____no_output_____
###Markdown
You can also turn NumPy arrays into tensors.Remember, the main difference between tensors and NumPy arrays is that tensors can be run on GPUs.> **Note:** A matrix or tensor is typically represented by a capital letter (e.g. `X` or `A`) whereas a vector is typically represented by a lowercase letter (e.g. `y` or `b`).
###Code
import numpy as np
numpy_A = np.arange(1, 25, dtype=np.int32) # create a NumPy array between 1 and 25
A = tf.constant(numpy_A,
shape=[2, 4, 3]) # note: the shape total (2*4*3) has to match the number of elements in the array
numpy_A, A
###Output
_____no_output_____
###Markdown
Getting information from tensors (shape, rank, size)There will be times when you'll want to get different pieces of information from your tensors, in particular, you should know the following tensor vocabulary:* **Shape:** The length (number of elements) of each of the dimensions of a tensor.* **Rank:** The number of tensor dimensions. A scalar has rank 0, a vector has rank 1, a matrix is rank 2, a tensor has rank n.* **Axis** or **Dimension:** A particular dimension of a tensor.* **Size:** The total number of items in the tensor.You'll use these especially when you're trying to line up the shapes of your data to the shapes of your model. For example, making sure the shape of your image tensors matches the shape of your model's input layer.We've already seen one of these before using the `ndim` attribute. Let's see the rest.
###Code
# Create a rank 4 tensor (4 dimensions)
rank_4_tensor = tf.zeros([2, 3, 4, 5])
rank_4_tensor
rank_4_tensor.shape, rank_4_tensor.ndim, tf.size(rank_4_tensor)
# Get various attributes of tensor
print("Datatype of every element:", rank_4_tensor.dtype)
print("Number of dimensions (rank):", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0])
print("Elements along last axis of tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (2*3*4*5):", tf.size(rank_4_tensor).numpy()) # .numpy() converts to NumPy array
###Output
Datatype of every element: <dtype: 'float32'>
Number of dimensions (rank): 4
Shape of tensor: (2, 3, 4, 5)
Elements along axis 0 of tensor: 2
Elements along last axis of tensor: 5
Total number of elements (2*3*4*5): 120
###Markdown
You can also index tensors just like Python lists.
###Code
# Get the first 2 items of each dimension
rank_4_tensor[:2, :2, :2, :2]
# Get the dimension from each index except for the final one
rank_4_tensor[:1, :1, :1, :]
# Create a rank 2 tensor (2 dimensions)
rank_2_tensor = tf.constant([[10, 7],
[3, 4]])
# Get the last item of each row
rank_2_tensor[:, -1]
###Output
_____no_output_____
###Markdown
You can also add dimensions to your tensor whilst keeping the same information present using `tf.newaxis`.
###Code
# Add an extra dimension (to the end)
rank_3_tensor = rank_2_tensor[..., tf.newaxis] # in Python "..." means "all dimensions prior to"
rank_2_tensor, rank_3_tensor # shape (2, 2), shape (2, 2, 1)
###Output
_____no_output_____
###Markdown
You can achieve the same using [`tf.expand_dims()`](https://www.tensorflow.org/api_docs/python/tf/expand_dims).
###Code
tf.expand_dims(rank_2_tensor, axis=-1) # "-1" means last axis
###Output
_____no_output_____
###Markdown
Manipulating tensors (tensor operations)Finding patterns in tensors (numerical representations of data) requires manipulating them.Again, when building models in TensorFlow, much of this pattern discovery is done for you. Basic operationsYou can perform many of the basic mathematical operations directly on tensors using Python operators such as `+`, `-`, `*`.
###Code
# You can add values to a tensor using the addition operator
tensor = tf.constant([[10, 7], [3, 4]])
tensor + 10
###Output
_____no_output_____
###Markdown
Since we used `tf.constant()`, the original tensor is unchanged (the addition gets done on a copy).
###Code
# Original tensor unchanged
tensor
###Output
_____no_output_____
###Markdown
Other operators also work.
###Code
# Multiplication (known as element-wise multiplication)
tensor * 10
# Subtraction
tensor - 10
###Output
_____no_output_____
###Markdown
You can also use the equivalent TensorFlow function. Using the TensorFlow function (where possible) has the advantage of being sped up later down the line when running as part of a [TensorFlow graph](https://www.tensorflow.org/tensorboard/graphs).
###Code
# Use the tensorflow function equivalent of the '*' (multiply) operator
tf.multiply(tensor, 10)
# The original tensor is still unchanged
tensor
###Output
_____no_output_____
###Markdown
Matrix multiplicationOne of the most common operations in machine learning algorithms is [matrix multiplication](https://www.mathsisfun.com/algebra/matrix-multiplying.html).TensorFlow implements this matrix multiplication functionality in the [`tf.matmul()`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul) method.The main two rules for matrix multiplication to remember are:1. The inner dimensions must match: * `(3, 5) @ (3, 5)` won't work * `(5, 3) @ (3, 5)` will work * `(3, 5) @ (5, 3)` will work2. The resulting matrix has the shape of the outer dimensions: * `(5, 3) @ (3, 5)` -> `(5, 5)` * `(3, 5) @ (5, 3)` -> `(3, 3)`> **Note:** '`@`' in Python is the symbol for matrix multiplication.
###Code
# Matrix multiplication in TensorFlow
print(tensor)
tf.matmul(tensor, tensor)
# Matrix multiplication with Python operator '@'
tensor @ tensor
###Output
_____no_output_____
###Markdown
Both of these examples work because our `tensor` variable is of shape (2, 2).What if we created some tensors which had mismatched shapes?
###Code
# Create (3, 2) tensor
X = tf.constant([[1, 2],
[3, 4],
[5, 6]])
# Create another (3, 2) tensor
Y = tf.constant([[7, 8],
[9, 10],
[11, 12]])
X, Y
# Try to matrix multiply them (will error)
X @ Y
###Output
_____no_output_____
###Markdown
Trying to matrix multiply two tensors with the shape `(3, 2)` errors because the inner dimensions don't match.We need to either:* Reshape X to `(2, 3)` so it's `(2, 3) @ (3, 2)`.* Reshape Y to `(2, 3)` so it's `(3, 2) @ (2, 3)`.We can do this with either:* [`tf.reshape()`](https://www.tensorflow.org/api_docs/python/tf/reshape) - allows us to reshape a tensor into a defined shape.* [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) - switches the dimensions of a given tensor.Let's try `tf.reshape()` first.
###Code
# Example of reshape (3, 2) -> (2, 3)
tf.reshape(Y, shape=(2, 3))
# Try matrix multiplication with reshaped Y
X @ tf.reshape(Y, shape=(2, 3))
###Output
_____no_output_____
###Markdown
It worked! Let's try the same with a transposed `X`, except this time we'll use [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) and `tf.matmul()`.
###Code
# Example of transpose (3, 2) -> (2, 3)
tf.transpose(X)
# Try matrix multiplication
tf.matmul(tf.transpose(X), Y)
# You can achieve the same result with parameters
tf.matmul(a=X, b=Y, transpose_a=True, transpose_b=False)
###Output
_____no_output_____
###Markdown
Notice the difference in the resulting shapes when transposing `X` or reshaping `Y`. This is because of the 2nd rule mentioned above: * `(2, 3) @ (3, 2)` -> `(2, 2)` done with `tf.matmul(tf.transpose(X), Y)` * `(3, 2) @ (2, 3)` -> `(3, 3)` done with `X @ tf.reshape(Y, shape=(2, 3))`This kind of data manipulation is a reminder: you'll spend a lot of your time in machine learning and working with neural networks reshaping data (in the form of tensors) to prepare it to be used with various operations (such as feeding it to a model). The dot productMultiplying matrices by each other is also referred to as the dot product.You can perform the `tf.matmul()` operation using [`tf.tensordot()`](https://www.tensorflow.org/api_docs/python/tf/tensordot).
###Code
# Perform the dot product on X and Y (requires X to be transposed)
tf.tensordot(tf.transpose(X), Y, axes=1)
###Output
_____no_output_____
###Markdown
You might notice that although using both `reshape` and `transpose` work, you get different results when using each. Let's see an example, first with `tf.transpose()` then with `tf.reshape()`.
###Code
# Perform matrix multiplication between X and Y (transposed)
tf.matmul(X, tf.transpose(Y))
# Perform matrix multiplication between X and Y (reshaped)
tf.matmul(X, tf.reshape(Y, (2, 3)))
###Output
_____no_output_____
###Markdown
Hmm... they result in different values. Which is strange because when dealing with `Y` (a `(3x2)` matrix), reshaping to `(2, 3)` and transposing it result in the same shape.
###Code
# Check shapes of Y, reshaped Y and transposed Y
Y.shape, tf.reshape(Y, (2, 3)).shape, tf.transpose(Y).shape
###Output
_____no_output_____
###Markdown
But calling `tf.reshape()` and `tf.transpose()` on `Y` doesn't necessarily result in the same values.
###Code
# Check values of Y, reshaped Y and transposed Y
print("Normal Y:")
print(Y, "\n") # "\n" for newline
print("Y reshaped to (2, 3):")
print(tf.reshape(Y, (2, 3)), "\n")
print("Y transposed:")
print(tf.transpose(Y))
###Output
Normal Y:
tf.Tensor(
[[ 7 8]
[ 9 10]
[11 12]], shape=(3, 2), dtype=int32)
Y reshaped to (2, 3):
tf.Tensor(
[[ 7 8 9]
[10 11 12]], shape=(2, 3), dtype=int32)
Y transposed:
tf.Tensor(
[[ 7 9 11]
[ 8 10 12]], shape=(2, 3), dtype=int32)
###Markdown
As you can see, the outputs of `tf.reshape()` and `tf.transpose()` when called on `Y`, even though they have the same shape, are different.This can be explained by the default behaviour of each method:* [`tf.reshape()`](https://www.tensorflow.org/api_docs/python/tf/reshape) - change the shape of the given tensor (first) and then insert values in the order they appear (in our case, 7, 8, 9, 10, 11, 12).* [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) - swap the order of the axes, by default the last axis becomes the first, however the order can be changed using the [`perm` parameter](https://www.tensorflow.org/api_docs/python/tf/transpose). So which should you use?Again, most of the time these operations (when they need to be run, such as during the training of a neural network) will be implemented for you.But generally, whenever performing a matrix multiplication and the shapes of two matrices don't line up, you will transpose (not reshape) one of them in order to line them up. Matrix multiplication tidbits* If we transposed `Y`, it would be represented as $\mathbf{Y}^\mathsf{T}$ (note the capital T for transpose).* Get an illustrative view of matrix multiplication [by Math is Fun](https://www.mathsisfun.com/algebra/matrix-multiplying.html).* Try a hands-on demo of matrix multiplication: http://matrixmultiplication.xyz/. Changing the datatype of a tensorSometimes you'll want to alter the default datatype of your tensor. This is common when you want to compute using less precision (e.g. 16-bit floating point numbers vs. 32-bit floating point numbers). Computing with less precision is useful on devices with less computing capacity such as mobile devices (because the fewer bits, the less space the computations require).You can change the datatype of a tensor using [`tf.cast()`](https://www.tensorflow.org/api_docs/python/tf/cast).
###Code
# Create a new tensor with default datatype (float32)
B = tf.constant([1.7, 7.4])
# Create a new tensor with default datatype (int32)
C = tf.constant([1, 7])
B, C
# Change from float32 to float16 (reduced precision)
B = tf.cast(B, dtype=tf.float16)
B
# Change from int32 to float32
C = tf.cast(C, dtype=tf.float32)
C
###Output
_____no_output_____
###Markdown
Getting the absolute valueSometimes you'll want the absolute values (all values are positive) of elements in your tensors.To do so, you can use [`tf.abs()`](https://www.tensorflow.org/api_docs/python/tf/math/abs).
###Code
# Create tensor with negative values
D = tf.constant([-7, -10])
D
# Get the absolute values
tf.abs(D)
###Output
_____no_output_____
###Markdown
Finding the min, max, mean, sum (aggregation)You can quickly aggregate (perform a calculation on a whole tensor) tensors to find things like the minimum value, maximum value, mean and sum of all the elements.To do so, aggregation methods typically have the syntax `reduce_[action]()`, such as:* [`tf.reduce_min()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_min) - find the minimum value in a tensor.* [`tf.reduce_max()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max) - find the maximum value in a tensor (helpful for when you want to find the highest prediction probability).* [`tf.reduce_mean()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean) - find the mean of all elements in a tensor.* [`tf.reduce_sum()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) - find the sum of all elements in a tensor.* **Note:** typically, each of these is under the `math` module, e.g. `tf.math.reduce_min()` but you can use the alias `tf.reduce_min()`.Let's see them in action.
###Code
# Create a tensor with 50 random values between 0 and 100
E = tf.constant(np.random.randint(low=0, high=100, size=50))
E
# Find the minimum
tf.reduce_min(E)
# Find the maximum
tf.reduce_max(E)
# Find the mean
tf.reduce_mean(E)
# Find the sum
tf.reduce_sum(E)
###Output
_____no_output_____
###Markdown
You can also find the standard deviation ([`tf.math.reduce_std()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_std)) and variance ([`tf.math.reduce_variance()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_variance)) of elements in a tensor using similar methods (see the quick demo below). Finding the positional maximum and minimumHow about finding the position in a tensor where the maximum value occurs?This is helpful when you want to line up your labels (say `['Green', 'Blue', 'Red']`) with your prediction probabilities tensor (e.g. `[0.98, 0.01, 0.01]`).In this case, the predicted label (the one with the highest prediction probability) would be `'Green'`.You can do the same for the minimum (if required) with the following:* [`tf.argmax()`](https://www.tensorflow.org/api_docs/python/tf/math/argmax) - find the position of the maximum element in a given tensor.* [`tf.argmin()`](https://www.tensorflow.org/api_docs/python/tf/math/argmin) - find the position of the minimum element in a given tensor.
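Before looking at the positional methods, here's a quick demo of `tf.math.reduce_std()` and `tf.math.reduce_variance()` (both require float inputs, so we cast `E` from above first):
###Code
# Find the standard deviation and variance of all elements in E
E_float = tf.cast(E, dtype=tf.float32)
tf.math.reduce_std(E_float), tf.math.reduce_variance(E_float)
###Output
_____no_output_____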
###Code
# Create a tensor with 50 values between 0 and 1
F = tf.constant(np.random.random(50))
F
# Find the maximum element position of F
tf.argmax(F)
# Find the minimum element position of F
tf.argmin(F)
# Find the maximum element position of F
print(f"The maximum value of F is at position: {tf.argmax(F).numpy()}")
print(f"The maximum value of F is: {tf.reduce_max(F).numpy()}")
print(f"Using tf.argmax() to index F, the maximum value of F is: {F[tf.argmax(F)].numpy()}")
print(f"Are the two max values the same (they should be)? {F[tf.argmax(F)].numpy() == tf.reduce_max(F).numpy()}")
###Output
The maximum value of F is at position: 16
The maximum value of F is: 0.9999594897376615
Using tf.argmax() to index F, the maximum value of F is: 0.9999594897376615
Are the two max values the same (they should be)? True
###Markdown
Squeezing a tensor (removing all single dimensions)If you need to remove single-dimensions from a tensor (dimensions with size 1), you can use `tf.squeeze()`.* [`tf.squeeze()`](https://www.tensorflow.org/api_docs/python/tf/squeeze) - remove all dimensions of 1 from a tensor.
###Code
# Create a rank 5 (5 dimensions) tensor of 50 numbers between 0 and 100
G = tf.constant(np.random.randint(0, 100, 50), shape=(1, 1, 1, 1, 50))
G.shape, G.ndim
# Squeeze tensor G (remove all 1 dimensions)
G_squeezed = tf.squeeze(G)
G_squeezed.shape, G_squeezed.ndim
###Output
_____no_output_____
###Markdown
One-hot encodingIf you have a tensor of indicies and would like to one-hot encode it, you can use [`tf.one_hot()`](https://www.tensorflow.org/api_docs/python/tf/one_hot).You should also specify the `depth` parameter (the level which you want to one-hot encode to).
###Code
# Create a list of indices
some_list = [0, 1, 2, 3]
# One hot encode them
tf.one_hot(some_list, depth=4)
###Output
_____no_output_____
###Markdown
You can also specify values for `on_value` and `off_value` instead of the default `0` and `1`.
###Code
# Specify custom values for on and off encoding
tf.one_hot(some_list, depth=4, on_value="We're live!", off_value="Offline")
###Output
_____no_output_____
###Markdown
Squaring, log, square rootMany other common mathematical operations you'd like to perform at some stage, probably exist.Let's take a look at:* [`tf.square()`](https://www.tensorflow.org/api_docs/python/tf/math/square) - get the square of every value in a tensor. * [`tf.sqrt()`](https://www.tensorflow.org/api_docs/python/tf/math/sqrt) - get the squareroot of every value in a tensor (**note:** the elements need to be floats or this will error).* [`tf.math.log()`](https://www.tensorflow.org/api_docs/python/tf/math/log) - get the natural log of every value in a tensor (elements need to floats).
###Code
# Create a new tensor
H = tf.constant(np.arange(1, 10))
H
# Square it
tf.square(H)
# Find the squareroot (will error), needs to be non-integer
tf.sqrt(H)
# Change H to float32
H = tf.cast(H, dtype=tf.float32)
H
# Find the square root
tf.sqrt(H)
# Find the log (input also needs to be float)
tf.math.log(H)
###Output
_____no_output_____
###Markdown
Manipulating `tf.Variable` tensorsTensors created with `tf.Variable()` can be changed in place using methods such as:* [`.assign()`](https://www.tensorflow.org/api_docs/python/tf/Variableassign) - assign a different value to a particular index of a variable tensor.* [`.add_assign()`](https://www.tensorflow.org/api_docs/python/tf/Variableassign_add) - add to an existing value and reassign it at a particular index of a variable tensor.
###Code
# Create a variable tensor
I = tf.Variable(np.arange(0, 5))
I
# Assign the final value a new value of 50
I.assign([0, 1, 2, 3, 50])
# The change happens in place (the last value is now 50, not 4)
I
# Add 10 to every element in I
I.assign_add([10, 10, 10, 10, 10])
# Again, the change happens in place
I
###Output
_____no_output_____
###Markdown
Tensors and NumPyWe've seen some examples of tensors interact with NumPy arrays, such as, using NumPy arrays to create tensors. Tensors can also be converted to NumPy arrays using:* `np.array()` - pass a tensor to convert to an ndarray (NumPy's main datatype).* `tensor.numpy()` - call on a tensor to convert to an ndarray.Doing this is helpful as it makes tensors iterable as well as allows us to use any of NumPy's methods on them.
###Code
# Create a tensor from a NumPy array
J = tf.constant(np.array([3., 7., 10.]))
J
# Convert tensor J to NumPy with np.array()
np.array(J), type(np.array(J))
# Convert tensor J to NumPy with .numpy()
J.numpy(), type(J.numpy())
###Output
_____no_output_____
###Markdown
By default tensors have `dtype=float32`, where as NumPy arrays have `dtype=float64`.This is because neural networks (which are usually built with TensorFlow) can generally work very well with less precision (32-bit rather than 64-bit).
###Code
# Create a tensor from NumPy and from an array
numpy_J = tf.constant(np.array([3., 7., 10.])) # will be float64 (due to NumPy)
tensor_J = tf.constant([3., 7., 10.]) # will be float32 (due to being TensorFlow default)
numpy_J.dtype, tensor_J.dtype
###Output
_____no_output_____
###Markdown
Using `@tf.function`In your TensorFlow adventures, you might come across Python functions which have the decorator [`@tf.function`](https://www.tensorflow.org/api_docs/python/tf/function).If you aren't sure what Python decorators do, [read RealPython's guide on them](https://realpython.com/primer-on-python-decorators/).But in short, decorators modify a function in one way or another.In the `@tf.function` decorator case, it turns a Python function into a callable TensorFlow graph. Which is a fancy way of saying, if you've written your own Python function, and you decorate it with `@tf.function`, when you export your code (to potentially run on another device), TensorFlow will attempt to convert it into a fast(er) version of itself (by making it part of a computation graph).For more on this, read the [Better performnace with tf.function](https://www.tensorflow.org/guide/function) guide.
###Code
# Create a simple function
def function(x, y):
return x ** 2 + y
x = tf.constant(np.arange(0, 10))
y = tf.constant(np.arange(10, 20))
function(x, y)
# Create the same function and decorate it with tf.function
@tf.function
def tf_function(x, y):
return x ** 2 + y
tf_function(x, y)
###Output
_____no_output_____
###Markdown
If you noticed no difference between the above two functions (the decorated one and the non-decorated one) you'd be right.Much of the difference happens behind the scenes. One of the main ones being potential code speed-ups where possible. Finding access to GPUsWe've mentioned GPUs plenty of times throughout this notebook.So how do you check if you've got one available?You can check if you've got access to a GPU using [`tf.config.list_physical_devices()`](https://www.tensorflow.org/guide/gpu).
###Code
print(tf.config.list_physical_devices('GPU'))
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
If the above outputs an empty array (or nothing), it means you don't have access to a GPU (or at least TensorFlow can't find it).If you're running in Google Colab, you can access a GPU by going to *Runtime -> Change Runtime Type -> Select GPU* (**note:** after doing this your notebook will restart and any variables you've saved will be lost).Once you've changed your runtime type, run the cell below.
###Code
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
If you've got access to a GPU, the cell above should output something like:`[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]`You can also find information about your GPU using `!nvidia-smi`.
###Code
!nvidia-smi
###Output
Thu Nov 26 00:41:59 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 75C P0 33W / 70W | 229MiB / 15079MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Getting started with TensorFlow tutorial: A guide to the fundamentals What is TensorFlow?[TensorFlow](https://www.tensorflow.org/) is an open-source end-to-end machine learning library for preprocessing data, modelling data and serving models (getting them into the hands of others). Why use TensorFlow?Rather than building machine learning and deep learning models from scratch, it's more likely you'll use a library such as TensorFlow. This is because it contains many of the most common machine learning functions you'll want to use. What we're going to coverTensorFlow is vast. But the main premise is simple: turn data into numbers (tensors) and build machine learning algorithms to find patterns in them.In this notebook we cover some of the most fundamental TensorFlow operations, more specifically:* Introduction to tensors (creating tensors)* Getting information from tensors (tensor attributes)* Manipulating tensors (tensor operations)* Tensors and NumPy* Using @tf.function (a way to speed up your regular Python functions)* Using GPUs with TensorFlow* Exercises to tryThings to note:* Many of the conventions here will happen automatically behind the scenes (when you build a model) but it's worth knowing so if you see any of these things, you know what's happening.* For any TensorFlow function you see, it's important to be able to check it out in the documentation, for example, going to the Python API docs for all functions and searching for what you need: https://www.tensorflow.org/api_docs/python/ (don't worry if this seems overwhelming at first, with enough practice, you'll get used to navigating the documentation). Introduction to TensorsIf you've ever used NumPy, [tensors](https://www.tensorflow.org/guide/tensor) are kind of like NumPy arrays (we'll see more on this later).For the sake of this notebook and going forward, you can think of a tensor as a multi-dimensional numerical representation (also referred to as n-dimensional, where n can be any number) of something. Where something can be almost anything you can imagine: * It could be numbers themselves (using tensors to represent the price of houses). * It could be an image (using tensors to represent the pixels of an image).* It could be text (using tensors to represent words).* Or it could be some other form of information (or data) you want to represent with numbers.The main difference between tensors and NumPy arrays (also an n-dimensional array of numbers) is that tensors can be used on [GPUs (graphics processing units)](https://blogs.nvidia.com/blog/2009/12/16/whats-the-difference-between-a-cpu-and-a-gpu/) and [TPUs (tensor processing units)](https://en.wikipedia.org/wiki/Tensor_processing_unit). The benefit of being able to run on GPUs and TPUs is faster computation; this means that if we want to find patterns in the numerical representations of our data, we can generally find them faster using GPUs and TPUs.Okay, we've been talking enough about tensors, let's see them.The first thing we'll do is import TensorFlow under the common alias `tf`.
###Code
# Import TensorFlow
import tensorflow as tf
print(tf.__version__) # find the version number (should be 2.x+)
###Output
2.3.0
###Markdown
Creating Tensors with `tf.constant()`As mentioned before, in general, you usually won't create tensors yourself. This is because TensorFlow has modules built-in (such as [`tf.io`](https://www.tensorflow.org/api_docs/python/tf/io) and [`tf.data`](https://www.tensorflow.org/guide/data)) which are able to read your data sources and automatically convert them to tensors and then later on, neural network models will process these for us.But for now, because we're getting familiar with tensors themselves and how to manipulate them, we'll see how we can create them ourselves.We'll begin by using [`tf.constant()`](https://www.tensorflow.org/api_docs/python/tf/constant).
###Code
# Create a scalar (rank 0 tensor)
scalar = tf.constant(7)
scalar
###Output
_____no_output_____
###Markdown
A scalar is known as a rank 0 tensor because it has no dimensions (it's just a number).> 🔑 **Note:** For now, you don't need to know too much about the different ranks of tensors (but we will see more on this later). The important point is knowing tensors can have an unlimited range of dimensions (the exact amount will depend on what data you're representing).
###Code
# Check the number of dimensions of a tensor (ndim stands for number of dimensions)
scalar.ndim
# Create a vector (more than 0 dimensions)
vector = tf.constant([10, 10])
vector
# Check the number of dimensions of our vector tensor
vector.ndim
# Create a matrix (more than 1 dimension)
matrix = tf.constant([[10, 7],
[7, 10]])
matrix
matrix.ndim
###Output
_____no_output_____
###Markdown
By default, TensorFlow creates tensors with either an `int32` or `float32` datatype.This is known as [32-bit precision](https://en.wikipedia.org/wiki/Precision_(computer_science)) (the higher the number, the more precise the number, the more space it takes up on your computer).
###Code
# Create another matrix and define the datatype
another_matrix = tf.constant([[10., 7.],
[3., 2.],
[8., 9.]], dtype=tf.float16) # specify the datatype with 'dtype'
another_matrix
# Even though another_matrix contains more numbers, its dimensions stay the same
another_matrix.ndim
# How about a tensor? (more than 2 dimensions, although, all of the above items are also technically tensors)
tensor = tf.constant([[[1, 2, 3],
[4, 5, 6]],
[[7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
tensor
tensor.ndim
###Output
_____no_output_____
###Markdown
This is known as a rank 3 tensor (3-dimensions), however a tensor can have an arbitrary (unlimited) amount of dimensions.For example, you might turn a series of images into tensors with shape (224, 224, 3, 32), where:* 224, 224 (the first 2 dimensions) are the height and width of the images in pixels.* 3 is the number of colour channels of the image (red, green, blue).* 32 is the batch size (the number of images a neural network sees at any one time).All of the above variables we've created are actually tensors. But you may also hear them referred to as their different names (the ones we gave them):* **scalar**: a single number.* **vector**: a number with direction (e.g. wind speed with direction).* **matrix**: a 2-dimensional array of numbers.* **tensor**: an n-dimensional array of numbers (where n can be any number, a 0-dimension tensor is a scalar, a 1-dimension tensor is a vector). To add to the confusion, the terms matrix and tensor are often used interchangeably.Going forward since we're using TensorFlow, everything we refer to and use will be tensors.For more on the mathematical difference between scalars, vectors and matrices see the [visual algebra post by Math is Fun](https://www.mathsisfun.com/algebra/scalar-vector-matrix.html). Creating Tensors with `tf.Variable()`You can also (although you likely rarely will, because often, when working with data, tensors are created for you automatically) create tensors using [`tf.Variable()`](https://www.tensorflow.org/api_docs/python/tf/Variable).The difference between `tf.Variable()` and `tf.constant()` is tensors created with `tf.constant()` are immutable (can't be changed, can only be used to create a new tensor), whereas tensors created with `tf.Variable()` are mutable (can be changed).
###Code
# Create the same tensor with tf.Variable() and tf.constant()
changeable_tensor = tf.Variable([10, 7])
unchangeable_tensor = tf.constant([10, 7])
changeable_tensor, unchangeable_tensor
###Output
_____no_output_____
###Markdown
Now let's try to change one of the elements of the changeable tensor.
###Code
# Will error (requires the .assign() method)
changeable_tensor[0] = 7
changeable_tensor
###Output
_____no_output_____
###Markdown
To change an element of a `tf.Variable()` tensor, use the `assign()` method.
###Code
# Won't error
changeable_tensor[0].assign(7)
changeable_tensor
###Output
_____no_output_____
###Markdown
Now let's try to change a value in a `tf.constant()` tensor.
###Code
# Will error (can't change tf.constant())
unchangeable_tensor[0].assign(7)
unchangeable_tensor
###Output
_____no_output_____
###Markdown
Which one should you use? `tf.constant()` or `tf.Variable()`?It will depend on what your problem requires. However, most of the time, TensorFlow will automatically choose for you (when loading data or modelling data). Creating random tensorsRandom tensors are tensors of some arbitrary size which contain random numbers.Why would you want to create random tensors? This is what neural networks use to initialize their weights (patterns) that they're trying to learn in the data.For example, the process of a neural network learning often involves taking a random n-dimensional array of numbers and refining them until they represent some kind of pattern (a compressed way to represent the original data).**How a network learns***A network learns by starting with random patterns (1) then going through demonstrative examples of data (2) whilst trying to update its random patterns to represent the examples (3).*We can create random tensors by using the [`tf.random.Generator`](https://www.tensorflow.org/guide/random_numbersthe_tfrandomgenerator_class) class.
###Code
# Create two random (but the same) tensors
random_1 = tf.random.Generator.from_seed(42) # set the seed for reproducibility
random_1 = random_1.normal(shape=(3, 2)) # create tensor from a normal distribution
random_2 = tf.random.Generator.from_seed(42)
random_2 = random_2.normal(shape=(3, 2))
# Are they equal?
random_1, random_2, random_1 == random_2
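# One-off random tensors can also be drawn directly, without a Generator
# object (a quick extra sketch; each takes a shape and samples from a
# uniform or normal distribution respectively)
tf.random.uniform(shape=(3, 2)), tf.random.normal(shape=(3, 2))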
###Output
_____no_output_____
###Markdown
The random tensors we've made are actually [pseudorandom numbers](https://www.computerhope.com/jargon/p/pseudo-random.htm) (they appear as random, but really aren't).If we set a seed we'll get the same random numbers (if you've ever used NumPy, this is similar to `np.random.seed(42)`). Setting the seed says, "hey, create some random numbers, but flavour them with X" (X is the seed).What do you think will happen when we change the seed?
###Code
# Create two random (and different) tensors
random_3 = tf.random.Generator.from_seed(42)
random_3 = random_3.normal(shape=(3, 2))
random_4 = tf.random.Generator.from_seed(11)
random_4 = random_4.normal(shape=(3, 2))
# Check the tensors and see if they are equal
random_3, random_4, random_1 == random_3, random_3 == random_4
###Output
_____no_output_____
###Markdown
What if you wanted to shuffle the order of a tensor?Wait, why would you want to do that?Let's say you're working with 15,000 images of cats and dogs, where the first 10,000 images were of cats and the next 5,000 were of dogs. This order could affect how a neural network learns (it may overfit by learning the order of the data); instead, it might be a good idea to move your data around.
###Code
# Shuffle a tensor (valuable for when you want to shuffle your data)
not_shuffled = tf.constant([[10, 7],
[3, 4],
[2, 5]])
# Gets different results each time
tf.random.shuffle(not_shuffled)
# Shuffle in the same order every time using the seed parameter (won't actually be the same)
tf.random.shuffle(not_shuffled, seed=42)
###Output
_____no_output_____
###Markdown
Wait... why didn't the numbers come out the same?It's due to rule 4 of the [`tf.random.set_seed()`](https://www.tensorflow.org/api_docs/python/tf/random/set_seed) documentation.> "4. If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence."`tf.random.set_seed(42)` sets the global seed, and the `seed` parameter in `tf.random.shuffle(seed=42)` sets the operation seed. This is because, as the documentation puts it, "Operations that rely on a random seed actually derive it from two seeds: the global and operation-level seeds."
###Code
# Shuffle in the same order every time
# Set the global random seed
tf.random.set_seed(42)
# Set the operation random seed
tf.random.shuffle(not_shuffled, seed=42)
# Set the global random seed
tf.random.set_seed(42) # if you comment this out you'll get different results
# Set the operation random seed
tf.random.shuffle(not_shuffled)
###Output
_____no_output_____
###Markdown
Other ways to make tensorsThough you might rarely use these (remember, many tensor operations are done behind the scenes for you), you can use [`tf.ones()`](https://www.tensorflow.org/api_docs/python/tf/ones) to create a tensor of all ones and [`tf.zeros()`](https://www.tensorflow.org/api_docs/python/tf/zeros) to create a tensor of all zeros.
###Code
# Make a tensor of all ones
tf.ones(shape=(3, 2))
# Make a tensor of all zeros
tf.zeros(shape=(3, 2))
###Output
_____no_output_____
###Markdown
You can also turn NumPy arrays into tensors.Remember, the main difference between tensors and NumPy arrays is that tensors can be run on GPUs.> 🔑 **Note:** A matrix or tensor is typically represented by a capital letter (e.g. `X` or `A`) whereas a vector is typically represented by a lowercase letter (e.g. `y` or `b`).
###Code
import numpy as np
numpy_A = np.arange(1, 25, dtype=np.int32) # create a NumPy array with values from 1 to 24 (np.arange excludes the stop value)
A = tf.constant(numpy_A,
shape=[2, 4, 3]) # note: the shape total (2*4*3) has to match the number of elements in the array
numpy_A, A
###Output
_____no_output_____
###Markdown
Getting information from tensors (shape, rank, size)There will be times when you'll want to get different pieces of information from your tensors, in particular, you should know the following tensor vocabulary:* **Shape:** The length (number of elements) of each of the dimensions of a tensor.* **Rank:** The number of tensor dimensions. A scalar has rank 0, a vector has rank 1, a matrix is rank 2, a tensor has rank n.* **Axis** or **Dimension:** A particular dimension of a tensor.* **Size:** The total number of items in the tensor.You'll use these especially when you're trying to line up the shapes of your data to the shapes of your model. For example, making sure the shape of your image tensors is the same as the shape of your model's input layer.We've already seen one of these before using the `ndim` attribute. Let's see the rest.
###Code
# Create a rank 4 tensor (4 dimensions)
rank_4_tensor = tf.zeros([2, 3, 4, 5])
rank_4_tensor
rank_4_tensor.shape, rank_4_tensor.ndim, tf.size(rank_4_tensor)
# Get various attributes of tensor
print("Datatype of every element:", rank_4_tensor.dtype)
print("Number of dimensions (rank):", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0])
print("Elements along last axis of tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (2*3*4*5):", tf.size(rank_4_tensor).numpy()) # .numpy() converts to NumPy array
###Output
Datatype of every element: <dtype: 'float32'>
Number of dimensions (rank): 4
Shape of tensor: (2, 3, 4, 5)
Elements along axis 0 of tensor: 2
Elements along last axis of tensor: 5
Total number of elements (2*3*4*5): 120
###Markdown
You can also index tensors just like Python lists.
###Code
# Get the first 2 items of each dimension
rank_4_tensor[:2, :2, :2, :2]
# Get the dimension from each index except for the final one
rank_4_tensor[:1, :1, :1, :]
# Create a rank 2 tensor (2 dimensions)
rank_2_tensor = tf.constant([[10, 7],
[3, 4]])
# Get the last item of each row
rank_2_tensor[:, -1]
###Output
_____no_output_____
###Markdown
You can also add dimensions to your tensor whilst keeping the same information present using `tf.newaxis`.
###Code
# Add an extra dimension (to the end)
rank_3_tensor = rank_2_tensor[..., tf.newaxis] # in Python "..." means "all dimensions prior to"
rank_2_tensor, rank_3_tensor # shape (2, 2), shape (2, 2, 1)
###Output
_____no_output_____
###Markdown
You can achieve the same using [`tf.expand_dims()`](https://www.tensorflow.org/api_docs/python/tf/expand_dims).
###Code
tf.expand_dims(rank_2_tensor, axis=-1) # "-1" means last axis
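# A quick extra sketch: axis=0 adds the new dimension at the front instead
tf.expand_dims(rank_2_tensor, axis=0) # shape (1, 2, 2)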
###Output
_____no_output_____
###Markdown
Manipulating tensors (tensor operations)Finding patterns in tensors (numerical representations of data) requires manipulating them.Again, when building models in TensorFlow, much of this pattern discovery is done for you. Basic operationsYou can perform many of the basic mathematical operations directly on tensors using Python operators such as `+`, `-` and `*`.
###Code
# You can add values to a tensor using the addition operator
tensor = tf.constant([[10, 7], [3, 4]])
tensor + 10
###Output
_____no_output_____
###Markdown
Since we used `tf.constant()`, the original tensor is unchanged (the addition gets done on a copy).
###Code
# Original tensor unchanged
tensor
###Output
_____no_output_____
###Markdown
Other operators also work.
###Code
# Multiplication (known as element-wise multiplication)
tensor * 10
# Subtraction
tensor - 10
###Output
_____no_output_____
###Markdown
You can also use the equivalent TensorFlow function. Using the TensorFlow function (where possible) has the advantage of being sped up later down the line when running as part of a [TensorFlow graph](https://www.tensorflow.org/tensorboard/graphs).
###Code
# Use the tensorflow function equivalent of the '*' (multiply) operator
tf.multiply(tensor, 10)
# The original tensor is still unchanged
tensor
###Output
_____no_output_____
###Markdown
Matrix multiplicationOne of the most common operations in machine learning algorithms is [matrix multiplication](https://www.mathsisfun.com/algebra/matrix-multiplying.html).TensorFlow implements this matrix multiplication functionality in the [`tf.matmul()`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul) method.The main two rules for matrix multiplication to remember are:1. The inner dimensions must match: * `(3, 5) @ (3, 5)` won't work * `(5, 3) @ (3, 5)` will work * `(3, 5) @ (5, 3)` will work2. The resulting matrix has the shape of the outer dimensions: * `(5, 3) @ (3, 5)` -> `(5, 5)` * `(3, 5) @ (5, 3)` -> `(3, 3)`> 🔑 **Note:** '`@`' in Python is the symbol for matrix multiplication.
###Code
# Matrix multiplication in TensorFlow
print(tensor)
tf.matmul(tensor, tensor)
# Matrix multiplication with Python operator '@'
tensor @ tensor
###Output
_____no_output_____
###Markdown
Both of these examples work because our `tensor` variable is of shape (2, 2).What if we created some tensors which had mismatched shapes?
###Code
# Create (3, 2) tensor
X = tf.constant([[1, 2],
[3, 4],
[5, 6]])
# Create another (3, 2) tensor
Y = tf.constant([[7, 8],
[9, 10],
[11, 12]])
X, Y
# Try to matrix multiply them (will error)
X @ Y
###Output
_____no_output_____
###Markdown
Trying to matrix multiply two tensors with the shape `(3, 2)` errors because the inner dimensions don't match.We need to either:* Reshape X to `(2, 3)` so it's `(2, 3) @ (3, 2)`.* Reshape Y to `(2, 3)` so it's `(3, 2) @ (2, 3)`.We can do this with either:* [`tf.reshape()`](https://www.tensorflow.org/api_docs/python/tf/reshape) - allows us to reshape a tensor into a defined shape.* [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) - switches the dimensions of a given tensor.Let's try `tf.reshape()` first.
###Code
# Example of reshape (3, 2) -> (2, 3)
tf.reshape(Y, shape=(2, 3))
# Try matrix multiplication with reshaped Y
X @ tf.reshape(Y, shape=(2, 3))
###Output
_____no_output_____
###Markdown
It worked! Let's try the same with `X`, except this time we'll use [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) and `tf.matmul()`.
###Code
# Example of transpose (3, 2) -> (2, 3)
tf.transpose(X)
# Try matrix multiplication
tf.matmul(tf.transpose(X), Y)
# You can achieve the same result with parameters
tf.matmul(a=X, b=Y, transpose_a=True, transpose_b=False)
###Output
_____no_output_____
###Markdown
Notice the difference in the resulting shapes when transposing `X` or reshaping `Y`.This is because of the 2nd rule mentioned above: * `(2, 3) @ (3, 2)` -> `(2, 2)` done with `tf.matmul(tf.transpose(X), Y)` * `(3, 2) @ (2, 3)` -> `(3, 3)` done with `X @ tf.reshape(Y, shape=(2, 3))`This kind of data manipulation is a reminder: you'll spend a lot of your time in machine learning and working with neural networks reshaping data (in the form of tensors) to prepare it to be used with various operations (such as feeding it to a model). The dot productMultiplying matrices by each other is also referred to as the dot product.You can perform the `tf.matmul()` operation using [`tf.tensordot()`](https://www.tensorflow.org/api_docs/python/tf/tensordot).
###Code
# Perform the dot product on X and Y (requires X to be transposed)
tf.tensordot(tf.transpose(X), Y, axes=1)
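# Note: axes=1 contracts the last axis of the first tensor with the first
# axis of the second, so for 2-D tensors this matches tf.matmul()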
###Output
_____no_output_____
###Markdown
You might notice that although both `reshape` and `transpose` work, you get different results when using each.Let's see an example, first with `tf.transpose()` then with `tf.reshape()`.
###Code
# Perform matrix multiplication between X and Y (transposed)
tf.matmul(X, tf.transpose(Y))
# Perform matrix multiplication between X and Y (reshaped)
tf.matmul(X, tf.reshape(Y, (2, 3)))
###Output
_____no_output_____
###Markdown
Hmm... they result in different values.Which is strange because when dealing with `Y` (a `(3x2)` matrix), reshaping to `(2, 3)` and transposing it result in the same shape.
###Code
# Check shapes of Y, reshaped Y and tranposed Y
Y.shape, tf.reshape(Y, (2, 3)).shape, tf.transpose(Y).shape
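# A quick sketch of the perm parameter discussed below: it states the new
# axis order explicitly, so perm=[1, 0] swaps the two axes of Y
tf.transpose(Y, perm=[1, 0]).shape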
###Output
_____no_output_____
###Markdown
But calling `tf.reshape()` and `tf.transpose()` on `Y` doesn't necessarily result in the same values.
###Code
# Check values of Y, reshape Y and tranposed Y
print("Normal Y:")
print(Y, "\n") # "\n" for newline
print("Y reshaped to (2, 3):")
print(tf.reshape(Y, (2, 3)), "\n")
print("Y transposed:")
print(tf.transpose(Y))
###Output
Normal Y:
tf.Tensor(
[[ 7 8]
[ 9 10]
[11 12]], shape=(3, 2), dtype=int32)
Y reshaped to (2, 3):
tf.Tensor(
[[ 7 8 9]
[10 11 12]], shape=(2, 3), dtype=int32)
Y transposed:
tf.Tensor(
[[ 7 9 11]
[ 8 10 12]], shape=(2, 3), dtype=int32)
###Markdown
As you can see, the outputs of `tf.reshape()` and `tf.transpose()` when called on `Y`, even though they have the same shape, are different.This can be explained by the default behaviour of each method:* [`tf.reshape()`](https://www.tensorflow.org/api_docs/python/tf/reshape) - change the shape of the given tensor (first) and then insert values in the order they appear (in our case, 7, 8, 9, 10, 11, 12).* [`tf.transpose()`](https://www.tensorflow.org/api_docs/python/tf/transpose) - swap the order of the axes, by default the last axis becomes the first, however the order can be changed using the [`perm` parameter](https://www.tensorflow.org/api_docs/python/tf/transpose). So which should you use?Again, most of the time these operations (when they need to be run, such as during the training of a neural network) will be implemented for you.But generally, whenever performing a matrix multiplication and the shapes of two matrices don't line up, you will transpose (not reshape) one of them in order to line them up. Matrix multiplication tidbits* If we transposed `Y`, it would be represented as $\mathbf{Y}^\mathsf{T}$ (note the capital T for transpose).* Get an illustrative view of matrix multiplication [by Math is Fun](https://www.mathsisfun.com/algebra/matrix-multiplying.html).* Try a hands-on demo of matrix multiplication: http://matrixmultiplication.xyz/ (shown below). Changing the datatype of a tensorSometimes you'll want to alter the default datatype of your tensor. This is common when you want to compute using less precision (e.g. 16-bit floating point numbers vs. 32-bit floating point numbers). Computing with less precision is useful on devices with less computing capacity such as mobile devices (because the fewer bits, the less space the computations require).You can change the datatype of a tensor using [`tf.cast()`](https://www.tensorflow.org/api_docs/python/tf/cast).
###Code
# Create a new tensor with default datatype (float32)
B = tf.constant([1.7, 7.4])
# Create a new tensor with default datatype (int32)
C = tf.constant([1, 7])
B, C
# Change from float32 to float16 (reduced precision)
B = tf.cast(B, dtype=tf.float16)
B
# Change from int32 to float32
C = tf.cast(C, dtype=tf.float32)
C
###Output
_____no_output_____
###Markdown
Getting the absolute valueSometimes you'll want the absolute values (all values are positive) of elements in your tensors.To do so, you can use [`tf.abs()`](https://www.tensorflow.org/api_docs/python/tf/math/abs).
###Code
# Create tensor with negative values
D = tf.constant([-7, -10])
D
# Get the absolute values
tf.abs(D)
###Output
_____no_output_____
###Markdown
Finding the min, max, mean, sum (aggregation)You can quickly aggregate (perform a calculation on a whole tensor) tensors to find things like the minimum value, maximum value, mean and sum of all the elements.To do so, aggregation methods typically have the syntax `reduce_[action]()`, such as:* [`tf.reduce_min()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_min) - find the minimum value in a tensor.* [`tf.reduce_max()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max) - find the maximum value in a tensor (helpful for when you want to find the highest prediction probability).* [`tf.reduce_mean()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean) - find the mean of all elements in a tensor.* [`tf.reduce_sum()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) - find the sum of all elements in a tensor.* **Note:** typically, each of these is under the `math` module, e.g. `tf.math.reduce_min()` but you can use the alias `tf.reduce_min()`.Let's see them in action.
###Code
# Create a tensor with 50 random values between 0 and 100
E = tf.constant(np.random.randint(low=0, high=100, size=50))
E
# Find the minimum
tf.reduce_min(E)
# Find the maximum
tf.reduce_max(E)
# Find the mean
tf.reduce_mean(E)
# Find the sum
tf.reduce_sum(E)
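# A sketch of the related aggregations mentioned below -- note these live
# under tf.math and need float inputs, hence the cast
tf.math.reduce_std(tf.cast(E, dtype=tf.float32)), tf.math.reduce_variance(tf.cast(E, dtype=tf.float32))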
###Output
_____no_output_____
###Markdown
You can also find the standard deviation ([`tf.math.reduce_std()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_std)) and variance ([`tf.math.reduce_variance()`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_variance)) of elements in a tensor using similar methods (note: both expect float inputs). Finding the positional maximum and minimumHow about finding the position in a tensor where the maximum value occurs?This is helpful when you want to line up your labels (say `['Green', 'Blue', 'Red']`) with your prediction probabilities tensor (e.g. `[0.98, 0.01, 0.01]`).In this case, the predicted label (the one with the highest prediction probability) would be `'Green'`.You can do the same for the minimum (if required) with the following:* [`tf.argmax()`](https://www.tensorflow.org/api_docs/python/tf/math/argmax) - find the position of the maximum element in a given tensor.* [`tf.argmin()`](https://www.tensorflow.org/api_docs/python/tf/math/argmin) - find the position of the minimum element in a given tensor.
###Code
# Create a tensor with 50 values between 0 and 1
F = tf.constant(np.random.random(50))
F
# Find the maximum element position of F
tf.argmax(F)
# Find the minimum element position of F
tf.argmin(F)
# Find the maximum element position of F
print(f"The maximum value of F is at position: {tf.argmax(F).numpy()}")
print(f"The maximum value of F is: {tf.reduce_max(F).numpy()}")
print(f"Using tf.argmax() to index F, the maximum value of F is: {F[tf.argmax(F)].numpy()}")
print(f"Are the two max values the same (they should be)? {F[tf.argmax(F)].numpy() == tf.reduce_max(F).numpy()}")
###Output
The maximum value of F is at position: 16
The maximum value of F is: 0.9999594897376615
Using tf.argmax() to index F, the maximum value of F is: 0.9999594897376615
Are the two max values the same (they should be)? True
###Markdown
Squeezing a tensor (removing all single dimensions)If you need to remove single-dimensions from a tensor (dimensions with size 1), you can use `tf.squeeze()`.* [`tf.squeeze()`](https://www.tensorflow.org/api_docs/python/tf/squeeze) - remove all dimensions of 1 from a tensor.
###Code
# Create a rank 5 (5 dimensions) tensor of 50 numbers between 0 and 100
G = tf.constant(np.random.randint(0, 100, 50), shape=(1, 1, 1, 1, 50))
G.shape, G.ndim
# Squeeze tensor G (remove all 1 dimensions)
G_squeezed = tf.squeeze(G)
G_squeezed.shape, G_squeezed.ndim
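# A quick extra sketch: passing axis removes only that size-1 dimension
tf.squeeze(G, axis=0).shape # (1, 1, 1, 50)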
###Output
_____no_output_____
###Markdown
One-hot encodingIf you have a tensor of indices and would like to one-hot encode it, you can use [`tf.one_hot()`](https://www.tensorflow.org/api_docs/python/tf/one_hot).You should also specify the `depth` parameter (the level to which you want to one-hot encode).
###Code
# Create a list of indices
some_list = [0, 1, 2, 3]
# One hot encode them
tf.one_hot(some_list, depth=4)
###Output
_____no_output_____
###Markdown
You can also specify values for `on_value` and `off_value` instead of the default `0` and `1`.
###Code
# Specify custom values for on and off encoding
tf.one_hot(some_list, depth=4, on_value="We're live!", off_value="Offline")
###Output
_____no_output_____
###Markdown
Squaring, log, square rootMany other common mathematical operations you'd like to perform at some stage probably exist.Let's take a look at:* [`tf.square()`](https://www.tensorflow.org/api_docs/python/tf/math/square) - get the square of every value in a tensor. * [`tf.sqrt()`](https://www.tensorflow.org/api_docs/python/tf/math/sqrt) - get the square root of every value in a tensor (**note:** the elements need to be floats or this will error).* [`tf.math.log()`](https://www.tensorflow.org/api_docs/python/tf/math/log) - get the natural log of every value in a tensor (elements need to be floats).
###Code
# Create a new tensor
H = tf.constant(np.arange(1, 10))
H
# Square it
tf.square(H)
# Find the square root (will error because the elements aren't floats)
tf.sqrt(H)
# Change H to float32
H = tf.cast(H, dtype=tf.float32)
H
# Find the square root
tf.sqrt(H)
# Find the log (input also needs to be float)
tf.math.log(H)
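# A quick extra sketch: tf.exp() is the inverse of tf.math.log(), so this
# recovers H (up to floating point precision)
tf.exp(tf.math.log(H))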
###Output
_____no_output_____
###Markdown
Manipulating `tf.Variable` tensorsTensors created with `tf.Variable()` can be changed in place using methods such as:* [`.assign()`](https://www.tensorflow.org/api_docs/python/tf/Variableassign) - assign a different value to a particular index of a variable tensor.* [`.assign_add()`](https://www.tensorflow.org/api_docs/python/tf/Variableassign_add) - add to an existing value and reassign it at a particular index of a variable tensor.
###Code
# Create a variable tensor
I = tf.Variable(np.arange(0, 5))
I
# Assign the final value a new value of 50
I.assign([0, 1, 2, 3, 50])
# The change happens in place (the last value is now 50, not 4)
I
# Add 10 to every element in I
I.assign_add([10, 10, 10, 10, 10])
# Again, the change happens in place
I
###Output
_____no_output_____
###Markdown
Tensors and NumPyWe've seen some examples of how tensors interact with NumPy arrays, such as using NumPy arrays to create tensors. Tensors can also be converted to NumPy arrays using:* `np.array()` - pass a tensor to convert to an ndarray (NumPy's main datatype).* `tensor.numpy()` - call on a tensor to convert to an ndarray.Doing this is helpful as it makes tensors iterable and allows us to use any of NumPy's methods on them.
###Code
# Create a tensor from a NumPy array
J = tf.constant(np.array([3., 7., 10.]))
J
# Convert tensor J to NumPy with np.array()
np.array(J), type(np.array(J))
# Convert tensor J to NumPy with .numpy()
J.numpy(), type(J.numpy())
###Output
_____no_output_____
###Markdown
By default tensors have `dtype=float32`, whereas NumPy arrays have `dtype=float64`.This is because neural networks (which are usually built with TensorFlow) can generally work very well with less precision (32-bit rather than 64-bit).
###Code
# Create a tensor from NumPy and from an array
numpy_J = tf.constant(np.array([3., 7., 10.])) # will be float64 (due to NumPy)
tensor_J = tf.constant([3., 7., 10.]) # will be float32 (due to being TensorFlow default)
numpy_J.dtype, tensor_J.dtype
###Output
_____no_output_____
###Markdown
Using `@tf.function`In your TensorFlow adventures, you might come across Python functions which have the decorator [`@tf.function`](https://www.tensorflow.org/api_docs/python/tf/function).If you aren't sure what Python decorators do, [read RealPython's guide on them](https://realpython.com/primer-on-python-decorators/).But in short, decorators modify a function in one way or another.In the `@tf.function` decorator case, it turns a Python function into a callable TensorFlow graph, which is a fancy way of saying: if you've written your own Python function, and you decorate it with `@tf.function`, when you export your code (to potentially run on another device), TensorFlow will attempt to convert it into a fast(er) version of itself (by making it part of a computation graph).For more on this, read the [Better performance with tf.function](https://www.tensorflow.org/guide/function) guide.
###Code
# Create a simple function
def function(x, y):
return x ** 2 + y
x = tf.constant(np.arange(0, 10))
y = tf.constant(np.arange(10, 20))
function(x, y)
# Create the same function and decorate it with tf.function
@tf.function
def tf_function(x, y):
return x ** 2 + y
tf_function(x, y)
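# A rough timing sketch using Python's timeit module -- any speed-up from
# tf.function depends heavily on the function and hardware, so treat the
# numbers as illustrative only
import timeit
print("plain Python:", timeit.timeit(lambda: function(x, y), number=1000))
print("tf.function: ", timeit.timeit(lambda: tf_function(x, y), number=1000))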
###Output
_____no_output_____
###Markdown
If you noticed no difference between the above two functions (the decorated one and the non-decorated one) you'd be right.Much of the difference happens behind the scenes. One of the main ones being potential code speed-ups where possible. Finding access to GPUsWe've mentioned GPUs plenty of times throughout this notebook.So how do you check if you've got one available?You can check if you've got access to a GPU using [`tf.config.list_physical_devices()`](https://www.tensorflow.org/guide/gpu).
###Code
print(tf.config.list_physical_devices('GPU'))
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
If the above outputs an empty array (or nothing), it means you don't have access to a GPU (or at least TensorFlow can't find it).If you're running in Google Colab, you can access a GPU by going to *Runtime -> Change Runtime Type -> Select GPU* (**note:** after doing this your notebook will restart and any variables you've saved will be lost).Once you've changed your runtime type, run the cell below.
###Code
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
If you've got access to a GPU, the cell above should output something like:`[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]`You can also find information about your GPU using `!nvidia-smi`.
###Code
!nvidia-smi
###Output
Thu Nov 26 00:41:59 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 75C P0 33W / 70W | 229MiB / 15079MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
|
notebooks/development/preprocessing/smooth grid.ipynb | ###Markdown
%matplotlib notebook
plt.close()
grd.mask_rho.plot()
plt.show()
###Code
RoughMat_old = bathy_smoother.bathy_tools.RoughnessMatrix(grd.h.values,grd.mask_rho.values)
print('Max Roughness value is: ', RoughMat_old.max())
Area = 1/(grd.pm.values*grd.pn.values)
mask = grd.mask_rho.values
mask[vostock[0]:vostock[1],vostock[2]:vostock[3]] = 0
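# Smooth the bathymetry towards a target roughness factor (rx0) of 0.3,
# weighting by cell area, for at most 150 iterations -- the parameter
# meanings here are assumptions read off the call signature, not the docs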
RetBathy, HmodifVal, ValueFct,eta,xi = smoothing_PlusMinus_rx0(mask,grd.h.values,0.3,Area,150)
RoughMat = bathy_smoother.bathy_tools.RoughnessMatrix(RetBathy,grd.mask_rho.values)
print('Max Roughness value is: ', RoughMat.max())
%matplotlib notebook
plt.close()
plt.pcolormesh(RoughMat-RoughMat_old,vmin=-0.1,vmax=0.1)
plt.colorbar()
plt.show()
bed = RetBathy.copy()
ice = grd.zice.values.copy()
#set bed minimum depth to 10 cm (remember which points were shallower first,
#since the clamp below would otherwise make the ice-draft update a no-op)
shallow = bed < 0.1
bed[shallow] = 0.1
#set ice draft at these places to zero
ice[shallow] = 0.0
#set water column thickness to a small positive value (ROMS doesn't like it when bed = ice draft)
wct = (bed+ice).copy()
ice[wct<=0] = -bed[wct<=0] + 0.1
grd.h.values = bed
grd.zice.values = ice
mask = np.ones_like(bed)
mask[(wct<20.0)]=0
mask[grd.mask_rho==0]=0
umask,vmask,pmask = uvp_masks(mask)
grd.mask_rho.values = mask
grd.mask_u.values = umask
grd.mask_v.values = vmask
grd.mask_psi.values = pmask
plt.close()
(grd.h+grd.zice).plot(size=10)
plt.show()
print("write smoothed bathy to ",out_path)
grd.to_netcdf(out_path)
###Output
write smoothed bathy to /home/ubuntu/bigStick/tidal_melting/data/preprocessing/processed/waom10_grd.nc
###Markdown
!jupyter nbconvert --to script smooth\ grid.ipynb
###Code
path = os.path.join(os.environ.get('prodir'),'waom4_grd.nc')
grd = xr.open_dataset(path)
plt.close()
(grd.h+grd.zice).where(grd.mask_rho).plot()
plt.show()
RoughMat = bathy_smoother.bathy_tools.RoughnessMatrix(grd.h.values,grd.mask_rho.values)
print('Max Roughness value is: ', RoughMat.max())
###Output
_____no_output_____ |
useful_codes/functions_basics.ipynb | ###Markdown
if, elif, else
###Code
# Define variables
room = "kit"
area = 14.0
# if statement for room
if room == "kit" :
print("looking around in the kitchen.")
# if statement for area
if area > 15 :
print('big place!')
# if-else construct for room
if room == "kit" :
print("looking around in the kitchen.")
else :
print("looking around elsewhere.")
# if-else construct for area
if area > 15 :
print("big place!")
else :
print('pretty small.')
# if-elif-else construct for room
if room == "kit" :
print("looking around in the kitchen.")
elif room == "bed":
print("looking around in the bedroom.")
else :
print("looking around elsewhere.")
# if-elif-else construct for area
if area > 15 :
print("big place!")
elif area > 10 :
print('medium size, nice!')
else :
print("pretty small.")
###Output
looking around in the kitchen.
medium size, nice!
###Markdown
while loop
###Code
# Initialize offset
offset = 8
# Code the while loop
while offset != 0 :
print('correcting...')
offset = offset - 1
print(offset)
# Initialize offset
offset = -6
# Code the while loop
while offset != 0 :
print("correcting...")
if offset > 0 :
offset = offset - 1
else :
offset = offset + 1
print(offset)
###Output
_____no_output_____
###Markdown
for loop
###Code
'''
Basic formulation:
for var in seq :
expression
What's interesting is that the name 'var' is arbitrary.
'seq' must be an object.
By its design, the for loop recognizes what an "element" is in 'seq'.
This allows us to write very simple instructions.
'''
# areas list
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Code the for loop
for area in areas :
print(area)
# areas list
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Change for loop to use enumerate() and update print()
for index, area in enumerate(areas) :
print("room", str(index), ":", str(area))
# areas list
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Code the for loop with 'room x' starting at 'room 1'
for index, area in enumerate(areas) :
print("room " + str(index + 1) + ": " + str(area))
###Output
room 1: 11.25
room 2: 18.0
room 3: 20.0
room 4: 10.75
room 5: 9.5
###Markdown
Looping over lists
###Code
# house list of lists
house = [["hallway", 11.25],
["kitchen", 18.0],
["living room", 20.0],
["bedroom", 10.75],
["bathroom", 9.50]]
# Build a for loop from scratch
for place, area in house:
print('the', place, 'is', str(area), 'sqm')
###Output
the hallway is 11.25 sqm
the kitchen is 18.0 sqm
the living room is 20.0 sqm
the bedroom is 10.75 sqm
the bathroom is 9.5 sqm
###Markdown
Looping over dictionaries
###Code
# Definition of dictionary
europe = {'spain':'madrid', 'france':'paris', 'germany':'berlin',
'norway':'oslo', 'italy':'rome', 'poland':'warsaw', 'austria':'vienna' }
# Iterate over europe
for key, value in europe.items() : # remember to use this method
print('the capital of', key, 'is', value)
###Output
the capital of spain is madrid
the capital of france is paris
the capital of germany is berlin
the capital of norway is oslo
the capital of italy is rome
the capital of poland is warsaw
the capital of austria is vienna
###Markdown
Looping over numpy arrays
###Code
import numpy as np

countries = ['United States', 'Australia', 'Japan', 'India', 'Russia', 'Morocco', 'Egypt']
np_countries = np.array(countries)
for country in np_countries :
print('this country is', country)
for country in np_countries :
print(country)
for country in np.nditer(np_countries) :
print('this country is', country)
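# np.nditer is mainly useful for multi-dimensional arrays, where a plain
# for loop would yield whole rows rather than single elements
np_2d = np.array([[1, 2], [3, 4]])
for val in np.nditer(np_2d) :
    print(val)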
###Output
this country is United States
this country is Australia
this country is Japan
this country is India
this country is Russia
this country is Morocco
this country is Egypt
|
tests/spark parameter test.ipynb | ###Markdown
Papermill parameter test spark notebook
###Code
val newline = """new
line"""
val quote = "'single quoting' \"double quote\""
val flt = 3.123
val num = 1234
val boolean = false
def f[T](v: T) = v
println(s"newline: $newline, type: " + newline.getClass())
println(s"quote: $quote, type: " + quote.getClass())
println(s"flt: $flt, type: " + flt.getClass())
println(s"num: $num, type: " + num.getClass())
println(s"boolean: $boolean, type: " + boolean.getClass())
%local
import papermill as pm
pm.inspect_notebook('spark parameter test.ipynb')
###Output
Translator for 'scala' language does not support parameter introspection.
|
notebook/bithumb_data_provider_test.ipynb | ###Markdown
Check that the current directory is set to the smtm project root. Create the output folder where the analysis result files will be stored.
###Code
print("ํ์ฌ ๋๋ ํ ๋ฆฌ " , os.getcwd())
from smtm import BithumbDataProvider
dp = BithumbDataProvider()
dp.get_info()
###Output
_____no_output_____ |
Python_break_and_continue.ipynb | ###Markdown
**Python break and continue** **1. What is the use of break and continue in Python?**- In Python, **break and continue** statements can alter the flow of a normal loop.- Loops iterate over a block of code until the test expression is false, but sometimes we wish to terminate the current iteration or even the whole loop without checking the test expression.- The **break and continue** statements are used in these cases. **2. Python break statement**- The break statement terminates the loop containing it. Control of the program flows to the statement immediately after the body of the loop.- If the break statement is inside a nested loop (a loop inside another loop), the break statement will terminate the innermost loop (see the small extra sketch at the end of the next code cell). **Syntax of break**- *break* **Flowchart of break**  The working of the break statement in for and while loops is shown below.  **Example: Python break**
###Code
# Use of break statement inside the loop
for val in "string":
if val == "i":
break
print(val)
print("The end")
###Output
s
t
r
The end
###Markdown
- In this program, we iterate through the "string" sequence. We check if the letter is i, upon which we break from the loop. Hence, we see in our output that all the letters up till i get printed. After that, the loop terminates. **3. Python continue statement**- The continue statement is used to skip the rest of the code inside a loop for the current iteration only. The loop does not terminate but continues on with the next iteration. **Syntax of Continue**- *continue* **Flowchart of continue**  The working of the continue statement in for and while loops is shown below.  **Example: Python continue**
###Code
# Program to show the use of continue statement inside loops
for val in "string":
if val == "i":
continue
print(val)
print("The end")
###Output
s
t
r
n
g
The end
|
scripts/notebooks/Reliability Diagrams of Dirichlet (Figure 1 and Supp. Figure 12).ipynb | ###Markdown
Calibration - Reliability Diagrams of Dirichlet. Generate reliability diagrams for the Dirichlet paper.1. Models need to be trained and tuned for the calibrators, i.e. Dir-ODIR (dir_l2_mu_off); for that, read the ReadMe.txt in the scripts folder.2. Get only the best tunings as a separate folder (TODO: find the script for that).3. Run this notebook.
###Code
import sys
from os import path
sys.path.append( path.dirname( path.dirname( path.abspath("calibration") ) ) )
import numpy as np
import pandas as pd
from os.path import join
from calibration.cal_methods import evaluate, cal_results, TemperatureScaling, Dirichlet_NN
from dirichlet import FullDirichletCalibrator
import pickle
from tune_dirichlet_nn_slim import kf_model
from utility.unpickle_probs import unpickle_probs
from utility.evaluation import softmax, get_bin_info, ECE
from sklearn.metrics import log_loss
import os
from matplotlib import pyplot as plt
import seaborn as sns
from IPython.display import display
import glob
from sklearn.preprocessing import label_binarize
###Output
_____no_output_____
###Markdown
Get the best Lambda for each model Path to logits and tuning
###Code
PATH_models_l2_mu_off = join("..", "..", "model_weights", "models_best_dir_l2_mu_off")
## NB! The folder already contains weights with the optimal parameters.
PATH = join('..', '..', 'logits')
files_10 = ('probs_resnet_wide32_c10_logits.p', 'probs_densenet40_c10_logits.p',
'probs_lenet5_c10_logits.p', 'probs_resnet110_SD_c10_logits.p',
'probs_resnet110_c10_logits.p', 'probs_resnet152_SD_SVHN_logits.p')
files_100 = ('probs_resnet_wide32_c100_logits.p', 'probs_densenet40_c100_logits.p',
'probs_lenet5_c100_logits.p', 'probs_resnet110_SD_c100_logits.p',
'probs_resnet110_c100_logits.p')
###Output
_____no_output_____
###Markdown
Load in models
###Code
def get_weights_dir(path, ext = ".p"):
file_path = join(path, "*" + ext)
files = glob.glob(file_path)
dict_weights = {}
dict_params = {}
for fname in files:
with open(fname, "rb") as f:
models, (name, l2, mu) = pickle.load(f)
weights = []
for w in models:
w = np.hstack([w[0].T, w[1].reshape(-1,1)])
weights.append(w)
dict_weights[name] = np.array(weights)
dict_params[name] = [l2, mu]
return (dict_weights, dict_params)
weights_params_l2_mu_off = get_weights_dir(PATH_models_l2_mu_off)
###Output
_____no_output_____
###Markdown
Reliability Diagrams Compute Accuracy and Confidence of a Bin
###Code
def get_bin_info2(probs, true, bin_size = 0.1, ece_full = False, normalize = False, k = -1):
probs = np.array(probs)
true = np.array(true)
if len(true.shape) == 2 and true.shape[1] > 1:
true = true.argmax(axis=1).reshape(-1, 1)
if k == -1:
if ece_full:
pred, conf, true = get_preds_all(probs, true, normalize=normalize, flatten=ece_full)
else:
pred = np.argmax(probs, axis=1) # Take maximum confidence as prediction
if normalize:
conf = np.max(probs, axis=1)/np.sum(probs, axis=1)
# Check if everything below or equal to 1?
else:
conf = np.max(probs, axis=1) # Take only maximum confidence
else:
pred, conf, true = get_preds_k(probs, true, k)
# get predictions, confidences and true labels for all classes
upper_bounds = np.arange(bin_size, 1+bin_size, bin_size) # Get bounds of bins
n = len(conf)
ece = 0 # Starting error
accuracies = []
confidences = []
bin_lengths = []
for conf_thresh in upper_bounds: # Go through bounds and find accuracies and confidences
acc, avg_conf, len_bin = compute_acc_bin(conf_thresh-bin_size, conf_thresh, conf, pred, true, ece_full)
accuracies.append(acc)
confidences.append(avg_conf)
bin_lengths.append(len_bin)
return np.array(accuracies), np.array(confidences), np.array(bin_lengths)
def compute_acc_bin(conf_thresh_lower, conf_thresh_upper, conf, pred, true, ece_full = False):
"""
# Computes accuracy and average confidence for bin
Args:
conf_thresh_lower (float): Lower Threshold of confidence interval
conf_thresh_upper (float): Upper Threshold of confidence interval
conf (numpy.ndarray): list of confidences
pred (numpy.ndarray): list of predictions
true (numpy.ndarray): list of true labels
ece_full (bool): whether pred/true are in the flattened one-vs-rest format produced by get_preds_all
Returns:
(accuracy, avg_conf, len_bin): accuracy of bin, confidence of bin and number of elements in bin.
"""
filtered_tuples = [x for x in zip(pred, true, conf) if (x[2] > conf_thresh_lower or conf_thresh_lower == 0) and x[2] <= conf_thresh_upper]
if len(filtered_tuples) < 1:
return 0,0,0
else:
if ece_full:
    len_bin = len(filtered_tuples)  # How many elements fall into the given bin
    avg_conf = sum([x[2] for x in filtered_tuples])/len_bin  # Avg confidence of the bin
    accuracy = np.mean([x[1] for x in filtered_tuples])  # Observed frequency (mean of the one-hot targets)
else:
    correct = len([x for x in filtered_tuples if x[0] == x[1]])  # How many labels are correct
    len_bin = len(filtered_tuples)  # How many elements fall into the given bin
    avg_conf = sum([x[2] for x in filtered_tuples]) / len_bin  # Avg confidence of the bin
    accuracy = float(correct)/len_bin  # Accuracy of the bin
return accuracy, avg_conf, len_bin
###Output
_____no_output_____
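###Markdown
A quick illustrative check of the binning helpers above on toy data (not from the paper; purely a shape/behaviour demo):
###Code
import numpy as np
toy_probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
toy_true = np.array([0, 1, 1])
accs, confs, lens = get_bin_info2(toy_probs, toy_true, bin_size=0.25)
print(accs, confs, lens)  # one entry per confidence bin
###Output
_____no_output_____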
###Markdown
Older helper functions, kept here alongside the new ones.
###Code
from sklearn.preprocessing import OneHotEncoder
def get_preds_all(y_probs, y_true, axis = 1, normalize = False, flatten = True):
y_preds = np.argmax(y_probs, axis=axis) # Take maximum confidence as prediction
y_preds = y_preds.reshape(-1, 1)
if normalize:
y_probs /= np.sum(y_probs, axis=axis)
enc = OneHotEncoder(handle_unknown='ignore', sparse=False)
enc.fit(y_preds)
y_preds = enc.transform(y_preds)
y_true = enc.transform(y_true)
if flatten:
y_preds = y_preds.flatten()
y_true = y_true.flatten()
y_probs = y_probs.flatten()
return y_preds, y_probs, y_true
def get_preds_k(y_probs, y_true, k, axis = 1):
y_probs = y_probs[:, k]  # Take the probability of class k for every example
y_preds = np.repeat(k, len(y_probs))  # Every example is "predicted" as class k
return y_preds, y_probs, y_true
def get_uncalibrated_res(path, file, M = 15, method = np.max, k = -1):
bin_size = 1/M
FILE_PATH = join(path, file)
(y_logits_val, y_val), (y_logits_test, y_test) = unpickle_probs(FILE_PATH)
y_probs_val = softmax(y_logits_val)
y_probs_test = softmax(y_logits_test)
res_test = get_bin_info2(y_probs_test, y_test, bin_size = bin_size, k = k)
res_val = get_bin_info2(y_probs_val, y_val, bin_size = bin_size, k = k)
return (res_test, res_val)
def cal_res(method, path, file, M = 15, m_kwargs = {}, k = -1):
bin_size = 1/M
FILE_PATH = join(path, file)
(y_logits_val, y_val), (y_logits_test, y_test) = unpickle_probs(FILE_PATH)
y_probs_val = softmax(y_logits_val) # Softmax logits
y_probs_test = softmax(y_logits_test)
model = method(**m_kwargs)
model.fit(y_logits_val, y_val)
y_probs_val = model.predict(y_logits_val)
y_probs_test = model.predict(y_logits_test)
accs_val, confs_val, len_bins_val = get_bin_info2(y_probs_val, y_val, bin_size = bin_size, k = k)
accs_test, confs_test, len_bins_test = get_bin_info2(y_probs_test, y_test, bin_size = bin_size, k = k)
return (accs_test, confs_test, len_bins_test), (accs_val, confs_val, len_bins_val)
def get_preds(x, w):
return softmax(np.log(clip_for_log(x)) @ w[:, :-1].T + w[:, -1])
def clip_for_log(X):
eps = np.finfo(float).eps
return np.clip(X, eps, 1-eps)
def get_dir_results(file, weights_model, M = 15, k = -1):
bin_size = 1/M
FILE_PATH = join(PATH, file)
(logits_val, y_val), (logits_test, y_test) = unpickle_probs(FILE_PATH)
probs_val = softmax(logits_val) # Softmax logits
probs_test = softmax(logits_test)
results_val = []
results_test = []
for w in weights_model:
results_val.append(get_preds(probs_val, w))
results_test.append(get_preds(probs_test, w))
results_val_mean = np.mean(results_val, axis=0)
results_test_mean = np.mean(results_test, axis=0)
accs_val, confs_val, len_bins_val = get_bin_info2(results_val_mean, y_val, bin_size = bin_size, k = k)
accs_test, confs_test, len_bins_test = get_bin_info2(results_test_mean, y_test, bin_size = bin_size, k = k)
return (accs_test, confs_test, len_bins_test), (accs_val, confs_val, len_bins_val)
def plot_bin_importance(values, ax, M = 15, name = "Importance of Bins", xname = "Confidence", yname=""):
"""Plot that shows how much each confidence interval adds to final ECE"""
bin_size = 1/M
positions = np.arange(0+bin_size/2, 1+bin_size/2, bin_size)
ax.bar(positions, values, width = bin_size, edgecolor = "black", color = "blue", label="Outputs", zorder = 3)
ax.set_aspect('equal')
#ax.plot([0,1], [0,1], linestyle = "--")
ax.set_xlim(0,1)
ax.set_ylim(0,1)
ax.set_title(name, fontsize=24)
ax.set_xlabel(xname, fontsize=22, color = "black")
ax.set_ylabel(yname, fontsize=22, color = "black")
def get_errors(bin_info):
# Get error_standardized and ECE
error = np.abs(np.array(bin_info[0]) - np.array(bin_info[1]))
lens = bin_info[2]
n = sum(lens)
error_norm = error*lens/n
error_standard = error_norm/sum(error_norm)
ECE = sum(error_norm)
return error_standard, ECE
###Output
_____no_output_____
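###Markdown
For reference, the quantity returned by `get_errors` above is the standard expected calibration error over $M$ bins, $\mathrm{ECE} = \sum_{b=1}^{M} \frac{n_b}{N}\,\lvert \mathrm{acc}_b - \mathrm{conf}_b \rvert$, where $n_b$ is the number of predictions falling in bin $b$ and $N = \sum_b n_b$; `error_standard` is the same per-bin contribution normalized to sum to one.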
###Markdown
Generate lineplots
###Code
def get_accs_confs(k, weights_all = weights_params_l2_mu_off, file = files_10[0], M = 15, v = 0):
bin_info_uncal = get_uncalibrated_res(PATH, file, M=M, k=k)
accs_confs = [bin_info_uncal[v]]
accs_confs.append(cal_res(TemperatureScaling, PATH, file, M=M, k=k)[v])
name = "_".join(file.split("_")[1:-1]) # file_name
print(name)
accs_confs.append(get_dir_results(file, weights_all[0][name], M=M, k=k)[v])
return np.array(accs_confs)
def gen_lineplots(files, weights_all, plot_names = [], M = 15, val_set = False, version = "ECE", classes = 10, extra_plots = False):
if val_set: # Plot Reliability diagrams for validation set
v = 1
else:
v = 0
for i, file in enumerate(files):
k_accs_confs = []
if classes > 0:
for k in range(classes):
accs_confs = get_accs_confs(k, weights_all = weights_all, file = file, M = M, v = v)
k_accs_confs.append(accs_confs)
else:
return get_accs_confs(-1, weights_all = weights_all, file = file, M = M, v = v)
return np.array(k_accs_confs)
k_accs_confs = gen_lineplots([files_10[0]], weights_params_l2_mu_off, M = 15)
conf_ECE_accs_confs = gen_lineplots([files_10[0]], weights_params_l2_mu_off, M = 15, classes=-1)
def plot_comp1(bin_info, k = 0):
if k != -1:
bin_info = bin_info[k]
accs = bin_info[0, 0]
accs_temp = bin_info[1, 0]
accs_dir = bin_info[2, 0]
x = np.linspace(0, 1, 15)
plt.plot(x, x, linestyle=":", color = "gray")
plt.step(x = x, y = accs, where="mid", linestyle="--", color = "red",
linewidth=2.5, label="Uncal. (ECE=%0.4f)" % get_errors(bin_info[0])[1])
plt.step(x = x, y = accs_temp, where="mid", linestyle="-.", color = "blue",
linewidth=2.5, label="Temp. (ECE=%0.4f)" % get_errors(bin_info[1])[1])
plt.step(x = x, y = accs_dir, where="mid", linestyle="-", color="green",
linewidth=2.5, label="Diri. (ECE=%0.4f)" % get_errors(bin_info[2])[1])
plt.legend(prop={'size': 12})
#plt.axes.Axes.set_aspect('equal')
if k == -1:
plt.title("Confidence ECE")
else:
plt.title("Calibration of class %i" % k)
#plt.savefig("Comp1_k=%i.pdf" % k, format='pdf', dpi=1000, bbox_inches='tight', pad_inches=0.2)
#plt.show()
# reliability diagram plotting for subplot case.
def rel_diagram_sub2(accs, confs, ax, M = 10, name = "Reliability Diagram", xname = "", yname="",
gname = "Gap (conf-ECE=%0.4f)", ece = -1, leg_name = "Observed accuracy"):
acc_conf = np.column_stack([accs,confs])
#acc_conf.sort(axis=1) # No need to sort
outputs = np.array(accs) # Accuracy
gap = confs - outputs # Signed difference between confidence and accuracy
bottoms = accs
bin_size = 1/M
positions = np.arange(0+bin_size/2, 1+bin_size/2, bin_size)
#Bars with outputs
output_plt = ax.bar(positions, outputs, width = bin_size, edgecolor = "black", color = "blue", label=leg_name,
alpha = 1, zorder = 2)
# Plot gap first, so its below everything
gap_plt = ax.bar(positions, gap, bottom = bottoms, width = bin_size*0.3, edgecolor = "red", color = "#ffc8c6",
alpha = 1, label=gname % ece, linewidth=1, zorder=2)
# Line plot with center line.
ax.set_aspect('equal')
ax.plot([0,1], [0,1], linestyle = "--")
ax.legend(handles = [gap_plt, output_plt], prop={'size': 15})
ax.set_xlim(0,1)
ax.set_ylim(0,1)
ax.set_title(name, fontsize=24)
ax.set_xlabel(xname, fontsize=22, color = "black")
ax.set_ylabel(yname, fontsize=22, color = "black")
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(15)
#tick.label.set_rotation('vertical')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(15)
###Output
_____no_output_____
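###Markdown
`plot_comp1` above is defined but never invoked in this notebook; a typical call (illustrative) would compare the uncalibrated, temperature-scaled and Dirichlet-calibrated reliability for one class:
###Code
plt.figure(figsize=(6, 6))
plot_comp1(k_accs_confs, k=2)
plt.show()
###Output
_____no_output_____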
###Markdown
Figure 1 in the main article.
###Code
ece1 = get_errors(conf_ECE_accs_confs[0])[1]
ece2 = get_errors(conf_ECE_accs_confs[1])[1]
ece3 = get_errors(k_accs_confs[2][1])[1]
ece4 = get_errors(k_accs_confs[2][2])[1]
fig = plt.figure(figsize=(30,6))
#fig.set_edgecolor('red')
ax0 = fig.add_subplot(141)
rel_diagram_sub2(conf_ECE_accs_confs[0][0], conf_ECE_accs_confs[0][1], ax0 , M = 15,
name = "(a) Confidence ECE (Uncal.)", xname="", ece = ece1)
ax1 = fig.add_subplot(142)
rel_diagram_sub2(conf_ECE_accs_confs[1][0], conf_ECE_accs_confs[1][1], ax1 , M = 15,
name = "(b) Confidence ECE (Temp.)", xname="", ece = ece2)
ax2 = fig.add_subplot(143, sharex=ax1)
rel_diagram_sub2(k_accs_confs[2][1][0], k_accs_confs[2][1][1], ax2 , M = 15,
name = "(c) Class 2 reliability (Temp.Scal.)", xname="",
gname=r'Gap (class-2-ECE=%0.4f)', ece = ece3, leg_name = "Observed frequency")
ax3 = fig.add_subplot(144, sharex=ax1)
rel_diagram_sub2(k_accs_confs[2][2][0], k_accs_confs[2][2][1], ax3 , M = 15,
name = "(d) Class 2 reliability (Dirichlet Cal.)", xname="",
gname=r'Gap (class-2-ECE=%0.4f)', ece = ece4, leg_name = "Observed frequency")
names = ["(a) Conf-reliability (Uncal.)", "(b) Conf-reliability (Temp.Scal.)",
"(c) Class 2 reliability (Temp.Scal.)", "(d) Class 2 reliability (Dirichlet Cal.)"]
axes_all = [ax0, ax1, ax2, ax3]
for i, ax in enumerate(axes_all):
ax.set_aspect("equal")
ax.set_xlim(0,1)
ax.set_ylim(0,1)
ax.set_title(names[i], fontsize=22)
ax.set_ylabel("Accuracy", fontsize=20, color = "black")
ax.set_xlabel("Confidence", fontsize=20, color = "black")
for ax in axes_all[2:]:
ax.set_ylabel("Frequency", fontsize=20, color = "black")
ax.set_xlabel("Predicted Probability", fontsize=20, color = "black")
plt.savefig("figure_RD_ECE.pdf", format='pdf', dpi=1000, bbox_inches='tight', pad_inches=0.2)
plt.show()
###Output
_____no_output_____
###Markdown
Figure 12 in Supplemental Material
###Code
ece1 = get_errors(conf_ECE_accs_confs[0])[1]
ece2 = get_errors(conf_ECE_accs_confs[1])[1]
ece3 = get_errors(k_accs_confs[4][1])[1]
ece4 = get_errors(k_accs_confs[4][2])[1]
fig = plt.figure(figsize=(30,6))
#fig.set_edgecolor('red')
#ax0 = fig.add_subplot(141)
#rel_diagram_sub2(conf_ECE_accs_confs[0][0], conf_ECE_accs_confs[0][1], ax0 , M = 15,
# name = "(a) Confidence ECE (Uncal.)", xname="", ece = ece1)
#ax1 = fig.add_subplot(142)
#rel_diagram_sub2(conf_ECE_accs_confs[1][0], conf_ECE_accs_confs[1][1], ax1 , M = 15,
# name = "(b) Confidence ECE (Temp.)", xname="", ece = ece2)
# ax0/ax1 are commented out above, so only the two class-4 panels are drawn here.
ax2 = fig.add_subplot(121)
rel_diagram_sub2(k_accs_confs[4][1][0], k_accs_confs[4][1][1], ax2, M = 15,
                 name = "(a) Class 4 reliability (Temp.Scal.)", xname="",
                 gname=r'Gap (class-4-ECE=%0.4f)', ece = ece3, leg_name = "Observed frequency")
ax3 = fig.add_subplot(122, sharex=ax2)
rel_diagram_sub2(k_accs_confs[4][2][0], k_accs_confs[4][2][1], ax3, M = 15,
                 name = "(b) Class 4 reliability (Dirichlet Cal.)", xname="",
                 gname=r'Gap (class-4-ECE=%0.4f)', ece = ece4, leg_name = "Observed frequency")
names = ["(a) Class 4 reliability (Temp.Scal.)", "(b) Class 4 reliability (Dirichlet Cal.)"]
axes_all = [ax2, ax3]
for i, ax in enumerate(axes_all):
    ax.set_aspect("equal")
    ax.set_xlim(0,1)
    ax.set_ylim(0,1)
    ax.set_title(names[i], fontsize=22)
    ax.set_ylabel("Frequency", fontsize=20, color = "black")
    ax.set_xlabel("Predicted Probability", fontsize=20, color = "black")
plt.savefig("figure_RD_ECE_class4.pdf", format='pdf', dpi=1000, bbox_inches='tight', pad_inches=0.2)
plt.show()
###Output
_____no_output_____ |
guides/deployment/deploy-with-aws-ecs/deploy-with-aws-ecs.ipynb | ###Markdown
BentoML Example: Deploy to AWS ECS using AWS Fargate. [BentoML](http://bentoml.ai) is an open source framework for building, shipping and running machine learning services. It provides high-level APIs for defining an ML service and packaging its artifacts, source code, dependencies, and configurations into a production-system-friendly format that is ready for deployment. This notebook demonstrates how to use BentoML to deploy a machine learning model as a serverless REST API endpoint to AWS ECS. For this demo, we use the [Sentiment Analysis with Scikit-learn](https://github.com/bentoml/BentoML/blob/master/examples/sklearn-sentiment-clf/sklearn-sentiment-clf.ipynb) example, with the dataset from [Sentiment140](http://help.sentiment140.com/for-students/).
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
!pip install bentoml
!pip install sklearn pandas numpy
import bentoml
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score, roc_curve
from sklearn.pipeline import Pipeline
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
%%bash
if [ ! -f ./trainingandtestdata.zip ]; then
wget -q http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip
unzip -n trainingandtestdata.zip
fi
columns = ['polarity', 'tweetid', 'date', 'query_name', 'user', 'text']
dftrain = pd.read_csv('training.1600000.processed.noemoticon.csv',
header = None,
encoding ='ISO-8859-1')
dftest = pd.read_csv('testdata.manual.2009.06.14.csv',
header = None,
encoding ='ISO-8859-1')
dftrain.columns = columns
dftest.columns = columns
###Output
_____no_output_____
###Markdown
Model Training
###Code
sentiment_lr = Pipeline([
('count_vect', CountVectorizer(min_df = 100,
ngram_range = (1,1),
stop_words = 'english')),
('lr', LogisticRegression())])
sentiment_lr.fit(dftrain.text, dftrain.polarity)
Xtest, ytest = dftest.text[dftest.polarity!=2], dftest.polarity[dftest.polarity!=2]
print(classification_report(ytest,sentiment_lr.predict(Xtest)))
sentiment_lr.predict([Xtest[0]])
###Output
_____no_output_____
###Markdown
Create BentoService for model serving. To package this trained model for model serving in production, you will need to create a new BentoML Service by subclassing it:
###Code
%%writefile sentiment_lr_model.py
import pandas as pd
import bentoml
from bentoml.artifact import PickleArtifact
from bentoml.handlers import DataframeHandler
@bentoml.artifacts([PickleArtifact('model')])
@bentoml.env(pip_dependencies=['sklearn', 'numpy', 'pandas'])
class SentimentLRModel(bentoml.BentoService):
@bentoml.api(DataframeHandler, typ='series')
def predict(self, series):
"""
predict expects pandas.Series as input
"""
return self.artifacts.model.predict(series)
###Output
Writing sentiment_lr_model.py
###Markdown
Save BentoService to file archive
###Code
# 1) import the custom BentoService defined above
from sentiment_lr_model import SentimentLRModel
# 2) `pack` it with required artifacts
bento_service = SentimentLRModel()
bento_service.pack('model', sentiment_lr)
# 3) save BentoSerivce to file archive
saved_path = bento_service.save()
###Output
[2019-12-16 13:29:17,399] WARNING - BentoML local changes detected - Local BentoML repository including all code changes will be bundled together with the BentoService bundle. When used with docker, the base docker image will be default to same version as last PyPI release at version: 0.5.3. You can also force bentoml to use a specific version for deploying your BentoService bundle, by setting the config 'core/bentoml_deploy_version' to a pinned version or your custom BentoML on github, e.g.:'bentoml_deploy_version = git+https://github.com/{username}/bentoml.git@{branch}'
[2019-12-16 13:29:17,401] WARNING - BentoML local changes detected - Local BentoML repository including all code changes will be bundled together with the BentoService bundle. When used with docker, the base docker image will be default to same version as last PyPI release at version: 0.5.3. You can also force bentoml to use a specific version for deploying your BentoService bundle, by setting the config 'core/bentoml_deploy_version' to a pinned version or your custom BentoML on github, e.g.:'bentoml_deploy_version = git+https://github.com/{username}/bentoml.git@{branch}'
[2019-12-16 13:29:17,413] WARNING - BentoML local changes detected - Local BentoML repository including all code changes will be bundled together with the BentoService bundle. When used with docker, the base docker image will be default to same version as last PyPI release at version: 0.5.3. You can also force bentoml to use a specific version for deploying your BentoService bundle, by setting the config 'core/bentoml_deploy_version' to a pinned version or your custom BentoML on github, e.g.:'bentoml_deploy_version = git+https://github.com/{username}/bentoml.git@{branch}'
[2019-12-16 13:29:44,475] WARNING - BentoML local changes detected - Local BentoML repository including all code changes will be bundled together with the BentoService bundle. When used with docker, the base docker image will be default to same version as last PyPI release at version: 0.5.3. You can also force bentoml to use a specific version for deploying your BentoService bundle, by setting the config 'core/bentoml_deploy_version' to a pinned version or your custom BentoML on github, e.g.:'bentoml_deploy_version = git+https://github.com/{username}/bentoml.git@{branch}'
running sdist
running egg_info
writing BentoML.egg-info/PKG-INFO
writing dependency_links to BentoML.egg-info/dependency_links.txt
writing entry points to BentoML.egg-info/entry_points.txt
writing requirements to BentoML.egg-info/requires.txt
writing top-level names to BentoML.egg-info/top_level.txt
reading manifest file 'BentoML.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
###Markdown
Load saved BentoService
###Code
# Load exported bentoML model archive from path
loaded_bento_service = bentoml.load(saved_path)
# Call predict on the restored BentoService
loaded_bento_service.predict(pd.Series(["hello", "hi"]))
bento_tag = '{name}:{version}'.format(name=bento_service.name, version=bento_service.version)
print(bento_tag)
###Output
SentimentLRModel:20191216132917_85D657
###Markdown
AWS ECS Deployment. Build and push the docker image to AWS ECR. *Get the docker login info from AWS ECR*
###Code
!aws ecr get-login --region us-west-2 --no-include-email
###Output
docker login -u AWS -p eyJwYXlsb2FkIjoiYnFQdjVIUkRpbzl0bXB4aFM4Ny9nbWp4OTV2UTZwTFp1WEhrNEZyWGsvaHBFNjBqUGU2Vm40aVNPdVhnb3BNaHNZMHJWeFFnQ09vUVBMRHdIbXdTSDR2TVZCdTUrL0gwb3V0Z3dJRFMwUUx6MmNxSmdPK0pqUmh4SWhDZTIwa0dHWWZ3M3gvM0pxYXlTcUdnUUtkSGFhMDJkenBnTWNOVU9ia2NESHJKQ0NJeGNZUGgwWHhiNWpCT0piYTlrU0RwajJIVFJCalNnMjZaRnZJUWVwcnprY0JpNStiUnMzVWFmbXozdHlHczJzTzI1SXh1QjFudzBiOVNIdjBXZTZydWVDSlRjV0dLN3FMZG1yL29iN2gzZW4wY1JORHVJcFJoVFdwY1NKWGllY3J1SEp0Y3JMRElnMzRmOGVRY0RhREdXcTVJbklaRkc3MU5ERTYvSUZham96blExK0ZJM0liY0c4eEtUemYxUllseG9jd2FBcmduWlFnMmhSc081VjJScjlRakp5cEcxeWYyWkI4M3Q1M2FsTTRCVVg1R1JZRFN6bitFeGNOSkphVjVIcTU2UDY0djRhU2VvQTBtQmRyU3BvV1Z5K0xwRElKWkFZcFFZZEQvc2ZrNC9sRU8xL0kwWTFRQkg1bU8zRWlkakVXa2hBZG05TGlLU3VvLzRLL21TcFBZTDB0S1l3cTlQRUUwTDVCSG9yQ2NiSTIvMm9VdG42NUNTOWZHeGpnUTRpZnJGZnE5VXhwbWZUTHZTZms4c3dmOXdCN2ZSTDRhN1E2SGh6MlF5enRqSDhNWFNnNHFXZTVobitBYUxYYWxHU1lWbkdRdnFGTlk3ODl2SnVUNnNuRXFrYjlTbTlVc3J3R3B4TzBnQUYyVG9MQWNScHlVeFE5QlMzS0M1aXdMaVM5OExHN0EzSFkxNnF4VU4yaW1lT2JEL3FFRXdLKzBsK0xQY2E3TjRacGpoaXJEamN6MFBCQ0hOMnNtSXJpUktFQUNSZlFnUGsyNTc4VzNlSlRZYk9DNnZGM29UV0ptbUNqN3RHSUpvd05TRDJNNzdoMWhYaFRpdzBaZ2FCTzdDTE5USTRWcHl4a2IxOE5IYWZ6TE5UTXdZeEhaanRqZDBaUkdzdTl4V0hJL1ZkL2hTbzJ4Uk9teTRMelhoV3k4S256ZUJwNXIwa2pvaWlDOHFSYmZvYlVyVnUvSmEyaklRSlh4K2xkekpJcyt6dlY0NHAyaWMrSzVsc3RFbVo5c05MUFQ4ZStRajVKU1oxSm1TMWZNcDBKSzhIV3pGdld1dlJIY0JBbWIwbGxVNlhrOUthd3JJNUdCNy9mQUc2S2RrdXdNbXRHNDlsSGV6OXgrT3Y0VmhvWEpuVGl3RkxSZnlWTEhRS1I4NXFBS3hGdkdFOUh4QUkrcXpiMUNJaDdrZ0R0RVY3RXpHV3IwN2ZzVjRiSExqMEhuckhoSVUwcEdzczNqRmoyQWxUZkpCMDNTZmc9PSIsImRhdGFrZXkiOiJBUUVCQUhqNmxjNFhJSncvN2xuMEhjMDBETWVrNkdFeEhDYlk0UklwVE1DSTU4SW5Vd0FBQUg0d2ZBWUpLb1pJaHZjTkFRY0dvRzh3YlFJQkFEQm9CZ2txaGtpRzl3MEJCd0V3SGdZSllJWklBV1VEQkFFdU1CRUVESnNTSCsvVjhDUStlUkhaWVFJQkVJQTdETThKR2JEc1h5NHBnR0pyQ3A2cm45Y0xiaDcrQ3NUOThiRGVvMXZwb3JyWVRQZEZXV0l2UUtITTJLTW9yYVJSTHcwZ0NzQk12ZjBrTEJFPSIsInZlcnNpb24iOiIyIiwidHlwZSI6IkRBVEFfS0VZIiwiZXhwaXJhdGlvbiI6MTU3NjU4NDg0OH0= https://192023623294.dkr.ecr.us-west-2.amazonaws.com
###Markdown
*Copy and run the output from the previous cell*
###Code
!docker login -u AWS -p eyJwYXlsb2FkIjoiYnFQdjVIUkRpbzl0bXB4aFM4Ny9nbWp4OTV2UTZwTFp1WEhrNEZyWGsvaHBFNjBqUGU2Vm40aVNPdVhnb3BNaHNZMHJWeFFnQ09vUVBMRHdIbXdTSDR2TVZCdTUrL0gwb3V0Z3dJRFMwUUx6MmNxSmdPK0pqUmh4SWhDZTIwa0dHWWZ3M3gvM0pxYXlTcUdnUUtkSGFhMDJkenBnTWNOVU9ia2NESHJKQ0NJeGNZUGgwWHhiNWpCT0piYTlrU0RwajJIVFJCalNnMjZaRnZJUWVwcnprY0JpNStiUnMzVWFmbXozdHlHczJzTzI1SXh1QjFudzBiOVNIdjBXZTZydWVDSlRjV0dLN3FMZG1yL29iN2gzZW4wY1JORHVJcFJoVFdwY1NKWGllY3J1SEp0Y3JMRElnMzRmOGVRY0RhREdXcTVJbklaRkc3MU5ERTYvSUZham96blExK0ZJM0liY0c4eEtUemYxUllseG9jd2FBcmduWlFnMmhSc081VjJScjlRakp5cEcxeWYyWkI4M3Q1M2FsTTRCVVg1R1JZRFN6bitFeGNOSkphVjVIcTU2UDY0djRhU2VvQTBtQmRyU3BvV1Z5K0xwRElKWkFZcFFZZEQvc2ZrNC9sRU8xL0kwWTFRQkg1bU8zRWlkakVXa2hBZG05TGlLU3VvLzRLL21TcFBZTDB0S1l3cTlQRUUwTDVCSG9yQ2NiSTIvMm9VdG42NUNTOWZHeGpnUTRpZnJGZnE5VXhwbWZUTHZTZms4c3dmOXdCN2ZSTDRhN1E2SGh6MlF5enRqSDhNWFNnNHFXZTVobitBYUxYYWxHU1lWbkdRdnFGTlk3ODl2SnVUNnNuRXFrYjlTbTlVc3J3R3B4TzBnQUYyVG9MQWNScHlVeFE5QlMzS0M1aXdMaVM5OExHN0EzSFkxNnF4VU4yaW1lT2JEL3FFRXdLKzBsK0xQY2E3TjRacGpoaXJEamN6MFBCQ0hOMnNtSXJpUktFQUNSZlFnUGsyNTc4VzNlSlRZYk9DNnZGM29UV0ptbUNqN3RHSUpvd05TRDJNNzdoMWhYaFRpdzBaZ2FCTzdDTE5USTRWcHl4a2IxOE5IYWZ6TE5UTXdZeEhaanRqZDBaUkdzdTl4V0hJL1ZkL2hTbzJ4Uk9teTRMelhoV3k4S256ZUJwNXIwa2pvaWlDOHFSYmZvYlVyVnUvSmEyaklRSlh4K2xkekpJcyt6dlY0NHAyaWMrSzVsc3RFbVo5c05MUFQ4ZStRajVKU1oxSm1TMWZNcDBKSzhIV3pGdld1dlJIY0JBbWIwbGxVNlhrOUthd3JJNUdCNy9mQUc2S2RrdXdNbXRHNDlsSGV6OXgrT3Y0VmhvWEpuVGl3RkxSZnlWTEhRS1I4NXFBS3hGdkdFOUh4QUkrcXpiMUNJaDdrZ0R0RVY3RXpHV3IwN2ZzVjRiSExqMEhuckhoSVUwcEdzczNqRmoyQWxUZkpCMDNTZmc9PSIsImRhdGFrZXkiOiJBUUVCQUhqNmxjNFhJSncvN2xuMEhjMDBETWVrNkdFeEhDYlk0UklwVE1DSTU4SW5Vd0FBQUg0d2ZBWUpLb1pJaHZjTkFRY0dvRzh3YlFJQkFEQm9CZ2txaGtpRzl3MEJCd0V3SGdZSllJWklBV1VEQkFFdU1CRUVESnNTSCsvVjhDUStlUkhaWVFJQkVJQTdETThKR2JEc1h5NHBnR0pyQ3A2cm45Y0xiaDcrQ3NUOThiRGVvMXZwb3JyWVRQZEZXV0l2UUtITTJLTW9yYVJSTHcwZ0NzQk12ZjBrTEJFPSIsInZlcnNpb24iOiIyIiwidHlwZSI6IkRBVEFfS0VZIiwiZXhwaXJhdGlvbiI6MTU3NjU4NDg0OH0= https://192023623294.dkr.ecr.us-west-2.amazonaws.com
###Output
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
###Markdown
**Build docker image**
###Code
!cd {saved_path} && docker build . --tag=192023623294.dkr.ecr.us-west-2.amazonaws.com/sentiment-ecs
###Output
Sending build context to Docker daemon 9.283MB
Step 1/12 : FROM continuumio/miniconda3:4.7.12
---> 406f2b43ea59
Step 2/12 : ENTRYPOINT [ "/bin/bash", "-c" ]
---> Using cache
---> 52d60658abca
Step 3/12 : EXPOSE 5000
---> Using cache
---> 041d44f68694
Step 4/12 : RUN set -x && apt-get update && apt-get install --no-install-recommends --no-install-suggests -y libpq-dev build-essential && rm -rf /var/lib/apt/lists/*
---> Using cache
---> a618012fac78
Step 5/12 : RUN conda update conda -y && conda install pip numpy scipy && pip install gunicorn
---> Using cache
---> f40b70099ec8
Step 6/12 : COPY . /bento
---> be181a1904d3
Step 7/12 : WORKDIR /bento
---> Running in ea1152d959b1
Removing intermediate container ea1152d959b1
---> 60ff0402076b
Step 8/12 : RUN conda env update -n base -f /bento/environment.yml
---> Running in 10a070278bf7
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Downloading and Extracting Packages
python-3.7.2 | 31.9 MB | ########## | 100%
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
#
# To activate this environment, use
#
# $ conda activate base
#
# To deactivate an active environment, use
#
# $ conda deactivate
Removing intermediate container 10a070278bf7
---> 60206bf45998
Step 9/12 : RUN pip install -r /bento/requirements.txt
---> Running in 56d51426ec77
Collecting bentoml==0.5.3
Downloading https://files.pythonhosted.org/packages/20/53/6656851abd7ea4df7d3934a0b7eb8972c3c40dad2b4c0ea6b23e0c9c0624/BentoML-0.5.3-py3-none-any.whl (523kB)
Collecting sklearn
Downloading https://files.pythonhosted.org/packages/1e/7a/dbb3be0ce9bd5c8b7e3d87328e79063f8b263b2b1bfa4774cb1147bfcd3f/sklearn-0.0.tar.gz
Requirement already satisfied: numpy in /opt/conda/lib/python3.7/site-packages (from -r /bento/requirements.txt (line 3)) (1.17.4)
Collecting pandas
Downloading https://files.pythonhosted.org/packages/63/e0/a1b39cdcb2c391f087a1538bc8a6d62a82d0439693192aef541d7b123769/pandas-0.25.3-cp37-cp37m-manylinux1_x86_64.whl (10.4MB)
Collecting click>=7.0
Downloading https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl (81kB)
Collecting ruamel.yaml>=0.15.0
Downloading https://files.pythonhosted.org/packages/fa/90/ecff85a2e9c497e2fa7142496e10233556b5137db5bd46f3f3b006935ca8/ruamel.yaml-0.16.5-py2.py3-none-any.whl (123kB)
Collecting python-json-logger
Downloading https://files.pythonhosted.org/packages/80/9d/1c3393a6067716e04e6fcef95104c8426d262b4adaf18d7aa2470eab028d/python-json-logger-0.1.11.tar.gz
Collecting boto3
Downloading https://files.pythonhosted.org/packages/5d/62/9629ee1a41757b65b55c2070d2e0afee80a89e3ea0076b9b9669a773308b/boto3-1.10.40-py2.py3-none-any.whl (128kB)
Collecting flask
Downloading https://files.pythonhosted.org/packages/9b/93/628509b8d5dc749656a9641f4caf13540e2cdec85276964ff8f43bbb1d3b/Flask-1.1.1-py2.py3-none-any.whl (94kB)
Collecting alembic
Downloading https://files.pythonhosted.org/packages/dc/6d/3c1411dfdcf089ec89ce5e2222deb2292f39b6b1a5911222e15af9fe5a92/alembic-1.3.2.tar.gz (1.1MB)
Collecting configparser
Downloading https://files.pythonhosted.org/packages/7a/2a/95ed0501cf5d8709490b1d3a3f9b5cf340da6c433f896bbe9ce08dbe6785/configparser-4.0.2-py2.py3-none-any.whl
Collecting grpcio
Downloading https://files.pythonhosted.org/packages/b5/68/070ee7609b452e950bd5af35f7161f0ceb0abd61cf16ff3b23c852d4594b/grpcio-1.25.0-cp37-cp37m-manylinux2010_x86_64.whl (2.4MB)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (1.13.0)
Collecting packaging
Downloading https://files.pythonhosted.org/packages/cf/94/9672c2d4b126e74c4496c6b3c58a8b51d6419267be9e70660ba23374c875/packaging-19.2-py2.py3-none-any.whl
Collecting prometheus-client
Downloading https://files.pythonhosted.org/packages/b3/23/41a5a24b502d35a4ad50a5bb7202a5e1d9a0364d0c12f56db3dbf7aca76d/prometheus_client-0.7.1.tar.gz
Collecting sqlalchemy>=1.3.0
Downloading https://files.pythonhosted.org/packages/17/7f/35879c73859368ad19a952b69ee780aa97fc30350dabd45fb948d6a4e3ea/SQLAlchemy-1.3.12.tar.gz (6.0MB)
Requirement already satisfied: gunicorn in /opt/conda/lib/python3.7/site-packages (from bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (20.0.4)
Collecting humanfriendly
Downloading https://files.pythonhosted.org/packages/90/df/88bff450f333114680698dc4aac7506ff7cab164b794461906de31998665/humanfriendly-4.18-py2.py3-none-any.whl (73kB)
Collecting pathlib2
Downloading https://files.pythonhosted.org/packages/e9/45/9c82d3666af4ef9f221cbb954e1d77ddbb513faf552aea6df5f37f1a4859/pathlib2-2.3.5-py2.py3-none-any.whl
Requirement already satisfied: requests in /opt/conda/lib/python3.7/site-packages (from bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (2.22.0)
Collecting protobuf>=3.6.0
Downloading https://files.pythonhosted.org/packages/c8/bd/9609c681f655c4b85f4e4f4c99d42b68bca63ad8891c6924ba6696dee0bb/protobuf-3.11.1-cp37-cp37m-manylinux1_x86_64.whl (1.3MB)
Collecting tabulate
Downloading https://files.pythonhosted.org/packages/c4/41/523f6a05e6dc3329a5660f6a81254c6cd87e5cfb5b7482bae3391d86ec3a/tabulate-0.8.6.tar.gz (45kB)
Collecting cerberus
Downloading https://files.pythonhosted.org/packages/90/a7/71c6ed2d46a81065e68c007ac63378b96fa54c7bb614d653c68232f9c50c/Cerberus-1.3.2.tar.gz (52kB)
Collecting docker
Downloading https://files.pythonhosted.org/packages/cc/ca/699d4754a932787ef353a157ada74efd1ceb6d1fc0bfb7989ae1e7b33111/docker-4.1.0-py2.py3-none-any.whl (139kB)
Collecting scikit-learn
Downloading https://files.pythonhosted.org/packages/19/96/8034e350d4550748277e514d0d6d91bdd36be19e6c5f40b8af0d74cb0c84/scikit_learn-0.22-cp37-cp37m-manylinux1_x86_64.whl (7.0MB)
Collecting pytz>=2017.2
Downloading https://files.pythonhosted.org/packages/e7/f9/f0b53f88060247251bf481fa6ea62cd0d25bf1b11a87888e53ce5b7c8ad2/pytz-2019.3-py2.py3-none-any.whl (509kB)
Collecting python-dateutil>=2.6.1
Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB)
Collecting ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.8"
Downloading https://files.pythonhosted.org/packages/40/80/da16b691d5e259dd9919a10628e541fca321cb4b078fbb88e1c7c22aa42d/ruamel.yaml.clib-0.2.0-cp37-cp37m-manylinux1_x86_64.whl (547kB)
Collecting botocore<1.14.0,>=1.13.40
Downloading https://files.pythonhosted.org/packages/6b/16/c8fa1876cd4e7749e67e3d47e9fa88dfde4c4aded28747e278be3424733d/botocore-1.13.40-py2.py3-none-any.whl (5.8MB)
Collecting jmespath<1.0.0,>=0.7.1
Downloading https://files.pythonhosted.org/packages/83/94/7179c3832a6d45b266ddb2aac329e101367fbdb11f425f13771d27f225bb/jmespath-0.9.4-py2.py3-none-any.whl
Collecting s3transfer<0.3.0,>=0.2.0
Downloading https://files.pythonhosted.org/packages/16/8a/1fc3dba0c4923c2a76e1ff0d52b305c44606da63f718d14d3231e21c51b0/s3transfer-0.2.1-py2.py3-none-any.whl (70kB)
Collecting Werkzeug>=0.15
Downloading https://files.pythonhosted.org/packages/ce/42/3aeda98f96e85fd26180534d36570e4d18108d62ae36f87694b476b83d6f/Werkzeug-0.16.0-py2.py3-none-any.whl (327kB)
Collecting Jinja2>=2.10.1
Downloading https://files.pythonhosted.org/packages/65/e0/eb35e762802015cab1ccee04e8a277b03f1d8e53da3ec3106882ec42558b/Jinja2-2.10.3-py2.py3-none-any.whl (125kB)
Collecting itsdangerous>=0.24
Downloading https://files.pythonhosted.org/packages/76/ae/44b03b253d6fade317f32c24d100b3b35c2239807046a4c953c7b89fa49e/itsdangerous-1.1.0-py2.py3-none-any.whl
Collecting Mako
Downloading https://files.pythonhosted.org/packages/b0/3c/8dcd6883d009f7cae0f3157fb53e9afb05a0d3d33b3db1268ec2e6f4a56b/Mako-1.1.0.tar.gz (463kB)
Collecting python-editor>=0.3
Downloading https://files.pythonhosted.org/packages/c6/d3/201fc3abe391bbae6606e6f1d598c15d367033332bd54352b12f35513717/python_editor-1.0.4-py3-none-any.whl
Collecting pyparsing>=2.0.2
Downloading https://files.pythonhosted.org/packages/c0/0c/fc2e007d9a992d997f04a80125b0f183da7fb554f1de701bbb70a8e7d479/pyparsing-2.4.5-py2.py3-none-any.whl (67kB)
Requirement already satisfied: setuptools>=3.0 in /opt/conda/lib/python3.7/site-packages (from gunicorn->bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (42.0.2.post20191203)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (2019.11.28)
Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (1.25.7)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests->bentoml==0.5.3->-r /bento/requirements.txt (line 1)) (3.0.4)
Collecting websocket-client>=0.32.0
Downloading https://files.pythonhosted.org/packages/29/19/44753eab1fdb50770ac69605527e8859468f3c0fd7dc5a76dd9c4dbd7906/websocket_client-0.56.0-py2.py3-none-any.whl (200kB)
Collecting joblib>=0.11
Downloading https://files.pythonhosted.org/packages/28/5c/cf6a2b65a321c4a209efcdf64c2689efae2cb62661f8f6f4bb28547cf1bf/joblib-0.14.1-py2.py3-none-any.whl (294kB)
Requirement already satisfied: scipy>=0.17.0 in /opt/conda/lib/python3.7/site-packages (from scikit-learn->sklearn->-r /bento/requirements.txt (line 2)) (1.3.2)
Collecting docutils<0.16,>=0.10
Downloading https://files.pythonhosted.org/packages/22/cd/a6aa959dca619918ccb55023b4cb151949c64d4d5d55b3f4ffd7eee0c6e8/docutils-0.15.2-py3-none-any.whl (547kB)
Collecting MarkupSafe>=0.23
Downloading https://files.pythonhosted.org/packages/98/7b/ff284bd8c80654e471b769062a9b43cc5d03e7a615048d96f4619df8d420/MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl
Building wheels for collected packages: sklearn, python-json-logger, alembic, prometheus-client, sqlalchemy, tabulate, cerberus, Mako
Building wheel for sklearn (setup.py): started
Building wheel for sklearn (setup.py): finished with status 'done'
Created wheel for sklearn: filename=sklearn-0.0-py2.py3-none-any.whl size=1316 sha256=ee768234506c0aff93e999e31ce65e6aa7e8f9ce0dbaa517a30d6f3f27ef6d1b
Stored in directory: /root/.cache/pip/wheels/76/03/bb/589d421d27431bcd2c6da284d5f2286c8e3b2ea3cf1594c074
Building wheel for python-json-logger (setup.py): started
Building wheel for python-json-logger (setup.py): finished with status 'done'
Created wheel for python-json-logger: filename=python_json_logger-0.1.11-py2.py3-none-any.whl size=5076 sha256=1486766b2bae040d124ee4e3316a1d5d45ccd9a7acdbcc5472834673538c2efe
Stored in directory: /root/.cache/pip/wheels/97/f7/a1/752e22bb30c1cfe38194ea0070a5c66e76ef4d06ad0c7dc401
Building wheel for alembic (setup.py): started
Building wheel for alembic (setup.py): finished with status 'done'
Created wheel for alembic: filename=alembic-1.3.2-py2.py3-none-any.whl size=151128 sha256=9a9b9840b422c364773e7da0809a1542b1b27631802e9059450218d10e17f5ac
Stored in directory: /root/.cache/pip/wheels/5c/66/53/e0633382ac8625ab1c099db6a290d1b6b24f849a4666a57105
Building wheel for prometheus-client (setup.py): started
Building wheel for prometheus-client (setup.py): finished with status 'done'
Created wheel for prometheus-client: filename=prometheus_client-0.7.1-cp37-none-any.whl size=41402 sha256=811c987036a4f243fd8f82b9f70eac33a79298f61efc5b009b00434b87f78d43
Stored in directory: /root/.cache/pip/wheels/1c/54/34/fd47cd9b308826cc4292b54449c1899a30251ef3b506bc91ea
Building wheel for sqlalchemy (setup.py): started
Building wheel for sqlalchemy (setup.py): finished with status 'done'
Created wheel for sqlalchemy: filename=SQLAlchemy-1.3.12-cp37-cp37m-linux_x86_64.whl size=1220007 sha256=a16dda3cee01242502a77d417db1f5336541646b38dabcd6d8666ce9fd9e36b9
Stored in directory: /root/.cache/pip/wheels/ee/33/44/0788a6e806866ae2e246d5cd841d07498a46bcb3f3c42ea5a4
Building wheel for tabulate (setup.py): started
Building wheel for tabulate (setup.py): finished with status 'done'
Created wheel for tabulate: filename=tabulate-0.8.6-cp37-none-any.whl size=23274 sha256=04fa3ba61173c3b11a45ea566ccb4ed9120324bcba4f77fac1f4a70b81e1829f
Stored in directory: /root/.cache/pip/wheels/9c/9b/f4/eb243fdb89676ec00588e8c54bb54360724c06e7fafe95278e
Building wheel for cerberus (setup.py): started
Building wheel for cerberus (setup.py): finished with status 'done'
Created wheel for cerberus: filename=Cerberus-1.3.2-cp37-none-any.whl size=54336 sha256=c02ea2109b6e4fbdfa74f9d2ee561b94a5a384333a7867f20d8e391cc7d8378c
Stored in directory: /root/.cache/pip/wheels/e9/38/1f/f2cc84182676f3ae7134b9b2d744f9c235b24d2ddc8f7fe465
Building wheel for Mako (setup.py): started
Building wheel for Mako (setup.py): finished with status 'done'
Created wheel for Mako: filename=Mako-1.1.0-cp37-none-any.whl size=75360 sha256=a5acb721f1164962f5bb04263a824ce9668b8473d9a22c569895cba3b68bf767
Stored in directory: /root/.cache/pip/wheels/98/32/7b/a291926643fc1d1e02593e0d9e247c5a866a366b8343b7aa27
Successfully built sklearn python-json-logger alembic prometheus-client sqlalchemy tabulate cerberus Mako
ERROR: botocore 1.13.40 has requirement python-dateutil<2.8.1,>=2.1; python_version >= "2.7", but you'll have python-dateutil 2.8.1 which is incompatible.
Installing collected packages: click, ruamel.yaml.clib, ruamel.yaml, python-json-logger, python-dateutil, docutils, jmespath, botocore, s3transfer, boto3, Werkzeug, MarkupSafe, Jinja2, itsdangerous, flask, sqlalchemy, Mako, python-editor, alembic, configparser, grpcio, pyparsing, packaging, prometheus-client, humanfriendly, pytz, pandas, pathlib2, protobuf, tabulate, cerberus, websocket-client, docker, bentoml, joblib, scikit-learn, sklearn
Successfully installed Jinja2-2.10.3 Mako-1.1.0 MarkupSafe-1.1.1 Werkzeug-0.16.0 alembic-1.3.2 bentoml-0.5.3 boto3-1.10.40 botocore-1.13.40 cerberus-1.3.2 click-7.0 configparser-4.0.2 docker-4.1.0 docutils-0.15.2 flask-1.1.1 grpcio-1.25.0 humanfriendly-4.18 itsdangerous-1.1.0 jmespath-0.9.4 joblib-0.14.1 packaging-19.2 pandas-0.25.3 pathlib2-2.3.5 prometheus-client-0.7.1 protobuf-3.11.1 pyparsing-2.4.5 python-dateutil-2.8.1 python-editor-1.0.4 python-json-logger-0.1.11 pytz-2019.3 ruamel.yaml-0.16.5 ruamel.yaml.clib-0.2.0 s3transfer-0.2.1 scikit-learn-0.22 sklearn-0.0 sqlalchemy-1.3.12 tabulate-0.8.6 websocket-client-0.56.0
Removing intermediate container 56d51426ec77
---> 7a3f9e905bf1
Step 10/12 : RUN if [ -f /bento/bentoml_init.sh ]; then /bin/bash -c /bento/bentoml_init.sh; fi
---> Running in 5074c6f1790b
Processing ./bundled_pip_dependencies/BentoML-0.5.3+19.g4c71912.tar.gz
Requirement already satisfied, skipping upgrade: ruamel.yaml>=0.15.0 in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (0.16.5)
Requirement already satisfied, skipping upgrade: numpy in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.17.4)
Requirement already satisfied, skipping upgrade: flask in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.1.1)
Requirement already satisfied, skipping upgrade: gunicorn in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (20.0.4)
Requirement already satisfied, skipping upgrade: click>=7.0 in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (7.0)
Requirement already satisfied, skipping upgrade: pandas in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (0.25.3)
Requirement already satisfied, skipping upgrade: prometheus_client in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (0.7.1)
Requirement already satisfied, skipping upgrade: python-json-logger in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (0.1.11)
Requirement already satisfied, skipping upgrade: boto3 in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.10.40)
Requirement already satisfied, skipping upgrade: requests in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (2.22.0)
Requirement already satisfied, skipping upgrade: packaging in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (19.2)
Requirement already satisfied, skipping upgrade: docker in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (4.1.0)
Requirement already satisfied, skipping upgrade: configparser in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (4.0.2)
Requirement already satisfied, skipping upgrade: sqlalchemy>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.3.12)
Requirement already satisfied, skipping upgrade: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (3.11.1)
Requirement already satisfied, skipping upgrade: grpcio in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.25.0)
Requirement already satisfied, skipping upgrade: cerberus in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.3.2)
Requirement already satisfied, skipping upgrade: tabulate in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (0.8.6)
Requirement already satisfied, skipping upgrade: humanfriendly in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (4.18)
Requirement already satisfied, skipping upgrade: alembic in /opt/conda/lib/python3.7/site-packages (from BentoML==0.5.3+19.g4c71912) (1.3.2)
Collecting python-dateutil<2.8.1,>=2.1
Downloading https://files.pythonhosted.org/packages/41/17/c62faccbfbd163c7f57f3844689e3a78bae1f403648a6afb1d0866d87fbb/python_dateutil-2.8.0-py2.py3-none-any.whl (226kB)
Requirement already satisfied, skipping upgrade: ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.8" in /opt/conda/lib/python3.7/site-packages (from ruamel.yaml>=0.15.0->BentoML==0.5.3+19.g4c71912) (0.2.0)
Requirement already satisfied, skipping upgrade: itsdangerous>=0.24 in /opt/conda/lib/python3.7/site-packages (from flask->BentoML==0.5.3+19.g4c71912) (1.1.0)
Requirement already satisfied, skipping upgrade: Werkzeug>=0.15 in /opt/conda/lib/python3.7/site-packages (from flask->BentoML==0.5.3+19.g4c71912) (0.16.0)
Requirement already satisfied, skipping upgrade: Jinja2>=2.10.1 in /opt/conda/lib/python3.7/site-packages (from flask->BentoML==0.5.3+19.g4c71912) (2.10.3)
Requirement already satisfied, skipping upgrade: setuptools>=3.0 in /opt/conda/lib/python3.7/site-packages (from gunicorn->BentoML==0.5.3+19.g4c71912) (42.0.2.post20191203)
Requirement already satisfied, skipping upgrade: pytz>=2017.2 in /opt/conda/lib/python3.7/site-packages (from pandas->BentoML==0.5.3+19.g4c71912) (2019.3)
Requirement already satisfied, skipping upgrade: botocore<1.14.0,>=1.13.40 in /opt/conda/lib/python3.7/site-packages (from boto3->BentoML==0.5.3+19.g4c71912) (1.13.40)
Requirement already satisfied, skipping upgrade: s3transfer<0.3.0,>=0.2.0 in /opt/conda/lib/python3.7/site-packages (from boto3->BentoML==0.5.3+19.g4c71912) (0.2.1)
Requirement already satisfied, skipping upgrade: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python3.7/site-packages (from boto3->BentoML==0.5.3+19.g4c71912) (0.9.4)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->BentoML==0.5.3+19.g4c71912) (1.25.7)
Requirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->BentoML==0.5.3+19.g4c71912) (2.8)
Requirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests->BentoML==0.5.3+19.g4c71912) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests->BentoML==0.5.3+19.g4c71912) (2019.11.28)
Requirement already satisfied, skipping upgrade: six in /opt/conda/lib/python3.7/site-packages (from packaging->BentoML==0.5.3+19.g4c71912) (1.13.0)
Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->BentoML==0.5.3+19.g4c71912) (2.4.5)
Requirement already satisfied, skipping upgrade: websocket-client>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from docker->BentoML==0.5.3+19.g4c71912) (0.56.0)
Requirement already satisfied, skipping upgrade: Mako in /opt/conda/lib/python3.7/site-packages (from alembic->BentoML==0.5.3+19.g4c71912) (1.1.0)
Requirement already satisfied, skipping upgrade: python-editor>=0.3 in /opt/conda/lib/python3.7/site-packages (from alembic->BentoML==0.5.3+19.g4c71912) (1.0.4)
Requirement already satisfied, skipping upgrade: MarkupSafe>=0.23 in /opt/conda/lib/python3.7/site-packages (from Jinja2>=2.10.1->flask->BentoML==0.5.3+19.g4c71912) (1.1.1)
Requirement already satisfied, skipping upgrade: docutils<0.16,>=0.10 in /opt/conda/lib/python3.7/site-packages (from botocore<1.14.0,>=1.13.40->boto3->BentoML==0.5.3+19.g4c71912) (0.15.2)
Building wheels for collected packages: BentoML
Building wheel for BentoML (setup.py): started
Building wheel for BentoML (setup.py): finished with status 'done'
Created wheel for BentoML: filename=BentoML-0.5.3+19.g4c71912-cp37-none-any.whl size=496096 sha256=2ce484218bdf38301d28889ef752ba5b94e04e63c84abfed59ca8f59f1f84a48
Stored in directory: /root/.cache/pip/wheels/3a/91/cd/6379a08ddcf727da4c5960e799eeb691bf09521ba75ddf05e8
Successfully built BentoML
Installing collected packages: python-dateutil, BentoML
Found existing installation: python-dateutil 2.8.1
Uninstalling python-dateutil-2.8.1:
Successfully uninstalled python-dateutil-2.8.1
Found existing installation: BentoML 0.5.3
Uninstalling BentoML-0.5.3:
Successfully uninstalled BentoML-0.5.3
Successfully installed BentoML-0.5.3+19.g4c71912 python-dateutil-2.8.0
Removing intermediate container 5074c6f1790b
---> 369e99937286
Step 11/12 : RUN if [ -f /bento/setup.sh ]; then /bin/bash -c /bento/setup.sh; fi
---> Running in 0ac99d4f5557
Removing intermediate container 0ac99d4f5557
---> befdd93d991c
Step 12/12 : CMD ["bentoml serve-gunicorn /bento"]
---> Running in 61f3f1dcdd38
Removing intermediate container 61f3f1dcdd38
---> 19d21c608b08
Successfully built 19d21c608b08
Successfully tagged 192023623294.dkr.ecr.us-west-2.amazonaws.com/sentiment-ecs:latest
###Markdown
Create ECR repository
###Code
!aws ecr create-repository --repository-name sentiment-ecs
!docker push 192023623294.dkr.ecr.us-west-2.amazonaws.com/sentiment-ecs
###Output
The push refers to repository [192023623294.dkr.ecr.us-west-2.amazonaws.com/sentiment-ecs]
96eea2fa: Preparing
778aa2c1: Preparing
c052c405: Preparing
ab50d2e3: Preparing
90513c25: Preparing
2405333d: Preparing
cb249b79: Preparing
190fd43a: Preparing
###Markdown
Deploy to AWS ECS. 1. Install the ECS-CLI tool: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_installation.html For Mac: download ```sudo curl -o /usr/local/bin/ecs-cli https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-darwin-amd64-latest``` and make it executable ```sudo chmod +x /usr/local/bin/ecs-cli``` 2. Configure ECS-CLI: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_Configuration.html Create the cluster profile configuration ```ecs-cli configure --cluster tutorial --default-launch-type FARGATE --config-name tutorial --region us-west-2``` and the CLI profile ```ecs-cli configure profile --access-key AWS_ACCESS_KEY_ID --secret-key AWS_SECRET_ACCESS_KEY --profile-name tutorial-profile``` (replace AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with your own credentials).
###Code
!ecs-cli configure --cluster tutorial --default-launch-type FARGATE --config-name tutorial --region us-west-2
!ecs-cli configure profile --profile-name tutorial-profile --access-key AWS_ACCESS_KEY_ID --secret-key AWS_SECRET_ACCESS_KEY
###Output
INFO[0000] Saved ECS CLI profile configuration tutorial-profile.
###Markdown
Create the Task Execution IAM Role
###Code
%%writefile task-execution-assume-role.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
!aws iam --region us-west-2 create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://task-execution-assume-role.json
!aws iam --region us-west-2 attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
###Output
_____no_output_____
###Markdown
Start up an AWS ECS Cluster
###Code
!ecs-cli up --cluster-config tutorial --ecs-profile tutorial-profile
###Output
INFO[0001] Created cluster                    cluster=tutorial region=us-west-2
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status        stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-0465d14ba04402f80
Subnet created: subnet-0d23851806f3db403
Subnet created: subnet-0dece5451f1a3b8b2
Cluster creation succeeded.
###Markdown
**Use the VPC ID from the output of the previous cell**
###Code
!aws ec2 describe-security-groups --filters Name=vpc-id,Values=vpc-0465d14ba04402f80 --region us-west-2
###Output
{
"SecurityGroups": [
{
"Description": "default VPC security group",
"GroupName": "default",
"IpPermissions": [
{
"IpProtocol": "-1",
"IpRanges": [],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": [
{
"GroupId": "sg-0258b891f053e077b",
"UserId": "192023623294"
}
]
}
],
"OwnerId": "192023623294",
"GroupId": "sg-0258b891f053e077b",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"VpcId": "vpc-0465d14ba04402f80"
}
]
}
###Markdown
**Use the security group ID from the previous cell**
###Code
!aws ec2 authorize-security-group-ingress --group-id sg-0258b891f053e077b --protocol tcp --port 5000 --cidr 0.0.0.0/0 --region us-west-2
###Output
An error occurred (InvalidPermission.Duplicate) when calling the AuthorizeSecurityGroupIngress operation: the specified rule "peer: 0.0.0.0/0, TCP, from port: 5000, to port: 5000, ALLOW" already exists
###Markdown
**Use the docker image information from the docker push cell**
###Code
%%writefile docker-compose.yml
version: '3'
services:
web:
image: 192023623294.dkr.ecr.us-west-2.amazonaws.com/sentiment-ecs
ports:
- "5000:5000"
logging:
driver: awslogs
options:
awslogs-group: sentiment-aws-ecs
awslogs-region: us-west-2
awslogs-stream-prefix: web
###Output
Overwriting docker-compose.yml
###Markdown
**Use the subnets from the earlier cell that created the ECS cluster.** **Use the security group ID from the describe-security-groups cell.**
###Code
%%writefile ecs-params.yml
version: 1
task_definition:
task_execution_role: ecsTaskExecutionRole
ecs_network_mode: awsvpc
task_size:
mem_limit: 0.5GB
cpu_limit: 256
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- subnet-0d23851806f3db403
- subnet-0dece5451f1a3b8b2
security_groups:
- sg-0258b891f053e077b
assign_public_ip: ENABLED
!ecs-cli compose --project-name tutorial-bentoml-ecs service up --create-log-groups --cluster-config tutorial --ecs-profile tutorial-profile
!ecs-cli compose --project-name tutorial-bentoml-ecs service ps --cluster-config tutorial --ecs-profile tutorial-profile
###Output
Name State Ports TaskDefinition Health
ecd119f0-b159-42e6-b86c-e6a62242ce7a/web RUNNING 34.212.49.46:5000->5000/tcp tutorial-bentoml-ecs:1 UNKNOWN
###Markdown
Test ECS endpoint
###Code
!curl -i \
--request POST \
--header "Content-Type: application/json" \
--data '["sweet food", "bad food", "happy day"]' \
http://34.212.49.46:5000/predict
###Output
HTTP/1.1 200 OK
Server: gunicorn/20.0.4
Date: Tue, 17 Dec 2019 01:14:32 GMT
Connection: close
Content-Type: application/json
Content-Length: 9
request_id: 60540ac8-d1e7-4244-be14-68e2fc0920e7
[4, 0, 4]
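###Markdown
The same request can be issued from Python; a minimal sketch using `requests` (the IP below is the task address printed by `service ps` above and will differ for every deployment):
###Code
import requests
resp = requests.post(
    "http://34.212.49.46:5000/predict",
    json=["sweet food", "bad food", "happy day"],  # DataframeHandler accepts a JSON list
)
print(resp.status_code, resp.json())
###Output
_____no_output_____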
###Markdown
Clean up ECS deployment
###Code
!ecs-cli compose --project-name tutorial-bentoml-ecs service down --cluster-config tutorial --ecs-profile tutorial-profile
!ecs-cli down --force --cluster-config tutorial --ecs-profile tutorial-profile
###Output
INFO[0001] Waiting for your cluster resources to be deleted...
INFO[0001] Cloudformation stack status        stackStatus=DELETE_IN_PROGRESS
INFO[0062] Deleted cluster                    cluster=tutorial
|
Data Science/Data Visualization/Practice Exercise - Session 1/.ipynb_checkpoints/Practice Exercise - Session 1-checkpoint.ipynb | ###Markdown
I - Virat Kohli Dataset
###Code
import pandas as pd
df = pd.read_csv("virat.csv")
df.head()
###Output
_____no_output_____
###Markdown
Spread in Runs. Question 1: Analyse the spread of Runs scored by Virat in all his matches and report the difference between the scores at the 50th percentile and the 25th percentile respectively. a) 16.5 b) 22.5 c) 26.5 d) 32.5
###Code
## Your code here
###Output
_____no_output_____
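###Markdown
One possible approach for Question 1 (illustrative sketch; assumes the runs column is named `Runs` as in the question):
###Code
q50 = df["Runs"].quantile(0.50)
q25 = df["Runs"].quantile(0.25)
print(q50 - q25)
###Output
_____no_output_____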
###Markdown
Box Plots. Question 2: Plot a box plot to analyse the spread of Runs that Virat has scored. The upper fence in the box plot lies in which interval? a) 100-120 b) 120-140 c) 140-160 d) 160-180
###Code
#Your code here
###Output
_____no_output_____
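###Markdown
A sketch for Question 2, again assuming a `Runs` column; the upper fence is the top whisker of the box plot:
###Code
from matplotlib import pyplot as plt
plt.boxplot(df["Runs"].dropna())
plt.ylabel("Runs")
plt.show()
###Output
_____no_output_____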
###Markdown
False Statement. Q3: Consider the following statements and choose the correct option. I - Virat has played the maximum number of matches in 2011. II - Virat has the highest run average in the year 2017. III - Virat has the maximum score in a single match and the highest run average in the year 2016. Which of the above statements is/are false? a) I and II b) I and III c) II d) III
###Code
## Your code here
###Output
_____no_output_____
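###Markdown
A sketch for Question 3, assuming a `Year` column exists (derive one from the match-date column otherwise); "run average" is approximated here by the mean of `Runs`:
###Code
yearly = df.groupby("Year")["Runs"]
print(yearly.count())  # matches per year
print(yearly.mean())   # mean runs per year
print(yearly.max())    # maximum score in a single match per year
###Output
_____no_output_____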
###Markdown
Maximum Frequency. Q4: Plot a histogram for the Mins column with 15 bins. Among the three ranges mentioned below, which one has the highest frequency? A - [54.6, 68) B - [68, 81.4) C - [121.6, 135) a) A - [54.6, 68) b) B - [68, 81.4) c) C - [121.6, 135) d) All the bin ranges have the same frequency
###Code
#Your code here
###Output
_____no_output_____ |
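###Markdown
A sketch for Question 4, using the `Mins` column named in the question:
###Code
from matplotlib import pyplot as plt
plt.hist(df["Mins"].dropna(), bins=15)
plt.xlabel("Mins")
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____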
week04/_examples04_week04.ipynb | ###Markdown
A way to improve this is a matrix of pairwise line plots, with one small panel for each pair of shapes
###Code
plt.figure(dpi = 300)
multi_shapes = ["disk", "light", "fireball", "chevron"]
for i, shape1 in enumerate(multi_shapes):
for j, shape2 in enumerate(multi_shapes):
plt.subplot(len(multi_shapes), len(multi_shapes), i * len(multi_shapes) + j + 1)
if shape1 == shape2:
plt.text(0.25, 0.5, shape1)
plt.xticks([])
plt.yticks([])
continue
plt.plot(shapes[shape1] / (shapes[shape2] + shapes[shape1]))
plt.xlim(1940, 2014)
plt.ylim(0, 1.0)
plt.xticks([])
plt.yticks([])
ufos_by_date = ufos.set_index("date")
ufos_by_date.resample('1Y')["shape"].count().plot()
ufos_by_date.resample('5Y')["shape"].count().plot()
import ipywidgets
@ipywidgets.interact(freq = (1, 120, 1))
def make_plot(freq):
ufos_by_date.resample('%sM' % freq)["shape"].count().plot()
ufos_by_date["month"] = ufos_by_date.index.month
ufos_by_date.groupby("month")["shape"].count().plot()
ufos_by_date.set_index("shape").loc["fireball"].groupby("month")["year"].count().plot()
ufos_by_date.set_index("shape").loc["chevron"].groupby("month")["year"].count().plot()
ufos["latitude"].min(), ufos["latitude"].max(), ufos["longitude"].min(), ufos["longitude"].max(),
plt.figure(dpi=200)
plt.hexbin(ufos["longitude"], ufos["latitude"], ufos["duration_seconds"], gridsize=32, bins='log')
plt.colorbar()
import numpy as np
plt.hist(np.log10(ufos["duration_seconds"]), log=True, bins = 32);
buildings = pd.read_csv("/home/shared/sp18-is590dv/data/IL_Building_Inventory.csv",
na_values = {'Square Footage': 0, 'Year Acquired': 0, 'Year Constructed': 0})
with plt.style.context("ggplot"):
plt.plot(buildings.groupby("Year Acquired")["Square Footage"].sum())
plt.yscale("log")
###Output
_____no_output_____ |
Main_MaseseSpeech_ASR.ipynb | ###Markdown
Imports
###Code
# If you need some install, uncomment the code bellow
!pip install torch==1.4
!pip install torchvision
!pip install ipdb
!pip install torchaudio
!pip install PyDrive
!pip install soundfile
import torch
import torchaudio
from matplotlib import pyplot as plt
import numpy as np
from torch import Tensor
import os
from typing import Tuple
import ipdb
# from torchaudio.datasets import YESNO, LIBRISPEECH
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import pdb, traceback, sys
from torchaudio.datasets.utils import (
download_url,
extract_archive,
walk_files,
)
###Output
_____no_output_____
###Markdown
MaseseSpeech 2h
###Code
# Train link Drive
# https://drive.google.com/file/d/1CfqHBiOEVnJiuwZoqBJMlGO-N97JD3sR/view?usp=sharing
# wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1CfqHBiOEVnJiuwZoqBJMlGO-N97JD3sR' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1CfqHBiOEVnJiuwZoqBJMlGO-N97JD3sR" -O train-clean.tar.xz && rm -rf /tmp/cookies.txt
# # Valid link Drive
# https://drive.google.com/file/d/1Y1CiB7TbrGdghVokwMTBQA1fRza2NZUP/view?usp=sharing
# wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1Y1CiB7TbrGdghVokwMTBQA1fRza2NZUP' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1Y1CiB7TbrGdghVokwMTBQA1fRza2NZUP" -O dev-clean.tar.xz && rm -rf /tmp/cookies.txt
# !wget https://github.com/Kabongosalomon/MaseseSpeech/raw/master/voalingala.tar.xz -O voalingala.tar.xz
# extract_archive("voalingala.tar.xz")
# !rm -r "voalingala.tar.xz"
!mkdir MaseseSpeech
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1CfqHBiOEVnJiuwZoqBJMlGO-N97JD3sR' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1CfqHBiOEVnJiuwZoqBJMlGO-N97JD3sR" -O MaseseSpeech/train-clean.tar.xz && rm -rf /tmp/cookies.txt
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1Y1CiB7TbrGdghVokwMTBQA1fRza2NZUP' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1Y1CiB7TbrGdghVokwMTBQA1fRza2NZUP" -O MaseseSpeech/dev-clean.tar.xz && rm -rf /tmp/cookies.txt
extract_archive("MaseseSpeech/train-clean.tar.xz")
extract_archive("MaseseSpeech/dev-clean.tar.xz")
!rm -r "MaseseSpeech/train-clean.tar.xz"
!rm -r "MaseseSpeech/dev-clean.tar.xz"
special_caracters = "ฦ,;.ฦรร-?:*รโโโ!ร(10)" # Special characters specific to Lingala and this dataset
tokens_list = list(" ABCDEFGHIJKLMNOPQRSTUVWXYZ"+special_caracters)
tokens_set = set(tokens_list)
URL = "train-clean"
FOLDER_IN_ARCHIVE = "MaseseSpeech"
def load_masesespeech_item(fileid: str,
path: str,
ext_audio: str,
ext_txt: str) -> Tuple[Tensor, int, str, int, int, int]:
book_id, chapter_id, utterance_id = fileid.split("-")
file_text = book_id + "-" + chapter_id + ext_txt
file_text = os.path.join(path, book_id, chapter_id, file_text)
fileid_audio = book_id + "-" + chapter_id + "-" + utterance_id
file_audio = fileid_audio + ext_audio
file_audio = os.path.join(path, book_id, chapter_id, file_audio)
try :
# Load audio
waveform, sample_rate = torchaudio.load(file_audio)
# Load text
with open(file_text) as ft:
for line in ft:
fileid_text, utterance = line.strip().split(" ", 1) # this takes the first space split
if fileid_audio == fileid_text:
# stop when we found the text corresponding to
# the audio ID
break
else:
# Translation not found
raise FileNotFoundError("Translation not found for " + fileid_audio)
except:
print(file_audio) # this is for debugging purpose
print(waveform) # to show which file may have an issue
pass
# traceback.print_exc()
# Use this if your acoustic model outputs letters
special_caracters = "ฦ,;.ฦรร-?:*รโโโ!ร(10)" # Special characters specific to Lingala and this dataset
tokens_list = list(" ABCDEFGHIJKLMNOPQRSTUVWXYZ"+special_caracters)
tokens_set = set(tokens_list)
transcriptions = [b for b in utterance]
t = []
for index in transcriptions:
t.append(str(tokens_list.index(index)))
targets = (" ".join(t))
# ipdb.set_trace()
with open("./MaseseSpeech/converted_aligned_phones.txt", "a+") as text_file:
# Move read cursor to the start of file.
text_file.seek(0)
# If file is not empty then append '\n'
data = text_file.read(100)
if len(data) > 0 :
text_file.write("\n")
# .strip() to delete any leading and trailing whitespace
text_file.write(book_id+"-"+chapter_id+"-"+utterance_id+" "+targets)
return (
waveform,
sample_rate,
utterance,
int(book_id),
int(chapter_id),
int(utterance_id),
)
class MASESESPEECH_2H_MP3(Dataset):
"""
Create a Dataset for MaseseSpeech. Each item is a tuple of the form:
waveform, sample_rate, utterance, book_id, chapter_id, utterance_id
"""
_ext_txt = ".trans.txt"
_ext_audio = ".wav"
def __init__(self,
root: str,
mode: str = "MaseseSpeech/train-clean",
folder_in_archive: str = FOLDER_IN_ARCHIVE,
) -> None:
self._path = mode
walker = walk_files(
self._path, suffix=self._ext_audio, prefix=False, remove_suffix=True
)
self._walker = list(walker)
def __getitem__(self, n: int) -> Tuple[Tensor, int, str, int, int, int]:
fileid = self._walker[n]
return load_masesespeech_item(fileid, self._path, self._ext_audio, self._ext_txt)
def __len__(self) -> int:
return len(self._walker)
masese_train = MASESESPEECH_2H_MP3(".", mode = "MaseseSpeech/train-clean")
masese_dev = MASESESPEECH_2H_MP3(".", mode = "MaseseSpeech/dev-clean")
# # just so you get an idea of the format
# print(next(iter(masese_train)))
# print(next(iter(masese_dev)))
def collate_fn_libri(batch):
#print(batch)
tensors = [b[0].t() for b in batch if b]
tensors_len = [len(t) for t in tensors]
tensors = torch.nn.utils.rnn.pad_sequence(tensors, batch_first=True)
tensors = tensors.transpose(1, -1)
# ipdb.set_trace()
transcriptions = [list(b[2].replace("'", " ")) for b in batch if b]
targets = [torch.tensor([tokens_list.index(e) for e in t]) for t in transcriptions]
targets_len = [len(t) for t in targets]
targets = torch.nn.utils.rnn.pad_sequence(targets, batch_first=True)
return tensors, targets, torch.tensor(tensors_len), torch.tensor(targets_len)
train_set = torch.utils.data.DataLoader(masese_train, batch_size=50000, shuffle=True,
num_workers=4, collate_fn=collate_fn_libri)
test_set = torch.utils.data.DataLoader(masese_dev, batch_size=50000, shuffle=True,
num_workers=4, collate_fn=collate_fn_libri)
print(next(iter(train_set)))
print(next(iter(test_set)))
!git clone https://github.com/facebookresearch/CPC_audio.git #
%cd CPC_audio/
!ls
# %cd /CPC_audio
!python setup.py develop
###Output
_____no_output_____
###Markdown
Exercise 1 : Building the model. In this exercise, we will build and train a small CPC model using the repository CPC_audio.
###Code
# %cd ./CPC_audio
from cpc.model import CPCEncoder, CPCAR
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DIM_ENCODER = 256
DIM_CONTEXT = 256
KEEP_HIDDEN_VECTOR = False
N_LEVELS_CONTEXT = 1
CONTEXT_RNN = "LSTM"
N_PREDICTIONS = 12
LEARNING_RATE = 2e-4
N_NEGATIVE_SAMPLE = 128
encoder = CPCEncoder(DIM_ENCODER)
context = CPCAR(DIM_ENCODER,
DIM_CONTEXT,
KEEP_HIDDEN_VECTOR,
N_LEVELS_CONTEXT,
mode=CONTEXT_RNN)  # CONTEXT_RNN = "LSTM", defined above
# sudo apt-get install libsndfile1-dev
# Several functions that will be necessary to load the data later
from cpc.dataset import findAllSeqs, AudioBatchData, parseSeqLabels
SIZE_WINDOW = 20480
BATCH_SIZE=8
def load_dataset(path_dataset, file_extension='.wav', phone_label_dict=None):
data_list, speakers = findAllSeqs(path_dataset, extension=file_extension)
dataset = AudioBatchData(path_dataset, SIZE_WINDOW, data_list, phone_label_dict, len(speakers))
return dataset
class CPCModel(torch.nn.Module):
def __init__(self,
encoder,
AR):
super(CPCModel, self).__init__()
self.gEncoder = encoder
self.gAR = AR
def forward(self, batch_data):
encoder_output = self.gEncoder(batch_data)
#print(encoder_output.shape)
# The output of the encoder data does not have the good format
# indeed it is Batch_size x Hidden_size x temp size
# while the context requires Batch_size x temp size x Hidden_size
# thus you need to permute
context_input = encoder_output.permute(0, 2, 1)
context_output = self.gAR(context_input)
#print(context_output.shape)
return context_output, encoder_output
# !ls ..
audio = torchaudio.load(
# "../voalingala/20200611/20200611-160000-VCD361-program_16k.mp3")[0]
"../MaseseSpeech/train-clean/020/001/020-001-013.wav")[0]
audio = audio.view(1, 1, -1)
cpc_model = CPCModel(encoder, context).to(device)
context_output, encoder_output = cpc_model(audio.to(device))
audio
!ls checkpoint_data/
# !pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html # in case you're getting an nvidia error
# from cpc.dataset import parseSeqLabels
# from cpc.feature_loader import loadModel
# checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
# cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
# cpc_model = cpc_model.cuda()
###Output
_____no_output_____
###Markdown
Exercise 2 : CPC loss. We will define a class ```CPCCriterion``` which will hold the prediction networks $\phi_k$ defined above and perform the classification loss $\mathcal{L}_c$. a) In this exercise, the $\phi_k$ will be a linear transform, i.e.:\\[ \phi_k(c_t) = \mathbf{A}_k c_t\\] Using the class [torch.nn.Linear](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), define the transformations $\phi_k$ in the code below and complete the function ```get_prediction_k``` which computes $\phi_k(c_t)$ for a given batch of vectors $c_t$. b) Using both ```get_prediction_k``` and ```sample_negatives``` defined below, write the forward function which takes as input two batches of features $c_t$ and $g_t$ and outputs the classification loss $\mathcal{L}_c$ and the average accuracy for all predictions.
###Code
# Exercise 2: write the CPC loss
# a) Write the negative sampling (with some help)
# ERRATUM: it's really hard, the sampling will be provided
class CPCCriterion(torch.nn.Module):
def __init__(self,
K,
dim_context,
dim_encoder,
n_negative):
super(CPCCriterion, self).__init__()
self.K_ = K
self.dim_context = dim_context
self.dim_encoder = dim_encoder
self.n_negative = n_negative
self.predictors = torch.nn.ModuleList()
for k in range(self.K_):
# TO COMPLETE !
# An affine transformation in pytorch is equivalent to a nn.Linear layer
# To get a linear transformation you must set bias=False
# input dimension of the layer = dimension of the context (phi_k is applied to c_t)
# output dimension of the layer = dimension of the encoder (phi_k predicts encoded features)
self.predictors.append(torch.nn.Linear(dim_context, dim_encoder, bias=False))
def get_prediction_k(self, context_data):
#TO COMPLETE !
output = []
# For each time step k
for k in range(self.K_):
# We need to compute phi_k = A_k * c_t
phi_k = self.predictors[k](context_data)
output.append(phi_k)
return output
def sample_negatives(self, encoded_data):
r"""
Sample some negative examples in the given encoded data.
Input:
- encoded_data size: B x T x H
Returns
- outputs of size B x (n_negative + 1) x (T - K_) x H
outputs[:, 0, :, :] contains the positive example
outputs[:, 1:, :, :] contains negative example sampled in the batch
- labels, long tensor of size B x (T - K_)
Since the positive example is always at coordinates 0 for all sequences
in the batch and all timestep in the sequence, labels is just a tensor
full of zeros !
"""
batch_size, time_size, dim_encoded = encoded_data.size()
window_size = time_size - self.K_
outputs = []
neg_ext = encoded_data.contiguous().view(-1, dim_encoded)
n_elem_sampled = self.n_negative * window_size * batch_size
# Draw nNegativeExt * batchSize negative samples anywhere in the batch
batch_idx = torch.randint(low=0, high=batch_size,
size=(n_elem_sampled, ),
device=encoded_data.device)
seq_idx = torch.randint(low=1, high=time_size,
size=(n_elem_sampled, ),
device=encoded_data.device)
base_idx = torch.arange(0, window_size, device=encoded_data.device)
base_idx = base_idx.view(1, 1, window_size)
base_idx = base_idx.expand(1, self.n_negative, window_size)
base_idx = base_idx.expand(batch_size, self.n_negative, window_size)
seq_idx += base_idx.contiguous().view(-1)
seq_idx = torch.remainder(seq_idx, time_size)
ext_idx = seq_idx + batch_idx * time_size
neg_ext = neg_ext[ext_idx].view(batch_size, self.n_negative,
window_size, dim_encoded)
label_loss = torch.zeros((batch_size, window_size),
dtype=torch.long,
device=encoded_data.device)
for k in range(1, self.K_ + 1):
# Positive samples
if k < self.K_:
pos_seq = encoded_data[:, k:-(self.K_-k)]
else:
pos_seq = encoded_data[:, k:]
pos_seq = pos_seq.view(batch_size, 1, pos_seq.size(1), dim_encoded)
full_seq = torch.cat((pos_seq, neg_ext), dim=1)
outputs.append(full_seq)
return outputs, label_loss
def forward(self, encoded_data, context_data):
# TO COMPLETE:
# Perform the full cpc criterion
# Returns 2 values:
# - the average classification loss avg_loss
# - the average classification accuracy avg_acc
# Reminder : The permuation !
encoded_data = encoded_data.permute(0, 2, 1)
# First we need to sample the negative examples
negative_samples, labels = self.sample_negatives(encoded_data)
# Then we must compute phi_k
phi_k = self.get_prediction_k(context_data)
# Finally we must get the dot product between phi_k and negative_samples
# for each k
#The total loss is the average of all losses
avg_loss = 0
# Average accuracy
avg_acc = 0
for k in range(self.K_):
B, N_sampled, S_small, H = negative_samples[k].size()
B, S, H = phi_k[k].size()
# As told before S = S_small + K. For segments too far in the sequence
# there are no positive examples anyway, so we must shorten phi_k
phi = phi_k[k][:, :S_small]
# Now the dot product
# You have several ways to do that, let's do the simple but non optimal
# one
# pytorch has a matrix product function https://pytorch.org/docs/stable/torch.html#torch.bmm
# But it takes only 3D tensors of the same batch size !
# To begin negative_samples is a 4D tensor !
# We want to compute the dot product for each features, of each sequence
# of the batch. Thus we are trying to compute a dot product for all
# B* N_sampled * S_small 1D vector of negative_samples[k]
# Or, a 1D tensor of size H is also a matrix of size 1 x H
# Then, we must view it as a 3D tensor of size (B* N_sampled * S_small, 1, H)
negative_sample_k = negative_samples[k].view(B* N_sampled* S_small, 1, H)
# But now phi and negative_sample_k no longer have the same batch size !
# No worries, we can expand phi so that each sequence of the batch
# is repeated N_sampled times
phi = phi.view(B, 1,S_small, H).expand(B, N_sampled, S_small, H)
# And now we can view it as a 3D tensor
phi = phi.contiguous().view(B * N_sampled * S_small, H, 1)
# We can finally get the dot product !
scores = torch.bmm(negative_sample_k, phi)
# Dot_product has a size (B * N_sampled * S_small , 1, 1)
# Let's reorder it a bit
scores = scores.reshape(B, N_sampled, S_small)
# For each elements of the sequence, and each elements sampled, it gives
# a floating score stating the likelihood of this element being the
# true one.
# Now the classification loss, we need to use the Cross Entropy loss
# https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
# For each time-step of each sequence of the batch
# we have N_sampled possible predictions.
# Looking at the documentation of torch.nn.CrossEntropyLoss
# we can see that this loss expect a tensor of size M x C where
# - M is the number of elements with a classification score
# - C is the number of possible classes
# There are N_sampled candidates for each predictions so
# C = N_sampled
# Each timestep of each sequence of the batch has a prediction so
# M = B * S_small
# Thus we need an input vector of size B * S_small, N_sampled
# To begin, we need to permute the axis
scores = scores.permute(0, 2, 1) # Now it has size B , S_small, N_sampled
# Then we can cast it into a 2D tensor
scores = scores.reshape(B * S_small, N_sampled)
# Same thing for the labels
labels = labels.reshape(B * S_small)
# Finally we can get the classification loss
loss_criterion = torch.nn.CrossEntropyLoss()
loss_k = loss_criterion(scores, labels)
avg_loss+= loss_k
# And for the accuracy
# The prediction for each elements is the sample with the highest score
# Thus the tensors of all predictions is the tensors of the index of the
# maximal score for each time-step of each sequence of the batch
predictions = torch.argmax(scores, 1)
acc_k = (labels == predictions).sum() / (B * S_small)
avg_acc += acc_k
# Normalization
avg_loss = avg_loss / self.K_
avg_acc = avg_acc / self.K_
return avg_loss , avg_acc
# !ls ../MaseseSpeech/train-clean/020/001/
audio = torchaudio.load(
"../MaseseSpeech/train-clean/020/001/020-001-013.wav")[0]
# "../voalingala/20200611/20200611-160000-VCD361-program_16k.mp3")[0]
audio = audio.view(1, 1, -1)
cpc_criterion = CPCCriterion(N_PREDICTIONS, DIM_CONTEXT,
DIM_ENCODER, N_NEGATIVE_SAMPLE).to(device)
context_output, encoder_output = cpc_model(audio.to(device))
loss, avg = cpc_criterion(encoder_output,context_output)
loss
###Output
_____no_output_____
###Markdown
Exercise 3: Full training loop ! You have the model, you have the criterion. All you need now are a data loader and an optimizer to run your training loop. We will use an Adam optimizer:
###Code
parameters = list(cpc_criterion.parameters()) + list(cpc_model.parameters())
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
# dataset_train = load_dataset('../voalingala/train',
# file_extension='.mp3')
# dataset_val = load_dataset('../voalingala/val',
# file_extension='.mp3')
dataset_train = load_dataset('../MaseseSpeech/train-clean')
dataset_val = load_dataset('../MaseseSpeech/dev-clean')
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_train.getDataLoader(BATCH_SIZE, "sequence", False)
def train_step(data_loader,
cpc_model,
cpc_criterion,
optimizer):
avg_loss = 0
avg_acc = 0
n_items = 0
for step, data in enumerate(data_loader):
x,y = data
bs = len(x)
optimizer.zero_grad()
context_output, encoder_output = cpc_model(x.to(device))
loss , acc = cpc_criterion(encoder_output, context_output)
loss.backward()
optimizer.step()
n_items+=bs
avg_loss+=loss.item()*bs
avg_acc +=acc.item()*bs
avg_loss/=n_items
avg_acc/=n_items
return avg_loss, avg_acc
###Output
_____no_output_____
###Markdown
Exercise 4 : Validation loopNow complete the validation loop.
###Code
def validation_step(data_loader,
cpc_model,
cpc_criterion):
avg_loss = 0
avg_acc = 0
n_items = 0
for step, data in enumerate(data_loader):
x,y = data
bs = len(x)
context_output, encoder_output = cpc_model(x.to(device))
loss , acc = cpc_criterion(encoder_output, context_output)
n_items+=bs
avg_loss+=loss.item()*bs
avg_acc+=acc.item()*bs
avg_loss/=n_items
avg_acc/=n_items
return avg_loss, avg_acc
###Output
_____no_output_____
###Markdown
Exercise 5: Run everything
###Code
def run(train_loader,
val_loader,
cpc_model,
cpc_criterion,
optimizer,
n_epochs):
for epoch in range(n_epochs):
print(f"Running epoch {epoch+1} / {n_epochs}")
avg_loss_train, avg_acc_train = train_step(train_loader, cpc_model, cpc_criterion, optimizer)
print("----------------------")
print(f"Training dataset")
print(f"- average loss : {avg_loss_train}")
print(f"- average accuracy : {avg_acc_train}")
print("----------------------")
with torch.no_grad():
cpc_model.eval()
cpc_criterion.eval()
avg_loss_val, avg_acc_val = validation_step(val_loader, cpc_model, cpc_criterion)
print(f"Validation dataset")
print(f"- average loss : {avg_loss_val}")
print(f"- average accuracy : {avg_acc_val}")
print("----------------------")
print()
cpc_model.train()
cpc_criterion.train()
run(data_loader_train, data_loader_val, cpc_model,cpc_criterion,optimizer,1)
###Output
Running epoch 1 / 1
----------------------
Training dataset
- average loss : 5.365146972868177
- average accuracy : 0.0
----------------------
Validation dataset
- average loss : 5.365160039265951
- average accuracy : 0.0
----------------------
###Markdown
Once everything is done, clear the memory.
###Code
del dataset_train
del dataset_val
del cpc_model
del context
del encoder
###Output
_____no_output_____
###Markdown
Part 2 : Fine tuning Exercise 1 : Phone separability with aligned phonemes. One way to **evaluate the quality of the features trained with CPC is to check whether they can be used to recognize phonemes**. To do so, we can fine-tune a **pre-trained model using a limited amount of labelled speech data**. We are going to start with a simple evaluation setting where we have the phone label for each timestep corresponding to a CPC feature. We will work with a model already pre-trained on English data. As for the fine-tuning dataset, the original tutorial used a 1h subset of [librispeech-100](http://www.openslr.org/12/); here we fine-tune on the MaseseSpeech subsets loaded above.
###Code
!mkdir checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_30.pt -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_logs.json -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_args.json -P checkpoint_data
!ls checkpoint_data
ls ../MaseseSpeech/dev-clean/
# %cd /content/CPC_audio
from cpc.dataset import parseSeqLabels
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
label_dict, N_PHONES = parseSeqLabels('../MaseseSpeech/converted_aligned_phones.txt')
dataset_train = load_dataset('../MaseseSpeech/train-clean/', file_extension='.wav', phone_label_dict=label_dict)
dataset_val = load_dataset('../MaseseSpeech/dev-clean/', file_extension='.wav', phone_label_dict=label_dict)
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
###Output
Loading checkpoint checkpoint_data/checkpoint_30.pt
Loading the state dict at checkpoint_data/checkpoint_30.pt
###Markdown
Then we will use a simple linear classifier to recognize the phonemes from the features produced by ```cpc_model```. a) Build the phone classifier. Design a class of linear classifiers, ```PhoneClassifier```, that will take as input a batch of sequences of CPC features and output a score vector for each phoneme.
###Code
class PhoneClassifier(torch.nn.Module):
def __init__(self,
input_dim : int,
n_phones : int):
super(PhoneClassifier, self).__init__()
self.linear = torch.nn.Linear(input_dim, n_phones)
def forward(self, x):
return self.linear(x)
###Output
_____no_output_____
###Markdown
Our phone classifier will then be:
###Code
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
###Output
_____no_output_____
###Markdown
b - What would be the correct loss criterion for this task ?
###Code
loss_criterion = torch.nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
To perform the fine-tuning, we will also need an optimization function. We will use an [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam).
###Code
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
You might also want to perform this training while freezing the weights of the ```cpc_model```. Indeed, if the pre-training was good enough, then ```cpc_model``` phonemes representation should be linearly separable. In this case the optimizer should be defined like this:
###Code
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
c- Now let's build a training loop. Complete the function ```train_one_epoch``` below.
###Code
def train_one_epoch(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
# - N number of sequence in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
###Output
_____no_output_____
###Markdown
Don't forget to test it !
###Code
avg_loss, avg_accuracy = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer_frozen)
avg_loss, avg_accuracy
###Output
_____no_output_____
###Markdown
d- Build the validation loop
###Code
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
# - N number of sequence in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
###Output
_____no_output_____
###Markdown
e- Run everything. Test this function with both ```optimizer``` and ```optimizer_frozen```.
###Code
def run(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train, acc_train = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}. Average accuracy {acc_train}")
print("-------------------")
print("Validation dataset")
loss_val, acc_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}. Average accuracy {acc_val}")
print("-------------------")
print()
run(cpc_model,phone_classifier,loss_criterion,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
###Output
Running epoch 1 / 10
-------------------
Training dataset :
Average loss : 3.563035892115699. Average accuracy 0.09114583333333333
-------------------
Validation dataset
Average loss : 3.4712906800783596. Average accuracy 0.09735576923076923
-------------------
Running epoch 2 / 10
-------------------
Training dataset :
Average loss : 3.397801478703817. Average accuracy 0.10579427083333333
-------------------
Validation dataset
Average loss : 3.3246844915243297. Average accuracy 0.11399489182692307
-------------------
Running epoch 3 / 10
-------------------
Training dataset :
Average loss : 3.266044795513153. Average accuracy 0.1283908420138889
-------------------
Validation dataset
Average loss : 3.210064704601581. Average accuracy 0.14107572115384615
-------------------
Running epoch 4 / 10
-------------------
Training dataset :
Average loss : 3.16418319940567. Average accuracy 0.14561631944444445
-------------------
Validation dataset
Average loss : 3.1193005305070143. Average accuracy 0.15459735576923078
-------------------
Running epoch 5 / 10
-------------------
Training dataset :
Average loss : 3.0817254847950406. Average accuracy 0.1541069878472222
-------------------
Validation dataset
Average loss : 3.048164257636437. Average accuracy 0.15771484375
-------------------
Running epoch 6 / 10
-------------------
Training dataset :
Average loss : 3.0178873936335244. Average accuracy 0.1553276909722222
-------------------
Validation dataset
Average loss : 2.9911002287497888. Average accuracy 0.15899188701923078
-------------------
Running epoch 7 / 10
-------------------
Training dataset :
Average loss : 2.966975278324551. Average accuracy 0.15831163194444445
-------------------
Validation dataset
Average loss : 2.9479496020537157. Average accuracy 0.15940504807692307
-------------------
Running epoch 8 / 10
-------------------
Training dataset :
Average loss : 2.9286783602502613. Average accuracy 0.1586642795138889
-------------------
Validation dataset
Average loss : 2.9129452521984396. Average accuracy 0.15966796875
-------------------
Running epoch 9 / 10
-------------------
Training dataset :
Average loss : 2.8985020253393383. Average accuracy 0.1583930121527778
-------------------
Validation dataset
Average loss : 2.8869830003151526. Average accuracy 0.16064453125
-------------------
Running epoch 10 / 10
-------------------
Training dataset :
Average loss : 2.873071485095554. Average accuracy 0.16151258680555555
-------------------
Validation dataset
Average loss : 2.8657545951696544. Average accuracy 0.16192157451923078
-------------------
###Markdown
Exercise 2 : Phone separability without alignment (PER). **Aligned data are very practical, but in real life they are rarely available.** That's why in this exercise we will consider **fine-tuning with non-aligned phonemes.** The model, the optimizer and the phone classifier will stay the same. However, we will replace our phone criterion with a [CTC loss](https://pytorch.org/docs/master/generated/torch.nn.CTCLoss.html).
###Code
loss_ctc = torch.nn.CTCLoss()
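# Quick shape sanity check on random data (a sketch, not the real model output):
# log_probs (T, N, C) after log_softmax, targets (N, S) with label 0 reserved for the blank
dummy_log_probs = torch.randn(50, 2, N_PHONES).log_softmax(2)
dummy_targets = torch.randint(1, N_PHONES, (2, 10))
dummy_input_len = torch.full((2,), 50, dtype=torch.long)
dummy_target_len = torch.full((2,), 10, dtype=torch.long)
print(loss_ctc(dummy_log_probs, dummy_targets, dummy_input_len, dummy_target_len))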
###Output
_____no_output_____
###Markdown
Besides, we will use a slightly different dataset class.
###Code
# %cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_per = '../MaseseSpeech/train-clean/'
path_val_data_per = '../MaseseSpeech/dev-clean'
path_phone_data_per = '../MaseseSpeech/converted_aligned_phones.txt'
BATCH_SIZE=8
phone_labels, N_PHONES = parseSeqLabels(path_phone_data_per)
data_train_per, _ = findAllSeqs(path_train_data_per, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_per, data_train_per, phone_labels)
data_loader_train = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_val_per, _ = findAllSeqs(path_val_data_per, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_per, data_val_per, phone_labels)
data_loader_val = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
###Output
15it [00:00, 4916.35it/s]
###Markdown
a- Training. Since the phonemes are not aligned, there is no simple direct way to get the classification accuracy of a model. Write and test the three functions ```train_one_epoch_ctc```, ```validation_step_ctc``` and ```run_ctc``` as before but without considering the average accuracy of the model.
###Code
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
import torch.nn.functional as F
def train_one_epoch_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def run_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train = train_one_epoch_ctc(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}.")
print("-------------------")
print("Validation dataset")
loss_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}")
print("-------------------")
print()
run_ctc(cpc_model,phone_classifier,loss_ctc,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
###Output
Running epoch 1 / 10
-------------------
Training dataset :
Average loss : 40.1225111990264.
-------------------
Validation dataset
Average loss : 36.384601551402696
-------------------
Running epoch 2 / 10
-------------------
Training dataset :
Average loss : 34.704571810635656.
-------------------
Validation dataset
Average loss : 31.07986525102095
-------------------
Running epoch 3 / 10
-------------------
Training dataset :
Average loss : 29.263464641308325.
-------------------
Validation dataset
Average loss : 25.670075073242188
-------------------
Running epoch 4 / 10
-------------------
Training dataset :
Average loss : 23.94062240852797.
-------------------
Validation dataset
Average loss : 20.725055042613636
-------------------
Running epoch 5 / 10
-------------------
Training dataset :
Average loss : 19.358014251246598.
-------------------
Validation dataset
Average loss : 16.735400938554243
-------------------
Running epoch 6 / 10
-------------------
Training dataset :
Average loss : 15.754520962717777.
-------------------
Validation dataset
Average loss : 13.685918433449485
-------------------
Running epoch 7 / 10
-------------------
Training dataset :
Average loss : 13.06561186031205.
-------------------
Validation dataset
Average loss : 11.475975223888051
-------------------
Running epoch 8 / 10
-------------------
Training dataset :
Average loss : 11.12484725710446.
-------------------
Validation dataset
Average loss : 9.905263519287109
-------------------
Running epoch 9 / 10
-------------------
Training dataset :
Average loss : 9.739885385371437.
-------------------
Validation dataset
Average loss : 8.803418502807617
-------------------
Running epoch 10 / 10
-------------------
Training dataset :
Average loss : 8.756830835473767.
-------------------
Validation dataset
Average loss : 8.009756735021417
-------------------
###Markdown
b- Evaluation: the Phone Error Rate (PER). In order to compute the similarity between two sequences, we can use the [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance). This distance is the minimum number of insertions, deletions and substitutions needed to transform one sequence into the other. If we normalize this distance by the number of characters in the reference sequence we get the Phone Error Rate (PER). This value can be interpreted as: \\[ PER = \frac{S + D + I}{N} \\] Where: * N is the number of characters in the reference * S is the number of substitutions * I is the number of insertions * D is the number of deletions, for the best possible alignment of the two sequences.
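For example, with reference `A B C` and prediction `A X C`, the best alignment uses a single substitution (S=1, D=0, I=0, N=3), giving a PER of 1/3.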
###Code
import numpy as np
def get_PER_sequence(ref_seq, target_seq):
# re = g.split()
# h = h.split()
n = len(ref_seq)
m = len(target_seq)
D = np.zeros((n+1,m+1))
for i in range(1,n+1):
D[i,0] = D[i-1,0]+1
for j in range(1,m+1):
D[0,j] = D[0,j-1]+1
### TODO compute the alignment
for i in range(1,n+1):
for j in range(1,m+1):
D[i,j] = min(
D[i-1,j]+1,
D[i-1,j-1]+1,
D[i,j-1]+1,
D[i-1,j-1]+ 0 if ref_seq[i-1]==target_seq[j-1] else float("inf")
)
return D[n,m]/len(ref_seq)
#return PER
###Output
_____no_output_____
###Markdown
You can test your function below:
###Code
ref_seq = [0, 1, 1, 2, 0, 2, 2]
pred_seq = [1, 1, 2, 2, 0, 0]
expected_PER = 4. / 7.
print(get_PER_sequence(ref_seq, pred_seq) == expected_PER)
###Output
True
###Markdown
c- Evaluating the PER of your model on the test dataset. Evaluate the PER on the validation dataset. Please notice that you should usually use a separate dataset, called the dev dataset, to perform this operation. However, for the sake of simplicity we will work with validation data in this exercise.
###Code
import progressbar
from multiprocessing import Pool
def cut_data(seq, sizeSeq):
maxSeq = sizeSeq.max()
return seq[:, :maxSeq]
def prepare_data(data):
seq, sizeSeq, phone, sizePhone = data
seq = seq.cuda()
phone = phone.cuda()
sizeSeq = sizeSeq.cuda().view(-1)
sizePhone = sizePhone.cuda().view(-1)
seq = cut_data(seq.permute(0, 2, 1), sizeSeq).permute(0, 2, 1)
return seq, sizeSeq, phone, sizePhone
def get_per(test_dataloader,
cpc_model,
phone_classifier):
downsampling_factor = 160
cpc_model.eval()
phone_classifier.eval()
avgPER = 0
nItems = 0
print("Starting the PER computation through beam search")
bar = progressbar.ProgressBar(maxval=len(test_dataloader))
bar.start()
for index, data in enumerate(test_dataloader):
bar.update(index)
with torch.no_grad():
seq, sizeSeq, phone, sizePhone = prepare_data(data)
c_feature, _, _ = cpc_model(seq.to(device),phone.to(device))
sizeSeq = sizeSeq / downsampling_factor
predictions = torch.nn.functional.softmax(
phone_classifier(c_feature), dim=2).cpu()
phone = phone.cpu()
sizeSeq = sizeSeq.cpu()
sizePhone = sizePhone.cpu()
bs = c_feature.size(0)
data_per = [(predictions[b].argmax(1), phone[b]) for b in range(bs)]
# data_per = [(predictions[b], sizeSeq[b], phone[b], sizePhone[b],
# "criterion.module.BLANK_LABEL") for b in range(bs)]
with Pool(bs) as p:
poolData = p.starmap(get_PER_sequence, data_per)
avgPER += sum([x for x in poolData])
nItems += len(poolData)
bar.finish()
avgPER /= nItems
print(f"Average PER {avgPER}")
return avgPER
get_per(data_loader_val,cpc_model,phone_classifier)
###Output
0% | |
###Markdown
Exercise 3 : Character error rate (CER). **The Character Error Rate (CER) is an evaluation metric similar to the PER but with characters instead of phonemes.** Using the following data, run the functions you defined previously to estimate the CER of your model after fine-tuning.
###Code
# Load a dataset labelled with the letters of each sequence.
# %cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_cer = '../MaseseSpeech/train-clean/'
path_val_data_cer = '../MaseseSpeech/dev-clean'
path_letter_data_cer = '../MaseseSpeech/converted_aligned_phones.txt'
BATCH_SIZE=8
letters_labels, N_LETTERS = parseSeqLabels(path_letter_data_cer)
data_train_cer, _ = findAllSeqs(path_train_data_cer, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_cer, data_train_cer, letters_labels)
data_val_cer, _ = findAllSeqs(path_val_data_cer, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_cer, data_val_cer, letters_labels)
# The data loader will generate a tuple of tensors data, labels for each batch
# data : size N x T1 x 1 : the audio sequence
# label : size N x T2 the sequence of letters corresponding to the audio data
# IMPORTANT NOTE: just like the PER the CER is computed with non-aligned phone data.
data_loader_train_letters = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_loader_val_letters = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
character_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_LETTERS).to(device)
parameters = list(character_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(character_classifier.parameters()), lr=LEARNING_RATE)
loss_ctc = torch.nn.CTCLoss()
run_ctc(cpc_model,character_classifier,loss_ctc,data_loader_train_letters,data_loader_val_letters,optimizer_frozen,n_epoch=10)
get_per(data_loader_val_letters,cpc_model,character_classifier)
###Output
0% | |
|
code/Significant Features in BTDs-CSDB.ipynb | ###Markdown
Importing libraries
###Code
from sklearn.decomposition import PCA
import numpy as np
import pandas as pd
import h5py
###Output
_____no_output_____
###Markdown
Loading the previously computed BTD
###Code
new_file = h5py.File("../data/csdb_blabeled_reinhold_features/csdb_reinhold_features_correct_btd_complete.h5", "r")
btd_complete_scaled = new_file["btd_complete_scaled"][:]
new_file.close()
###Output
_____no_output_____
###Markdown
Initialize and fit PCA
###Code
pca = PCA(n_components=10)
pc = pca.fit_transform(btd_complete_scaled)
###Output
_____no_output_____
###Markdown
Compute eigenvalues, the contribution of each eigenvalue in percentage and the absolute eigenvectors
###Code
eigenvalues = pca.explained_variance_
eigenvalues_ratio = pca.explained_variance_ratio_
eigenvectors_absolute = abs(pca.components_)
###Output
_____no_output_____
###Markdown
Calculate indices
###Code
ind1 = np.argpartition(eigenvectors_absolute[0,:], -100)[-100:]
largest_eigenvector_1 = eigenvectors_absolute[0, ind1]
ind2 = np.argpartition(eigenvectors_absolute[1,:], -100)[-100:]
largest_eigenvector_2 = eigenvectors_absolute[1, ind2]
ind3 = np.argpartition(eigenvectors_absolute[2,:], -100)[-100:]
largest_eigenvector_3 = eigenvectors_absolute[2, ind3]
###Output
_____no_output_____
###Markdown
From linear indices to upper-triangular matrix indices
###Code
r,c = np.triu_indices(142,1)
eigen1_r = r[ind1]
eigen1_c = c[ind1]
eigen2_r = r[ind2]
eigen2_c = c[ind2]
eigen3_r = r[ind3]
eigen3_c = c[ind3]
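# Tiny sanity check of the linear-index to (row, col) mapping on a 4x4 matrix:
print(np.triu_indices(4, 1))  # (array([0, 0, 0, 1, 1, 2]), array([1, 2, 3, 2, 3, 3]))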
###Output
_____no_output_____
###Markdown
Define a function to format the window index pairs
###Code
def BTD_list(a1, a2):
return np.array2string(a1)+"-"+np.array2string(a2)
###Output
_____no_output_____
###Markdown
Format the window index pairs
###Code
list1 = list(map(BTD_list, eigen1_r, eigen1_c))
list2 = list(map(BTD_list, eigen2_r, eigen2_c))
list3 = list(map(BTD_list, eigen3_r, eigen3_c))
###Output
_____no_output_____
###Markdown
Represent the data using a Pandas DataFrame
###Code
d = {'eigenvector 1': list1, 'eigenvector 2': list2, 'eigenvector 3': list3}
df = pd.DataFrame(data=d)
###Output
_____no_output_____
###Markdown
Print table
###Code
print(df)
###Output
eigenvector 1 eigenvector 2 eigenvector 3
0 17-112 76-126 62-137
1 55-107 75-109 108-123
2 45-108 80-126 108-124
3 29-111 81-121 108-115
4 27-111 75-116 108-122
5 21-112 76-116 108-125
6 46-108 78-121 108-113
7 22-111 81-126 13-62
8 50-108 76-109 107-113
9 21-111 83-126 9-97
10 28-111 83-125 60-137
11 2-111 83-124 114-120
12 23-111 83-123 107-115
13 31-111 83-122 10-113
14 17-111 83-121 10-122
15 20-111 83-120 9-132
16 39-117 83-119 9-131
17 24-111 83-118 10-123
18 49-108 83-116 10-114
19 54-107 83-115 9-107
20 54-117 83-127 10-124
21 22-112 83-114 10-115
22 32-111 83-128 9-86
23 25-111 83-113 16-62
24 44-108 83-129 16-60
25 48-108 83-109 10-121
26 30-111 75-113 10-109
27 26-111 83-130 9-108
28 35-112 75-114 10-120
29 47-112 75-115 13-60
.. ... ... ...
70 23-112 78-120 9-126
71 36-112 78-122 112-134
72 51-107 78-123 107-129
73 58-112 78-124 107-128
74 8-112 78-125 107-127
75 57-112 82-114 107-126
76 45-107 82-113 9-84
77 42-117 82-109 112-133
78 53-107 82-108 107-125
79 40-112 81-125 9-87
80 52-107 81-124 107-124
81 117-140 81-123 10-87
82 32-112 81-122 107-123
83 38-112 80-113 14-60
84 107-138 80-114 107-122
85 44-107 80-115 112-132
86 56-112 80-118 112-131
87 49-117 80-119 107-121
88 43-117 80-120 10-91
89 117-138 80-121 9-125
90 46-117 80-122 10-92
91 33-112 80-123 107-120
92 45-112 80-124 107-119
93 47-117 80-125 107-118
94 39-112 81-120 114-124
95 40-107 81-119 114-123
96 112-140 81-113 107-114
97 112-139 81-114 9-124
98 51-112 81-115 9-123
99 20-112 81-118 112-136
[100 rows x 3 columns]
###Markdown
Variance explained by the first three eigenvectors
###Code
for i in range(0, 3):
print("Eigenvector %d explains: " %(i+1) + "%.2f" %(eigenvalues_ratio[i]*100) + "% of total variance\n")
###Output
Eigenvector 1 explains: 45.17% of total variance
Eigenvector 2 explains: 22.45% of total variance
Eigenvector 3 explains: 14.18% of total variance
|
Material/Solucionario/Dirigidas/01/PD_ 01_Ecuaciones_Diferencial-Solución.ipynb |
------ **GUIDED PRACTICE No. 1** **Differential Equations I** **Applications in Python** --- Purpose: > Introduce the Python language to the students of the Dynamic Macroeconomics course as an additional tool alongside their computing skills. Author:> Joel Nestor Vicencio Damian------ **First: install the following packages**---
###Code
# PRELIMINARY PACKAGES
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
###Output
_____no_output_____
###Markdown
--- Outline: 1. Define the differential equation > 1.1 Equation 2. Setting values > 2.1 Set the initial conditions > 2.2 Time instants > 2.3 Solve the differential equation > 2.4 Show the first values 3. Plots > 3.1 Labels > 3.2 Grid > 3.3 Size > 3.4 Convergence line > 3.5 Axis adjustment > 3.6 Curve > 3.7 Show the plot ---- **Exercise 2.1 a**: Solve the following differential equation $\frac{dy}{dt}+4y=-20\qquad y(0)=10$
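For reference, the closed-form solution here is $y(t) = -5 + 15e^{-4t}$: the exponential term dies out, so $y(t)$ converges to the steady state $-5$.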
###Code
#========================================================================
# PART 1: Define the differential equation dy/dt
#========================================================================
### 1.1 Equation
def model(y,t):
k = 4
dydt = -20-k*y
return dydt
#========================================================================
# PART 2: Setting values
#========================================================================
### 2.1 Set the initial conditions
y0 = [10]
### 2.2 Time instants
t = np.linspace(0,10,200)
### 2.3 Solve the differential equation
y = odeint(model,y0,t)
### 2.4 Show the first few values
for i in range(0,7):
print(f'y({t[i]}) = {y[i,0]}')
#========================================================================
# PART 3: Plot results
#========================================================================
### 3.1 Labels
plt.title("Convergence", fontsize=20)
plt.xlabel('time', fontsize=20)
plt.ylabel('y(t)', fontsize=20)
### 3.2 Grid
plt.grid()
### 3.3 Size
plt.rcParams["figure.figsize"] = (20, 15)
### 3.4 Convergence line
plt.axhline(y=-5, color='r', linestyle='-')
### 3.5 Axis adjustment
plt.ylim(-6,11)
plt.xlim(right=5)
### 3.6 Curve
plt.plot(t,y)
### 3.7 Show the plot
plt.show()
###Output
y(0.0) = 10.0
y(0.05025125628140704) = 7.268624861452221
y(0.10050251256281408) = 5.0346104858246195
y(0.15075376884422112) = 3.207391479794029
y(0.20100502512562815) = 1.7128937688720767
y(0.2512562814070352) = 0.49053167051351254
y(0.30150753768844224) = -0.509248460523577
###Markdown
**Exercise 2.1 b**: Solve the following differential equation $\frac{dy}{dt}=3y \qquad y(0)=2$
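For reference, the closed-form solution is $y(t) = 2e^{3t}$, which grows without bound: the path diverges.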
###Code
#========================================================================
# PART 1: Define the differential equation dy/dt
#========================================================================
### 1.1 Equation
def model1(y,t):
k = 3
dydt = k*y
return dydt
#========================================================================
# PART 2: Setting values
#========================================================================
### 2.1 Set the initial conditions
y0 = [2]
### 2.2 Time instants
t = np.linspace(0,10,200)
### 2.3 Solve the differential equation
y = odeint(model1,y0,t)
### 2.4 Show the first few values
for i in range(0,7):
print(f'y({t[i]}) = {y[i,0]}')
#========================================================================
# PART 3: Plot results
#========================================================================
### 3.1 Labels
plt.title("Divergence", fontsize=20)
plt.xlabel('time', fontsize=20)
plt.ylabel('y(t)', fontsize=20)
### 3.2 Grid
plt.grid()
### 3.3 Size
plt.rcParams["figure.figsize"] = (15, 10)
### 3.4 Reference line (initial value)
plt.axhline(y=2, color='r', linestyle='-')
### 3.5 Axis adjustment
plt.ylim(top=10)
plt.xlim(right=0.5)
### 3.6 Curve
plt.plot(t,y)
### 3.7 Show the plot
plt.show()
###Output
y(0.0) = 2.0
y(0.05025125628140704) = 2.325420632387152
y(0.10050251256281408) = 2.7037906151976085
y(0.15075376884422112) = 3.143725253512361
y(0.20100502512562815) = 3.655241832200581
y(0.2512562814070352) = 4.249987567671555
y(0.30150753768844224) = 4.941504357552237
###Markdown
**Exercise 2.1 c**: Solve the following differential equation $\frac{dy}{dt}+3y=6t; \qquad y(0)=\frac{1}{2}$
###Code
#===============================================================================
# PART 1: Define the differential equation dy/dt
#===============================================================================
mt = np.array([0,50])
sample_times = np.arange(len(mt))
tfunc = interp1d(sample_times, mt, bounds_error=False, fill_value="extrapolate")
# Test ODE function
def test(y, t):
dydt = 6*tfunc(t)-3*y
return dydt
#===============================================================================
# PART 2: Setting values
#===============================================================================
### 2.1 Set the initial conditions
y0 = [1/3]
### 2.2 Time instants
tspan = np.linspace(0,2,200)
### 2.3 Solve the differential equation
yt = odeint(test, y0, tspan)
### 2.4 Show the first few values
for i in range(0,7):
print(f'y({tspan[i]}) = {yt[i,0]}')
#===============================================================================
# PART 3: Plot results
#===============================================================================
### 3.1 Labels
plt.title("Divergence", fontsize=20)
plt.xlabel('time', fontsize=20)
plt.ylabel('y(t)', fontsize=20)
### 3.2 Grid
plt.grid()
### 3.3 Size
### 3.4 Reference line (initial value)
plt.axhline(y=1/3, color='r', linestyle='-')
### 3.5 Axis adjustment
plt.ylim(top=3)
plt.xlim(0,0.2)
### 3.6 Curve
plt.plot(tspan, yt, 'black')
### 3.7 Show the plot
plt.show()
###Output
y(0.0) = 2.0
y(0.05025125628140704) = 2.325420632387152
y(0.10050251256281408) = 2.7037906151976085
y(0.15075376884422112) = 3.143725253512361
y(0.20100502512562815) = 3.655241832200581
y(0.2512562814070352) = 4.249987567671555
y(0.30150753768844224) = 4.941504357552237
###Markdown
**Exercise 2.2 a**: Solve the following differential equation $y''(t)+y'(t)+\frac{1}{4}y(t)=9;\quad y(0)=30\quad$ and $\quad y'(0)=15$
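For reference, the characteristic root $r = -\frac{1}{2}$ is repeated, so $y(t) = 36 + (12t - 6)e^{-t/2}$ and the path converges to the steady state $36$.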
###Code
#========================================================================
# PART 1: Define the ODE system (u = [y, y'])
#========================================================================
### 1.1 Equation
def f(u,t):
return (u[1],-u[1]-0.25*u[0]+9)
#========================================================================
# PART 2: Setting values
#========================================================================
### 2.1 Set the initial conditions
y0=[30,15]
### 2.2 Time instants
ts=np.linspace(0,40,100)
### 2.3 Solve the differential equation
us=odeint(f,y0,ts)
ys=us[:,0]
### 2.4 Show the first values
for i in range(0,31):
print(f'y({i}) = {ys[i]}')
#========================================================================
# PART 3: Plot results
#========================================================================
### 3.1 Labels
plt.title("Convergence", fontsize=20)
plt.xlabel('time', fontsize=20)
plt.ylabel('y(t)', fontsize=20)
### 3.2 Grid
plt.grid()
### 3.3 Size
plt.rcParams["figure.figsize"] = (10, 5)
### 3.4 Convergence line
plt.axhline(y=36, color='r', linestyle='-')
### 3.5 Curve
plt.plot(ts,ys,'-')
### 3.6 Show the plot
plt.show()
###Output
y(0) = 30.0
y(1) = 35.059121649818714
y(2) = 38.468159994888936
y(3) = 40.66150727259939
y(4) = 41.96984805032897
y(5) = 42.643565879077656
y(6) = 42.87105574659203
y(7) = 42.79302426823721
y(8) = 42.51363245561706
y(9) = 42.109157532550306
y(10) = 41.63470887605331
y(11) = 41.12941975098076
y(12) = 40.62044811561847
y(13) = 40.126048325465256
y(14) = 39.65791971568936
y(15) = 39.222993924305094
y(16) = 38.82478765347843
y(17) = 38.464420057785034
y(18) = 38.14137223294552
y(19) = 37.85404899671169
y(20) = 37.60019003790939
y(21) = 37.37716659758958
y(22) = 37.182191912824514
y(23) = 37.01246696506547
y(24) = 36.86527822953127
y(25) = 36.73806003342267
y(26) = 36.628431272872675
y(27) = 36.53421373018931
y(28) = 36.453437529785
y(29) = 36.38433779588138
y(30) = 36.32534556973636
###Markdown
**Exercise 2.2 b**: Solve the following differential equation $y''(t)-4y'(t)-5y(t)=35;\quad y(0)=5\quad$ and $\quad y'(0)=6$
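For reference, the characteristic roots are $r = 5$ and $r = -1$, so $y(t) = -7 + 3e^{5t} + 9e^{-t}$: the $e^{5t}$ term makes the path diverge.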
###Code
#========================================================================
# PART 1: Define the ODE system (u = [y, y'])
#========================================================================
### 1.1 Equation
def f(u,t):
return (u[1],4*u[1]+5*u[0]+35)
#========================================================================
# PART 2: Setting values
#========================================================================
### 2.1 Set the initial conditions
y0=[5,6]
### 2.2 Time instants
ts=np.linspace(0,10,200)
### 2.3 Solve the differential equation
us=odeint(f,y0,ts)
ys=us[:,0]
### 2.4 Show the first values
for i in range(0,10):
print(f'y({i}) = {ys[i]}')
#========================================================================
# PART 3: Plot results
#========================================================================
### 3.1 Labels
plt.title("Divergence", fontsize=20)
plt.xlabel('time', fontsize=20)
plt.ylabel('y(t)', fontsize=20)
### 3.2 Grid
plt.grid()
### 3.3 Size
plt.rcParams["figure.figsize"] = (10, 5)
### 3.4 Reference line (initial value)
plt.axhline(y=5, color='r', linestyle='-')
### 3.5 Axis adjustment
plt.ylim(4,100)
plt.xlim(right=0.75)
### 3.6 Curve
plt.plot(ts,ys,'-')
### 3.7 Show the plot
plt.show()
###Output
y(0) = 5.0
y(1) = 5.4158326442668
y(2) = 6.098052544272336
y(3) = 7.1155162084924495
y(4) = 8.557102721533905
y(5) = 10.53741584601707
y(6) = 13.204115366395035
y(7) = 16.74734322868683
y(8) = 21.41184381276948
y(9) = 27.512545674224945
###Markdown
**Exercise 2.2 c**: Solve the following differential equation $y''(t)-\frac{1}{2}y'(t)=13;\quad y(0)=17\quad$ and $\quad y'(0)=-19$
###Code
#========================================================================
# PART 1: Define the ODE system (u = [y, y'])
#========================================================================
### 1.1 Equation
def f(u,t):
return (u[1],0.5*u[1]+0*u[0]+13)
#========================================================================
# PART 2: Setting values
#========================================================================
### 2.1 Set the initial conditions
y0=[17,-18.5]
### 2.2 Time instants
ts=np.linspace(0,10,200)
### 2.3 Solve the differential equation
us=odeint(f,y0,ts)
ys=us[:,0]
### 2.4 Show the first values
for i in range(0,10):
print(f'y({i}) = {ys[i]}')
#========================================================================
# PART 3: Plot results
#========================================================================
### 3.1 Labels
plt.title("Divergence", fontsize=20)
plt.xlabel('time', fontsize=20)
plt.ylabel('y(t)', fontsize=20)
### 3.2 Grid
plt.grid()
### 3.3 Size
plt.rcParams["figure.figsize"] = (10, 5)
### 3.4 Reference line
plt.axhline(y=5, color='r', linestyle='-')
### 3.5 Axis adjustment
plt.ylim(top=10)
plt.xlim(right=20)
### 3.6 Curve
plt.plot(ts,ys,'-')
### 3.7 Show the plot
plt.show()
###Output
y(0) = 17.0
y(1) = 16.07512624888705
y(2) = 15.159963578666787
y(3) = 14.254758898020194
y(4) = 13.359765540148587
y(5) = 12.47524333976715
y(6) = 11.601458788081242
y(7) = 10.738685051832334
y(8) = 9.887202291998358
y(9) = 9.047297837329198
###Markdown
**Exercise 2.2 d**: Solve the following differential equation $y''(t)+2y'(t)+10y(t)=80;\quad y(0)=10\quad$ and $\quad y'(0)=13$
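As a quick check on the expected behaviour: the equilibrium follows from setting $y'' = y' = 0$, which gives $10y = 80$, i.e. $y^* = 8$ (the red reference line in the plot below). The characteristic roots $r = -1 \pm 3i$ of $r^2 + 2r + 10 = 0$ have negative real part, so the solution is a damped oscillation converging to 8.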
###Code
#========================================================================
# PART 1: Define the differential equation dy/dt
#========================================================================
### 1.1 Equation
def f(u,t):
    return (u[1],-2*u[1]-10*u[0]+80)
#========================================================================
# PART 2: Set the values
#========================================================================
### 2.1 Set the initial conditions
y0=[10,13]
### 2.2 Time points
ts=np.linspace(0,10,200)
### 2.3 Solve the differential equation
us=odeint(f,y0,ts)
ys=us[:,0]
### 2.4 Print the first 10 values
for i in range(0,10):
    print(f'y({i}) = {ys[i]}')
#========================================================================
# PART 3: Plot the results
#========================================================================
### 3.1 Labels
plt.title("Convergence", fontsize=30)
plt.xlabel('time', fontsize=30)
plt.ylabel('y(t)', fontsize=30)
### 3.2 Grid
plt.grid()
### 3.3 Figure size
plt.rcParams["figure.figsize"] = (30, 10)
### 3.4 Convergence line
plt.axhline(y=8, color='r', linestyle='-')
### 3.5 Axis limits
plt.ylim(6,12)
plt.xlim(right=10)
### 3.6 Curve
plt.plot(ts,ys,'-')
### 3.7 Show the plot
plt.show()
###Output
y(0) = 10.0
y(1) = 10.59452379581274
y(2) = 11.070000054491716
y(3) = 11.426412684113867
y(4) = 11.666601714818173
y(5) = 11.795921772324656
y(6) = 11.821873528758767
y(7) = 11.753718608042565
y(8) = 11.60208916412115
y(9) = 11.378601942605314
|
notebooks/03.Layout-and-Styling/03.04-OPTIONAL-container-exercises.ipynb | ###Markdown
< [*OPTIONAL* Predefined widget styles](03.03-OPTIONAL-widget-specific-styling.ipynb) | [Contents](03.00-layout-and-styling-overview.ipynb) *OPTIONAL* Widget Layout ExercisesEarlier notebooks listed the container widgets in ipywidgets and how the widgets contained in them are laid out. As a reminder, the contents of the container are its `children`, a tuple of widgets. The distribution and alignment of the children are determined by the flex-box properties of the `layout` described in [Widget Styling](03.01-widget-layout-and-styling.ipynb#The-Flexbox-Layout).This set of exercises leads up to a password generator widget that will be completed after discussing widget events. The generator allows the user to set the length of the password, choose a set of special characters to include, and decide whether to include any digits. **Eventually** the widget will look like this:**At the end of this notebook** we will have laid out the controls shown below. We'll come back to the generator in later notebooks after we have discussed widget events.
###Code
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
1. Alignment of childrenThe cell below defines three children that are different sizes and lays them out in a horizontal box. Adjust the two layout properties in the code cell below so that the displayed hbox matches the image below.You may need to look back at the [styling notebook](03.01-widget-layout-and-styling.ipynb).
###Code
button = widgets.Button(description='Click me')
text = widgets.Textarea(description='Words here:', rows=10)
valid = widgets.Valid(description='check', value=True)
container = widgets.HBox()
container.children = [button, text, valid]
container.layout.width = '100%'
# The border is set here just to make it easier to see the position of
# the children with respect to the box.
container.layout.border = '2px solid grey'
container.layout.height = '250px'
# ------- Adjust these properties -------
container.layout.justify_content = 'flex-start'
container.layout.align_items = 'flex-start'
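# (For reference: justify_content distributes children along the main axis and
# accepts 'flex-start', 'flex-end', 'center', 'space-between', 'space-around';
# align_items places them on the cross axis and accepts 'flex-start',
# 'flex-end', 'center', 'stretch'. The target image is not reproduced here,
# so the required pair is left as the exercise's starting values above.)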
# ---------------------------------------
container
###Output
_____no_output_____
###Markdown
2. Layout from scratchThree child widgets are defined in the cell below. Compose them into a vertical box laid out as shown in this image: You should be able to accomplish that layout by setting the appropriate `layout` attribute(s) on `vbox` (don't forget to add the children first).A box is drawn around the container to make it easier to see where the children are placed
###Code
# %load solutions/container-exercises/password-ex2.py
numbers = widgets.Checkbox(description='Include numbers in password')
words = widgets.Label('The generated password is:')
toggles = widgets.ToggleButtons(description='Type of special characters to include',
options=[',./;[', '!@#~%', '^&*()'],
style={'description_width': 'initial'})
vbox = widgets.VBox()
# The border is set here just to make it easier to see the position of
# the children with respect to the box.
vbox.layout.border = '2px solid grey'
vbox.layout.height = '250px'
# ------- Insert your layout settings here -------
# Don't forget to add the children...
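# One possible completion -- an assumption, since the reference solution in
# solutions/container-exercises/password-ex2.py is not shown here. It stacks
# the three controls vertically with the content left-aligned:
vbox.layout.align_items = 'flex-start'
vbox.children = [words, toggles, numbers]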
vbox
###Output
_____no_output_____
###Markdown
3. Improve the look of the childrenThe "special character" toggle buttons would really look better if the label was above the buttons, and the checkbox would look better without the whitespace to its left. i. A better special character controlIn the cell below, construct a widget with the text "Type of special characters to include" above the `ToggleButtons`, with all of the content left-aligned, and the toggle buttons slightly indented. Use the `margin` property of the layout to indent.It should look like this when you are done:This is the second time we've needed a vbox with all the items left-aligned, so let's start out with a `Layout` widget that defines that format
###Code
# %load solutions/container-exercises/password-ex3i.py
vbox_left_layout = widgets.Layout(align_items='flex-start')
label = widgets.Label('Choose special characters to include')
toggles = widgets.ToggleButtons(description='',
options=[',./;[', '!@#~%', '^&*()'],
style={'description_width': 'initial'})
# Set the margins to control the indentation.
# The order is top right bottom left
toggles.layout.margin = '0 0 0 20px'
better_toggles = widgets.VBox([label, toggles])
better_toggles.layout = vbox_left_layout
better_toggles
###Output
_____no_output_____
###Markdown
ii. Checkbox whitespace issuesThe checkbox in the example above has unnecessary whitespace to the left of the box. Setting the `description_width` to `initial` removes it, so do that below.
###Code
# %load solutions/container-exercises/password-ex3ii.py
numbers = widgets.Checkbox(description='Include numbers in password',
style={'description_width': 'initial'})
numbers
###Output
_____no_output_____
###Markdown
4. Put the pieces togetherUse your improved toggles and number checkbox to re-do the password generator interface from exercise 2, above.When you are done it should look like this:
###Code
# %load solutions/container-exercises/password-ex4.py
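# A sketch of one possible assembly -- an assumption, since the reference
# solution in solutions/container-exercises/password-ex4.py is not shown
# here. It reuses better_toggles, numbers, and vbox_left_layout from the
# earlier exercises:
password_ui = widgets.VBox(
    [widgets.Label('The generated password is:'), better_toggles, numbers],
    layout=vbox_left_layout)
password_ui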
###Output
_____no_output_____ |
workshops/kfp-caip-sklearn/lab-01-caip-containers/exercises/lab-01.ipynb | ###Markdown
Using custom containers with AI Platform Training**Learning Objectives:**1. Learn how to create a train and a validation split with Big Query1. Learn how to wrap a machine learning model into a Docker container and train it on CAIP1. Learn how to use the hyperparameter tuning engine on GCP to find the best hyperparameters1. Learn how to deploy a trained machine learning model to GCP as a REST API and query it.In this lab, you develop, package as a docker image, and run on **AI Platform Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on **Covertype Data Set** from UCI Machine Learning Repository.The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **AI Platform** hyperparameter tuning.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix.
###Code
!gsutil ls
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://hostedkfp-default-l2iv13wnek'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS storage. The queries below bucket each row deterministically via MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10), so keeping buckets 1-4 gives a 40% training sample and a single bucket gives a 10% validation sample. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split ExerciseIn the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`.In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
_____no_output_____
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to the AI Platform hyperparameter tuning service. ExerciseComplete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
    if hptune:
        X_validation = df_validation.drop('Cover_Type', axis=1)
        y_validation = df_validation['Cover_Type']
        accuracy = pipeline.score(X_validation, y_validation)
        print('Model accuracy: {}'.format(accuracy))
        # Report the metric to the AI Platform hyperparameter tuning service
        hpt = hypertune.HyperTune()
        hpt.report_hyperparameter_tuning_metric(
            hyperparameter_metric_tag='accuracy',
            metric_value=accuracy)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the containerat `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .

ENTRYPOINT ["python", "train.py"]
###Output
_____no_output_____
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe below file configures AI Platform hypertuning to run up to 4 trials on up to 4 nodes and to choose from two discrete values of `max_iter` and the linear range between 0.00001 and 0.001 for `alpha`. ExerciseComplete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries the following parameter values* `max_iter` the two values 200 and 300* `alpha` a linear range of values between 0.00001 and 0.001
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
    # One possible completion matching the exercise requirements above:
    - parameterName: max_iter
      type: DISCRETE
      discreteValues: [200, 300]
    - parameterName: alpha
      type: DOUBLE
      minValue: 0.00001
      maxValue: 0.001
      scaleType: UNIT_LINEAR_SCALE
###Output
_____no_output_____
###Markdown
Start the hyperparameter tuning job. ExerciseUse the `gcloud` command to start the hyperparameter tuning job.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
_____no_output_____
###Markdown
Monitor the job.You can monitor the job using GCP console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
Retrieve HP-tuning results. After the job completes you can review the results using GCP Console or programmatically by calling the AI Platform Training REST end-point.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
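# Note: the REST response returns hyperparameter values as strings; train.py's
# fire-based CLI converts them back to numbers when the retraining job parses
# its arguments.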
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
Examine the training outputThe training script saved the trained model as the 'model.pkl' in the `JOB_DIR` folder on GCS.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to AI Platform Prediction Create a model resource ExerciseComplete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`:
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
###Output
_____no_output_____
###Markdown
Create a model version Exercise Complete the `gcloud` command below to create a version of the model:
###Code
model_version = 'v01'
!gcloud ai-platform versions create $model_version \
--model=$model_name \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON-formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model ExerciseUsing the `gcloud` command send the data in `$input_file` to your model deployed as a REST API:
###Code
!gcloud ai-platform predict \
--model=$model_name \
--version=$model_version \
--json-instances=$input_file
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix.
###Code
!gsutil ls
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://hostedkfp-default-l2iv13wnek'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS storage Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split ExerciseIn the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`.In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
_____no_output_____
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to the AI Platform hyperparameter tuning service. ExerciseComplete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
    if hptune:
        X_validation = df_validation.drop('Cover_Type', axis=1)
        y_validation = df_validation['Cover_Type']
        accuracy = pipeline.score(X_validation, y_validation)
        print('Model accuracy: {}'.format(accuracy))
        # Report the metric to the AI Platform hyperparameter tuning service
        hpt = hypertune.HyperTune()
        hpt.report_hyperparameter_tuning_metric(
            hyperparameter_metric_tag='accuracy',
            metric_value=accuracy)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the containerat `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .

ENTRYPOINT ["python", "train.py"]
###Output
_____no_output_____
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe below file configures AI Platform hypertuning to run up to 4 trials on up to 4 nodes and to choose from two discrete values of `max_iter` and the linear range between 0.00001 and 0.001 for `alpha`. ExerciseComplete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries the following parameter values* `max_iter` the two values 200 and 300* `alpha` a linear range of values between 0.00001 and 0.001
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
    # One possible completion matching the exercise requirements above:
    - parameterName: max_iter
      type: DISCRETE
      discreteValues: [200, 300]
    - parameterName: alpha
      type: DOUBLE
      minValue: 0.00001
      maxValue: 0.001
      scaleType: UNIT_LINEAR_SCALE
###Output
_____no_output_____
###Markdown
Start the hyperparameter tuning job. ExerciseUse the `gcloud` command to start the hyperparameter tuning job.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
_____no_output_____
###Markdown
Monitor the job.You can monitor the job using GCP console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
Retrieve HP-tuning results. After the job completes you can review the results using GCP Console or programmatically by calling the AI Platform Training REST end-point.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
Examine the training outputThe training script saved the trained model as the 'model.pkl' in the `JOB_DIR` folder on GCS.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to AI Platform Prediction Create a model resource ExerciseComplete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`:
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
filter = 'name:{}'.format(model_name)
models = !(gcloud ai-platform models list --filter={filter} --format='value(name)')
if not models:
    !gcloud ai-platform models create $model_name \
    --regions=$REGION \
    --labels=$labels
else:
print("Model: {} already exists.".format(models[0]))
###Output
_____no_output_____
###Markdown
Create a model version Exercise Complete the `gcloud` command below to create a version of the model:
###Code
model_version = 'v01'
filter = 'name:{}'.format(model_version)
versions = !(gcloud ai-platform versions list --model={model_name} --format='value(name)' --filter={filter})
if not versions:
    !gcloud ai-platform versions create $model_version \
    --model=$model_name \
    --origin=$JOB_DIR \
    --runtime-version=1.15 \
    --framework=scikit-learn \
    --python-version=3.7
else:
print("Model version: {} already exists.".format(versions[0]))
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON-formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model ExerciseUsing the `gcloud` command send the data in `$input_file` to your model deployed as a REST API:
###Code
!gcloud ai-platform predict \
--model=$model_name \
--version=$model_version \
--json-instances=$input_file
###Output
_____no_output_____
###Markdown
Using custom containers with AI Platform Training**Learning Objectives:**1. Learn how to create a train and a validation split with Big Query1. Learn how to wrap a machine learning model into a Docker container and train it on CAIP1. Learn how to use the hyperparameter tuning engine on GCP to find the best hyperparameters1. Learn how to deploy a trained machine learning model to GCP as a REST API and query it.In this lab, you develop, package as a docker image, and run on **AI Platform Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on **Covertype Data Set** from UCI Machine Learning Repository.The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **AI Platform** hyperparameter tuning.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix.
###Code
!gsutil ls
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://artifacts.qwiklabs-gcp-02-e6e653a986e9.appspot.com'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS storage Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r10ffde7539abd37e_00000176934333f3_1 ... (0s) Current status: DONE
###Markdown
Create a validation split ExerciseIn the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`.In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(9836, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6973363155754372
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to the AI Platform hyperparameter tuning service. ExerciseComplete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Overwriting training_app/train.py
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the containerat `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Writing training_app/Dockerfile
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 3.2 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-02-e6e653a986e9_cloudbuild/source/1608788583.211579-57b2dfc4133a49ffa04f547f30bf9fa5.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-02-e6e653a986e9/builds/b9183832-f73f-4c7b-ae2c-406055218124].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/b9183832-f73f-4c7b-ae2c-406055218124?project=680473262844].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "b9183832-f73f-4c7b-ae2c-406055218124"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-02-e6e653a986e9_cloudbuild/source/1608788583.211579-57b2dfc4133a49ffa04f547f30bf9fa5.tgz#1608788583939995
Copying gs://qwiklabs-gcp-02-e6e653a986e9_cloudbuild/source/1608788583.211579-57b2dfc4133a49ffa04f547f30bf9fa5.tgz#1608788583939995...
/ [1 files][ 1.5 KiB/ 1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 5.632kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
171857c49d0f: Pulling fs layer
419640447d26: Pulling fs layer
61e52f862619: Pulling fs layer
20b22764011e: Pulling fs layer
00244e2c5db1: Pulling fs layer
07e452976526: Pulling fs layer
9889ff203efe: Pulling fs layer
05dad74dd489: Pulling fs layer
abfc11aef694: Pulling fs layer
00e45e47c0d1: Pulling fs layer
5274e7716976: Pulling fs layer
0dcd37ccffa2: Pulling fs layer
8b7d4903c042: Pulling fs layer
c65868d8f6c7: Pulling fs layer
8497304a9c74: Pulling fs layer
25b00734a98b: Pulling fs layer
0e452e6fa8ed: Pulling fs layer
20b22764011e: Waiting
00244e2c5db1: Waiting
07e452976526: Waiting
9889ff203efe: Waiting
05dad74dd489: Waiting
abfc11aef694: Waiting
00e45e47c0d1: Waiting
5274e7716976: Waiting
0dcd37ccffa2: Waiting
c65868d8f6c7: Waiting
8497304a9c74: Waiting
25b00734a98b: Waiting
0e452e6fa8ed: Waiting
8b7d4903c042: Waiting
61e52f862619: Verifying Checksum
61e52f862619: Download complete
419640447d26: Verifying Checksum
419640447d26: Download complete
171857c49d0f: Verifying Checksum
171857c49d0f: Download complete
07e452976526: Verifying Checksum
07e452976526: Download complete
00244e2c5db1: Verifying Checksum
00244e2c5db1: Download complete
05dad74dd489: Verifying Checksum
05dad74dd489: Download complete
9889ff203efe: Verifying Checksum
9889ff203efe: Download complete
abfc11aef694: Verifying Checksum
abfc11aef694: Download complete
00e45e47c0d1: Verifying Checksum
00e45e47c0d1: Download complete
5274e7716976: Verifying Checksum
5274e7716976: Download complete
0dcd37ccffa2: Verifying Checksum
0dcd37ccffa2: Download complete
8b7d4903c042: Verifying Checksum
8b7d4903c042: Download complete
c65868d8f6c7: Verifying Checksum
c65868d8f6c7: Download complete
8497304a9c74: Verifying Checksum
8497304a9c74: Download complete
0e452e6fa8ed: Verifying Checksum
0e452e6fa8ed: Download complete
20b22764011e: Verifying Checksum
20b22764011e: Download complete
171857c49d0f: Pull complete
419640447d26: Pull complete
61e52f862619: Pull complete
25b00734a98b: Verifying Checksum
25b00734a98b: Download complete
20b22764011e: Pull complete
00244e2c5db1: Pull complete
07e452976526: Pull complete
9889ff203efe: Pull complete
05dad74dd489: Pull complete
abfc11aef694: Pull complete
00e45e47c0d1: Pull complete
5274e7716976: Pull complete
0dcd37ccffa2: Pull complete
8b7d4903c042: Pull complete
c65868d8f6c7: Pull complete
8497304a9c74: Pull complete
25b00734a98b: Pull complete
0e452e6fa8ed: Pull complete
Digest: sha256:f6c7ab6b9004322178cbccf7becb15a27fd3e2240e7335f5a51f8ff1861fd733
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 2e14efcab90e
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 0c6e8a063d5e
Collecting fire
Downloading fire-0.3.1.tar.gz (81 kB)
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Requirement already satisfied, skipping upgrade: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.15.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied, skipping upgrade: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.18.5)
Requirement already satisfied, skipping upgrade: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.5.3)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied, skipping upgrade: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2020.1)
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=17b9dce79199ef588b791bbacff87a9c1ca733450636ee1ccb7800c23ff970d0
Stored in directory: /root/.cache/pip/wheels/95/38/e1/8b62337a8ecf5728bdc1017e828f253f7a9cf25db999861bec
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=88240a2e3178fe3d5057f7bb878ab3ef7520e2e986948bf16473e0ff85cc58f1
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=81ce9ebfbd63ab52cdd487216e15b25413d96e9982e4f4fa65fc5f496d2f51cb
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, fire, cloudml-hypertune, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.23.2
Uninstalling scikit-learn-0.23.2:
Successfully uninstalled scikit-learn-0.23.2
Attempting uninstall: pandas
Found existing installation: pandas 1.1.4
Uninstalling pandas-1.1.4:
Successfully uninstalled pandas-1.1.4
[91mERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
visions 0.6.4 requires pandas>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires visions[type_image_path]==0.4.4, but you'll have visions 0.6.4 which is incompatible.
[0mSuccessfully installed cloudml-hypertune-0.1.0.dev6 fire-0.3.1 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
Removing intermediate container 0c6e8a063d5e
---> 9b65c1664129
Step 3/5 : WORKDIR /app
---> Running in 49d46f4baa06
Removing intermediate container 49d46f4baa06
---> 6da3c9b06447
Step 4/5 : COPY train.py .
---> 655edf5e066b
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 6626328b42ce
Removing intermediate container 6626328b42ce
---> 9dc8e4f78279
Successfully built 9dc8e4f78279
Successfully tagged gcr.io/qwiklabs-gcp-02-e6e653a986e9/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-02-e6e653a986e9/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-02-e6e653a986e9/trainer_image]
a656de45e173: Preparing
be71a31b0060: Preparing
dd6420a5e570: Preparing
58c37b024800: Preparing
093955a9f693: Preparing
292c93aa8921: Preparing
25e90c4f31bb: Preparing
5ed5b5583a70: Preparing
fed2ce1b9bf5: Preparing
a2a7397c9263: Preparing
135d5d53f509: Preparing
28952c0fc305: Preparing
1fff2aeddb5e: Preparing
193419df8fce: Preparing
9d1088ee89e7: Preparing
98868f5e88f9: Preparing
efa6a40d1ffb: Preparing
7a694df0ad6c: Preparing
3fd9df553184: Preparing
805802706667: Preparing
292c93aa8921: Waiting
25e90c4f31bb: Waiting
5ed5b5583a70: Waiting
fed2ce1b9bf5: Waiting
a2a7397c9263: Waiting
135d5d53f509: Waiting
28952c0fc305: Waiting
1fff2aeddb5e: Waiting
193419df8fce: Waiting
9d1088ee89e7: Waiting
98868f5e88f9: Waiting
efa6a40d1ffb: Waiting
7a694df0ad6c: Waiting
3fd9df553184: Waiting
805802706667: Waiting
093955a9f693: Layer already exists
58c37b024800: Layer already exists
292c93aa8921: Layer already exists
25e90c4f31bb: Layer already exists
5ed5b5583a70: Layer already exists
fed2ce1b9bf5: Layer already exists
a2a7397c9263: Layer already exists
135d5d53f509: Layer already exists
28952c0fc305: Layer already exists
1fff2aeddb5e: Layer already exists
a656de45e173: Pushed
be71a31b0060: Pushed
193419df8fce: Layer already exists
98868f5e88f9: Layer already exists
9d1088ee89e7: Layer already exists
7a694df0ad6c: Layer already exists
efa6a40d1ffb: Layer already exists
3fd9df553184: Layer already exists
805802706667: Layer already exists
dd6420a5e570: Pushed
latest: digest: sha256:9070def65c4db62b3588a5e6f289da0ff2dd4b119a96354a1c112a55d8f7feb0 size: 4499
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
b9183832-f73f-4c7b-ae2c-406055218124 2020-12-24T05:43:04+00:00 3M22S gs://qwiklabs-gcp-02-e6e653a986e9_cloudbuild/source/1608788583.211579-57b2dfc4133a49ffa04f547f30bf9fa5.tgz gcr.io/qwiklabs-gcp-02-e6e653a986e9/trainer_image (+1 more) SUCCESS
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- Alpha The file below configures AI Platform hyperparameter tuning to run up to 4 trials (up to 4 in parallel) and to choose from two discrete values of `max_iter` and a linear range between 0.00001 and 0.001 for `alpha`. Exercise: Complete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries the following parameter values:* `max_iter`: the two discrete values 200 and 500* `alpha`: a linear range of values between 0.00001 and 0.001
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
- parameterName: max_iter
type: DISCRETE
discreteValues: [
200,
500
]
- parameterName: alpha
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LINEAR_SCALE
###Output
Writing training_app/hptuning_config.yaml
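###Markdown
Before submitting the job you can sanity-check the configuration file from the notebook. A minimal sketch, assuming the `yaml` (PyYAML) package is available in this environment:
###Code
import yaml

# Parse the tuning config and echo the search space, so typos in
# parameter names or ranges are caught before the job is submitted.
with open('{}/hptuning_config.yaml'.format(TRAINING_APP_FOLDER)) as config_file:
    hp_config = yaml.safe_load(config_file)
for param in hp_config['trainingInput']['hyperparameters']['params']:
    print(param)
###Output
_____no_output_____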
###Markdown
Start the hyperparameter tuning job. Exercise: Use the `gcloud` command to start the hyperparameter tuning job.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
Job [JOB_20201224_054748] submitted successfully.
Your job is still active. You may view the status of your job with the command
$ gcloud ai-platform jobs describe JOB_20201224_054748
or continue streaming the logs with the command
$ gcloud ai-platform jobs stream-logs JOB_20201224_054748
jobId: JOB_20201224_054748
state: QUEUED
###Markdown
Monitor the job. You can monitor the job using the GCP Console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
INFO 2020-12-24 05:47:50 +0000 service Validating job requirements...
INFO 2020-12-24 05:47:50 +0000 service Job creation request has been successfully validated.
INFO 2020-12-24 05:47:51 +0000 service Job JOB_20201224_054748 is queued.
INFO 2020-12-24 05:48:30 +0000 service 3 Waiting for job to be provisioned.
INFO 2020-12-24 05:48:30 +0000 service 1 Waiting for job to be provisioned.
INFO 2020-12-24 05:48:30 +0000 service 4 Waiting for job to be provisioned.
INFO 2020-12-24 05:48:30 +0000 service 2 Waiting for job to be provisioned.
INFO 2020-12-24 05:48:32 +0000 service 2 Waiting for training program to start.
INFO 2020-12-24 05:48:32 +0000 service 4 Waiting for training program to start.
INFO 2020-12-24 05:48:33 +0000 service 3 Waiting for training program to start.
INFO 2020-12-24 05:48:34 +0000 service 1 Waiting for training program to start.
INFO 2020-12-24 05:51:42 +0000 master-replica-0 3 Starting training: alpha=0.0007398982492925038, max_iter=500
INFO 2020-12-24 05:51:42 +0000 master-replica-0 3 Model accuracy: 0.6983529890199268
INFO 2020-12-24 05:51:57 +0000 master-replica-0 2 Starting training: alpha=0.0002884343597238859, max_iter=200
INFO 2020-12-24 05:51:57 +0000 master-replica-0 2 Model accuracy: 0.7062830418869459
INFO 2020-12-24 05:52:01 +0000 master-replica-0 4 Starting training: alpha=0.0009917247346542114, max_iter=500
INFO 2020-12-24 05:52:01 +0000 master-replica-0 4 Model accuracy: 0.6974379829198861
INFO 2020-12-24 05:52:08 +0000 master-replica-0 1 Starting training: alpha=0.000505, max_iter=500
INFO 2020-12-24 05:52:08 +0000 master-replica-0 1 Model accuracy: 0.7050630337535584
INFO 2020-12-24 05:54:49 +0000 service 3 Job completed successfully.
INFO 2020-12-24 05:55:19 +0000 service 2 Job completed successfully.
INFO 2020-12-24 05:55:21 +0000 service 4 Job completed successfully.
INFO 2020-12-24 05:55:24 +0000 service 1 Job completed successfully.
###Markdown
Retrieve HP-tuning results. After the job completes you can review the results using the GCP Console or programmatically by calling the AI Platform Training REST endpoint.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by the value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
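###Markdown
To compare all trials rather than just the best one, you can iterate over the returned list. A minimal sketch, assuming `response` was populated by the `projects.jobs.get` request above; in the ml v1 API each completed trial carries its metric under `finalMetric.objectiveValue`:
###Code
# Trials are returned sorted by the optimization metric, best first.
for trial in response['trainingOutput']['trials']:
    print('Trial {}: accuracy={:.4f}, hyperparameters={}'.format(
        trial['trialId'],
        trial['finalMetric']['objectiveValue'],
        trial['hyperparameters']))
###Output
_____no_output_____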
###Markdown
Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
INFO 2020-12-24 06:07:41 +0000 service Validating job requirements...
INFO 2020-12-24 06:07:41 +0000 service Job creation request has been successfully validated.
INFO 2020-12-24 06:07:42 +0000 service Job JOB_20201224_060739 is queued.
INFO 2020-12-24 06:07:42 +0000 service Waiting for job to be provisioned.
INFO 2020-12-24 06:07:45 +0000 service Waiting for training program to start.
INFO 2020-12-24 06:11:23 +0000 master-replica-0 Copying file://model.pkl [Content-Type=application/octet-stream]...
INFO 2020-12-24 06:11:23 +0000 master-replica-0 / [0 files][ 0.0 B/ 6.2 KiB]
INFO 2020-12-24 06:11:23 +0000 master-replica-0 / [1 files][ 6.2 KiB/ 6.2 KiB]
INFO 2020-12-24 06:11:23 +0000 master-replica-0 Operation completed over 1 objects/6.2 KiB.
INFO 2020-12-24 06:11:23 +0000 master-replica-0 Starting training: alpha=0.0002884343597238859, max_iter=200
INFO 2020-12-24 06:11:23 +0000 master-replica-0 Saved model in: gs://artifacts.qwiklabs-gcp-02-e6e653a986e9.appspot.com/jobs/JOB_20201224_060739/model.pkl
INFO 2020-12-24 06:14:02 +0000 service Job completed successfully.
###Markdown
Examine the training output The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS.
###Code
!gsutil ls $JOB_DIR
###Output
gs://artifacts.qwiklabs-gcp-02-e6e653a986e9.appspot.com/jobs/JOB_20201224_060739/model.pkl
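###Markdown
As an optional check, you can pull the saved model back into the notebook before deploying it. A minimal sketch, assuming the `gcsfs` package is available so the pickle can be read straight from Cloud Storage. Note that the retrained model saw the validation rows during training, so this score is only a smoke test, not an unbiased estimate:
###Code
import pickle

import gcsfs

# Load the pickled pipeline from the job directory and re-score it.
fs = gcsfs.GCSFileSystem(project=PROJECT_ID)
with fs.open('{}/model.pkl'.format(JOB_DIR), 'rb') as model_file:
    trained_pipeline = pickle.load(model_file)
print(trained_pipeline.score(X_validation, y_validation))
###Output
_____no_output_____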
###Markdown
Deploy the model to AI Platform Prediction Create a model resource Exercise: Complete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`:
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
###Output
Using endpoint [https://ml.googleapis.com/]
Created ml engine model [projects/qwiklabs-gcp-02-e6e653a986e9/models/forest_cover_classifier].
###Markdown
Create a model version Exercise: Complete the `gcloud` command below to create a version of the model:
###Code
model_version = 'v01'
!gcloud ai-platform versions create {model_version} \
--model={model_name} \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7
###Output
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......done.
###Markdown
Serve predictions Prepare the input file with JSON formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
[2841.0, 45.0, 0.0, 644.0, 282.0, 1376.0, 218.0, 237.0, 156.0, 1003.0, "Commanche", "C4758"]
[2494.0, 180.0, 0.0, 0.0, 0.0, 819.0, 219.0, 238.0, 157.0, 5531.0, "Rawah", "C6101"]
[3153.0, 90.0, 0.0, 335.0, 11.0, 5842.0, 219.0, 237.0, 155.0, 930.0, "Rawah", "C7101"]
[3021.0, 90.0, 0.0, 42.0, 1.0, 4389.0, 219.0, 237.0, 155.0, 902.0, "Rawah", "C7745"]
[2916.0, 0.0, 0.0, 0.0, 0.0, 4562.0, 218.0, 238.0, 156.0, 5442.0, "Rawah", "C7745"]
###Markdown
Invoke the model Exercise: Using the `gcloud` command, send the data in `$input_file` to your model deployed as a REST API:
###Code
!gcloud ai-platform predict \
--model $model_name \
--version $model_version \
--json-instances $input_file
###Output
Using endpoint [https://ml.googleapis.com/]
[1, 1, 0, 1, 1]
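###Markdown
The same endpoint can also be called programmatically. A minimal sketch using the `googleapiclient` discovery client already used earlier in this notebook; `projects.predict` is part of the AI Platform (ml, v1) REST API:
###Code
# Read the serving instances back in and call the online prediction API.
with open(input_file) as instances_file:
    instances = [json.loads(line) for line in instances_file]

ml_service = discovery.build('ml', 'v1')
version_name = 'projects/{}/models/{}/versions/{}'.format(
    PROJECT_ID, model_name, model_version)
prediction_request = ml_service.projects().predict(
    name=version_name, body={'instances': instances})
print(prediction_request.execute()['predictions'])
###Output
_____no_output_____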
###Markdown
Using custom containers with AI Platform Training**Learning Objectives:**1. Learn how to create training and validation splits with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on AI Platform1. Learn how to use the hyperparameter tuning engine on Google Cloud to find the best hyperparameters1. Learn how to deploy a trained machine learning model to Google Cloud as a REST API and query itIn this lab, you develop a multi-class classification model, package the model as a Docker image, and run it on **AI Platform Training** as a training application. The training application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. Scikit-learn is one of the most useful libraries for machine learning in Python. The training code uses `scikit-learn` for data pre-processing and modeling. The code is instrumented using the `hypertune` package so it can be used with an **AI Platform** hyperparameter tuning job to search for the best combination of hyperparameter values by optimizing the metrics you specify.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Run the command in the cell below to install the gcsfs package.
###Code
%pip install gcsfs==0.8
###Output
Requirement already satisfied: gcsfs==0.8 in /opt/conda/lib/python3.7/site-packages (0.8.0)
Requirement already satisfied: ujson in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (4.0.2)
Requirement already satisfied: requests in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (2.25.1)
Requirement already satisfied: google-auth-oauthlib in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (0.4.4)
Requirement already satisfied: aiohttp in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (3.7.4)
Requirement already satisfied: decorator in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (5.0.9)
Requirement already satisfied: fsspec>=0.8.0 in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (2021.5.0)
Requirement already satisfied: google-auth>=1.2 in /opt/conda/lib/python3.7/site-packages (from gcsfs==0.8) (1.30.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.2->gcsfs==0.8) (0.2.7)
Requirement already satisfied: six>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.2->gcsfs==0.8) (1.16.0)
Requirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.2->gcsfs==0.8) (4.7.2)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.2->gcsfs==0.8) (4.2.2)
Requirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.2->gcsfs==0.8) (49.6.0.post20210108)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.2->gcsfs==0.8) (0.4.8)
Requirement already satisfied: multidict<7.0,>=4.5 in /opt/conda/lib/python3.7/site-packages (from aiohttp->gcsfs==0.8) (5.1.0)
Requirement already satisfied: async-timeout<4.0,>=3.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->gcsfs==0.8) (3.0.1)
Requirement already satisfied: attrs>=17.3.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->gcsfs==0.8) (21.2.0)
Requirement already satisfied: typing-extensions>=3.6.5 in /opt/conda/lib/python3.7/site-packages (from aiohttp->gcsfs==0.8) (3.7.4.3)
Requirement already satisfied: chardet<5.0,>=2.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->gcsfs==0.8) (4.0.0)
Requirement already satisfied: yarl<2.0,>=1.0 in /opt/conda/lib/python3.7/site-packages (from aiohttp->gcsfs==0.8) (1.6.3)
Requirement already satisfied: idna>=2.0 in /opt/conda/lib/python3.7/site-packages (from yarl<2.0,>=1.0->aiohttp->gcsfs==0.8) (2.10)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /opt/conda/lib/python3.7/site-packages (from google-auth-oauthlib->gcsfs==0.8) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib->gcsfs==0.8) (3.0.1)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->gcsfs==0.8) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests->gcsfs==0.8) (2020.12.5)
Note: you may need to restart the kernel to use updated packages.
###Markdown
Prepare the lab dataset Set environment variables so that we can use them throughout the entire lab. The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
###Code
PROJECT_ID=!(gcloud config get-value core/project)
PROJECT_ID=PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
###Output
_____no_output_____
###Markdown
Next, create the BigQuery dataset and upload the Covertype csv data into a table.
###Code
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
Waiting on bqjob_r5d9baecf0c5847c3_00000179e518188f_1 ... (2s) Current status: DONE
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default` prefix. Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID.
###Code
!gsutil ls
###Output
gs://artifacts.qwiklabs-gcp-04-568443837277.appspot.com/
gs://qwiklabs-gcp-04-568443837277/
gs://qwiklabs-gcp-04-568443837277-kubeflowpipelines-default/
gs://qwiklabs-gcp-04-568443837277_cloudbuild/
###Markdown
HINT: For ARTIFACT_STORE, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'.
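If you prefer to set the value programmatically, the sketch below picks the bucket by its suffix; it assumes exactly one bucket in the project ends with `kubeflowpipelines-default`:
###Code
# Select the Kubeflow Pipelines default bucket from the gsutil listing.
buckets = !gsutil ls
print([b.rstrip('/') for b in buckets
       if b.rstrip('/').endswith('kubeflowpipelines-default')][0])
###Output
_____no_output_____
###Markdown
Alternatively, paste the value by hand in the cell below.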
###Code
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-04-568443837277-kubeflowpipelines-default'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset Run the query statement below to scan the covertype_dataset.covertype table in BigQuery and return the computed result rows.
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 1039.61query/s]
Downloading: 100%|██████████| 100000/100000 [00:01<00:00, 73595.75rows/s]
###Markdown
Create training and validation splits Use BigQuery to sample training and validation splits and save them to Cloud Storage. Create a training split Run the query below in order to have repeatable sampling of the data in BigQuery. Note that `FARM_FINGERPRINT()` is used on the field that you are going to use to split your data. The query creates a training split that takes 40% of the data (hash buckets 1 through 4 of 10) using the `bq` command and exports this split into the BigQuery table `covertype_dataset.training`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
###Output
Waiting on bqjob_r430a59a16840b1b_00000179e518af96_1 ... (1s) Current status: DONE
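###Markdown
To verify the split, count the rows in the new table; with hash buckets 1 through 4 of 10 you should see roughly 40% of the 100,000 rows. This uses the same `%%bigquery` magic as the exploration cell above:
###Code
%%bigquery
SELECT COUNT(*) AS training_rows
FROM `covertype_dataset.training`
###Output
_____no_output_____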
###Markdown
Use the `bq` extract command to export the BigQuery training table to GCS at `$TRAINING_FILE_PATH`.
###Code
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r686fb7d720dd414c_00000179e518c79d_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise: In the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`. In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(9836, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64` To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6939812932086213
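###Markdown
A single accuracy number can hide per-class behaviour in a multi-class problem. As an optional check, the sketch below prints a per-class breakdown using the fitted `pipeline` and the validation split from above:
###Code
from sklearn.metrics import classification_report

# Per-class precision and recall give a fuller picture than overall
# accuracy for this multi-class cover-type problem.
y_pred = pipeline.predict(X_validation)
print(classification_report(y_validation, y_pred))
###Output
_____no_output_____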
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to the AI Platform hyperparameter tuning service. Exercise: Complete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Overwriting training_app/train.py
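###Markdown
Before packaging the script you can optionally smoke-test it from the notebook. A minimal sketch, assuming `gcsfs` is installed (so `pandas` can read the `gs://` CSV paths) and using the `--hptune` flag so the run only trains and reports accuracy without uploading a model:
###Code
# Local smoke test: one train/evaluate cycle; --hptune skips the model
# upload, so nothing is written to the job directory.
!python {TRAINING_APP_FOLDER}/train.py \
  --job_dir=$JOB_DIR_ROOT/smoke_test \
  --training_dataset_path=$TRAINING_FILE_PATH \
  --validation_dataset_path=$VALIDATION_FILE_PATH \
  --alpha=0.001 \
  --max_iter=200 \
  --hptune
###Output
_____no_output_____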
###Markdown
Package the script into a Docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of the AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise: Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Overwriting training_app/Dockerfile
###Markdown
Build the Docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 3 file(s) totalling 4.2 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-04-568443837277_cloudbuild/source/1623046503.227371-ff58163d94584f598f21488ea2514e4c.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-568443837277/locations/global/builds/135cab39-de88-4fdf-968a-b5b2539df01e].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/135cab39-de88-4fdf-968a-b5b2539df01e?project=752053140940].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "135cab39-de88-4fdf-968a-b5b2539df01e"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-04-568443837277_cloudbuild/source/1623046503.227371-ff58163d94584f598f21488ea2514e4c.tgz#1623046503617161
Copying gs://qwiklabs-gcp-04-568443837277_cloudbuild/source/1623046503.227371-ff58163d94584f598f21488ea2514e4c.tgz#1623046503617161...
/ [1 files][ 1.7 KiB/ 1.7 KiB]
Operation completed over 1 objects/1.7 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 8.192kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
4bbfd2c87b75: Pulling fs layer
d2e110be24e1: Pulling fs layer
889a7173dcfe: Pulling fs layer
2e3325ceca25: Pulling fs layer
61d2446417d0: Pulling fs layer
c920f1df7b32: Pulling fs layer
4f4fb700ef54: Pulling fs layer
8773e572884a: Pulling fs layer
0d0e4ad523cc: Pulling fs layer
273fdf15330b: Pulling fs layer
639fd51e48c1: Pulling fs layer
bb5e13e17fd7: Pulling fs layer
379956344b3f: Pulling fs layer
79a0842bd8ff: Pulling fs layer
e124988c5196: Pulling fs layer
066a3d03cb12: Pulling fs layer
cd55345c1107: Pulling fs layer
258dc3f54395: Pulling fs layer
c4cde8551ee4: Pulling fs layer
639fd51e48c1: Waiting
bb5e13e17fd7: Waiting
379956344b3f: Waiting
79a0842bd8ff: Waiting
e124988c5196: Waiting
066a3d03cb12: Waiting
cd55345c1107: Waiting
258dc3f54395: Waiting
c4cde8551ee4: Waiting
2e3325ceca25: Waiting
61d2446417d0: Waiting
c920f1df7b32: Waiting
4f4fb700ef54: Waiting
8773e572884a: Waiting
0d0e4ad523cc: Waiting
273fdf15330b: Waiting
889a7173dcfe: Verifying Checksum
889a7173dcfe: Download complete
d2e110be24e1: Download complete
2e3325ceca25: Verifying Checksum
2e3325ceca25: Download complete
4bbfd2c87b75: Verifying Checksum
4bbfd2c87b75: Download complete
4f4fb700ef54: Download complete
8773e572884a: Verifying Checksum
8773e572884a: Download complete
c920f1df7b32: Verifying Checksum
c920f1df7b32: Download complete
273fdf15330b: Verifying Checksum
273fdf15330b: Download complete
639fd51e48c1: Verifying Checksum
639fd51e48c1: Download complete
bb5e13e17fd7: Verifying Checksum
bb5e13e17fd7: Download complete
379956344b3f: Verifying Checksum
379956344b3f: Download complete
79a0842bd8ff: Verifying Checksum
79a0842bd8ff: Download complete
e124988c5196: Verifying Checksum
e124988c5196: Download complete
066a3d03cb12: Verifying Checksum
066a3d03cb12: Download complete
cd55345c1107: Verifying Checksum
cd55345c1107: Download complete
0d0e4ad523cc: Verifying Checksum
0d0e4ad523cc: Download complete
c4cde8551ee4: Verifying Checksum
c4cde8551ee4: Download complete
61d2446417d0: Verifying Checksum
61d2446417d0: Download complete
4bbfd2c87b75: Pull complete
d2e110be24e1: Pull complete
889a7173dcfe: Pull complete
2e3325ceca25: Pull complete
258dc3f54395: Verifying Checksum
258dc3f54395: Download complete
61d2446417d0: Pull complete
c920f1df7b32: Pull complete
4f4fb700ef54: Pull complete
8773e572884a: Pull complete
0d0e4ad523cc: Pull complete
273fdf15330b: Pull complete
639fd51e48c1: Pull complete
bb5e13e17fd7: Pull complete
379956344b3f: Pull complete
79a0842bd8ff: Pull complete
e124988c5196: Pull complete
066a3d03cb12: Pull complete
cd55345c1107: Pull complete
258dc3f54395: Pull complete
c4cde8551ee4: Pull complete
Digest: sha256:9a803e129792d31945fb83f68618c2ffdf079f05ade226b7192566b7edeb2200
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> cbd4a0741b11
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 8aae9845049f
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.6.3)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.1)
Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil>=2.5.0->pandas==0.24.2) (1.16.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115928 sha256=2c8685ff2e689f47339cafdc26501bd239df0ee1ef544ffe7bf6192043536280
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3988 sha256=f8c96711bf49f10473159fcf93c03a6177e88d3de421e80e13d05d11f3d802be
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4829 sha256=7936348242a85e422cf14769208a87c16646654bf6c63650032eaee0c9cf92ec
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, scikit-learn, pandas, fire, cloudml-hypertune
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.24.2
Uninstalling scikit-learn-0.24.2:
Successfully uninstalled scikit-learn-0.24.2
Attempting uninstall: pandas
Found existing installation: pandas 1.2.4
Uninstalling pandas-1.2.4:
Successfully uninstalled pandas-1.2.4
Successfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.1 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
phik 0.11.2 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
[0m[91mWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
[0mRemoving intermediate container 8aae9845049f
---> 2f252e008ff0
Step 3/5 : WORKDIR /app
---> Running in 972f30ed6bc5
Removing intermediate container 972f30ed6bc5
---> e364d38e6184
Step 4/5 : COPY train.py .
---> f0c55016986c
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in bc14a73991af
Removing intermediate container bc14a73991af
---> b28d8c2ade69
Successfully built b28d8c2ade69
Successfully tagged gcr.io/qwiklabs-gcp-04-568443837277/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-04-568443837277/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-04-568443837277/trainer_image]
f1cbbbaf9a61: Preparing
97f076ebcc95: Preparing
1345d27c63fd: Preparing
d635ca93e445: Preparing
9c6af3531768: Preparing
db84285b3362: Preparing
786c0730cb59: Preparing
a27f18de1f93: Preparing
fca7d1dfb5d0: Preparing
bd6f4ad6f932: Preparing
881c56a94be3: Preparing
00fc201fe5bc: Preparing
0fda670d547e: Preparing
ea2c53df77bf: Preparing
2cb10083321f: Preparing
5f70bf18a086: Preparing
3b06b75c6ba8: Preparing
06ee7b884610: Preparing
186cb363f69f: Preparing
5f08512fd434: Preparing
c7bb31fc0e08: Preparing
50858308da3d: Preparing
db84285b3362: Waiting
786c0730cb59: Waiting
a27f18de1f93: Waiting
fca7d1dfb5d0: Waiting
bd6f4ad6f932: Waiting
881c56a94be3: Waiting
00fc201fe5bc: Waiting
0fda670d547e: Waiting
ea2c53df77bf: Waiting
2cb10083321f: Waiting
5f70bf18a086: Waiting
3b06b75c6ba8: Waiting
06ee7b884610: Waiting
186cb363f69f: Waiting
5f08512fd434: Waiting
c7bb31fc0e08: Waiting
50858308da3d: Waiting
9c6af3531768: Layer already exists
d635ca93e445: Layer already exists
786c0730cb59: Layer already exists
db84285b3362: Layer already exists
a27f18de1f93: Layer already exists
fca7d1dfb5d0: Layer already exists
bd6f4ad6f932: Layer already exists
881c56a94be3: Layer already exists
00fc201fe5bc: Layer already exists
0fda670d547e: Layer already exists
2cb10083321f: Layer already exists
ea2c53df77bf: Layer already exists
5f70bf18a086: Layer already exists
3b06b75c6ba8: Layer already exists
06ee7b884610: Layer already exists
186cb363f69f: Layer already exists
5f08512fd434: Layer already exists
c7bb31fc0e08: Layer already exists
50858308da3d: Layer already exists
97f076ebcc95: Pushed
f1cbbbaf9a61: Pushed
1345d27c63fd: Pushed
latest: digest: sha256:b5a34c9b79085a8b2bc0a4b72b228da75d9129e2ee548c77094be25f1024d028 size: 4914
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
135cab39-de88-4fdf-968a-b5b2539df01e 2021-06-07T06:15:03+00:00 2M25S gs://qwiklabs-gcp-04-568443837277_cloudbuild/source/1623046503.227371-ff58163d94584f598f21488ea2514e4c.tgz gcr.io/qwiklabs-gcp-04-568443837277/trainer_image (+1 more) SUCCESS
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- Alpha The file below configures AI Platform hyperparameter tuning to run up to 4 trials (up to 4 in parallel) and to choose from two discrete values of `max_iter` and a linear range between 0.00001 and 0.001 for `alpha`. Exercise: Complete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries the following parameter values:* `max_iter`: the two discrete values 200 and 500* `alpha`: a linear range of values between 0.00001 and 0.001. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
- parameterName: max_iter
type: DISCRETE
discreteValues: [
200,
500
]
- parameterName: alpha
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LINEAR_SCALE
###Output
Overwriting training_app/hptuning_config.yaml
###Markdown
Start the hyperparameter tuning job. Exercise: Use the `gcloud` command to start the hyperparameter tuning job. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
Job [JOB_20210607_061732] submitted successfully.
Your job is still active. You may view the status of your job with the command
$ gcloud ai-platform jobs describe JOB_20210607_061732
or continue streaming the logs with the command
$ gcloud ai-platform jobs stream-logs JOB_20210607_061732
jobId: JOB_20210607_061732
state: QUEUED
###Markdown
Monitor the job. You can monitor the job using the Google Cloud Console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
INFO 2021-06-07 06:37:29 +0000 service Validating job requirements...
INFO 2021-06-07 06:37:30 +0000 service Job creation request has been successfully validated.
INFO 2021-06-07 06:37:30 +0000 service Waiting for job to be provisioned.
INFO 2021-06-07 06:37:30 +0000 service Job JOB_20210607_063728 is queued.
INFO 2021-06-07 06:37:31 +0000 service Waiting for training program to start.
INFO 2021-06-07 06:38:00 +0000 master-replica-0
INFO 2021-06-07 06:38:00 +0000 master-replica-0
INFO 2021-06-07 06:38:00 +0000 master-replica-0
INFO 2021-06-07 06:38:00 +0000 master-replica-0
INFO 2021-06-07 06:38:00 +0000 master-replica-0
INFO 2021-06-07 06:40:27 +0000 master-replica-0 Copying file://model.pkl [Content-Type=application/octet-stream]...
INFO 2021-06-07 06:40:27 +0000 master-replica-0 / [0 files][ 0.0 B/ 6.2 KiB]
INFO 2021-06-07 06:40:27 +0000 master-replica-0 / [1 files][ 6.2 KiB/ 6.2 KiB]
INFO 2021-06-07 06:40:27 +0000 master-replica-0 Operation completed over 1 objects/6.2 KiB.
INFO 2021-06-07 06:40:27 +0000 master-replica-0 Starting training: alpha=0.00028795960558683194, max_iter=200
INFO 2021-06-07 06:40:27 +0000 master-replica-0 Saved model in: gs://qwiklabs-gcp-04-568443837277-kubeflowpipelines-default/jobs/JOB_20210607_063728/model.pkl
INFO 2021-06-07 06:43:17 +0000 service Job completed successfully.
###Markdown
**NOTE: The above AI Platform job stream logs will take approximately 5~10 minutes to display.** Retrieve HP-tuning results. After the job completes you can review the results using the Google Cloud Console or programmatically by calling the AI Platform Training REST endpoint.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by the value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI Platform job stream logs will take approximately 5~10 minutes to display.** Examine the training output The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on Cloud Storage.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to AI Platform Prediction Create a model resource Exercise: Complete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
###Output
_____no_output_____
###Markdown
Create a model version Exercise: Complete the `gcloud` command below to create a version of the model. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_version = 'v01'
!gcloud ai-platform versions create {model_version} \
--model={model_name} \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7\
--region global
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model Exercise: Using the `gcloud` command, send the data in `$input_file` to your model deployed as a REST API. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
!gcloud ai-platform predict \
--model $model_name \
--version $model_version \
--json-instances $input_file\
--region global
###Output
_____no_output_____
###Markdown
Using custom containers with AI Platform Training**Learning Objectives:**1. Learn how to create training and validation splits with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on AI Platform1. Learn how to use the hyperparameter tuning engine on Google Cloud to find the best hyperparameters1. Learn how to deploy a trained machine learning model to Google Cloud as a REST API and query itIn this lab, you develop a multi-class classification model, package the model as a Docker image, and run it on **AI Platform Training** as a training application. The training application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. Scikit-learn is one of the most useful libraries for machine learning in Python. The training code uses `scikit-learn` for data pre-processing and modeling. The code is instrumented using the `hypertune` package so it can be used with an **AI Platform** hyperparameter tuning job to search for the best combination of hyperparameter values by optimizing the metrics you specify.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Run the command in the cell below to install the gcsfs package.
###Code
%pip install gcsfs==0.7.2
###Output
_____no_output_____
###Markdown
Prepare the lab dataset Set environment variables so that we can use them throughout the entire lab. The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
###Code
PROJECT_ID=!(gcloud config get-value core/project)
PROJECT_ID=PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
###Output
_____no_output_____
###Markdown
Next, create the BigQuery dataset and upload the Covertype csv data into a table.
###Code
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default` prefix. Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
HINT: For ARTIFACT_STORE, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'.
###Code
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset Run the query statement below to scan the `covertype_dataset.covertype` table in BigQuery and return the computed result rows.
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to Cloud Storage. Create a training splitRun the query below in order to have repeatable sampling of the data in BigQuery. Note that `FARM_FINGERPRINT()` is used on the field on which you are going to split your data. The query creates a training split that takes roughly 40% of the data (hash buckets 1 through 4) using the `bq` command and exports this split into the BigQuery table `covertype_dataset.training`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
###Output
_____no_output_____
###Markdown
Use the `bq` extract command to export the BigQuery training table to GCS at `$TRAINING_FILE_PATH`.
###Code
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split ExerciseIn the first cell below, create a validation split that takes 10% of the data using the `bq` command andexport this split into the BigQuery table `covertype_dataset.validation`.In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`.NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
# TO DO: Your code goes here to create the BQ table validation split.
# TO DO: Your code goes here to export the validation table to the Cloud Storage bucket.
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
_____no_output_____
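###Markdown
For reference, a possible solution sketch follows. It mirrors the training-split cell above; the choice of hash bucket 8 (roughly 10% of the data) is an assumption, not the graded solution.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
###Output
_____no_output_____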
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application.Since a training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to the AI Platform hyperparameter tuning service. ExerciseComplete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
# TO DO: Your code goes here to score the model with the validation data and capture the result
# with the hypertune library
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
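###Markdown
A hedged sketch of the missing `if hptune:` branch: it scores the pipeline on the held-out validation split and reports the result through `cloudml-hypertune`. The `accuracy` tag must match the `hyperparameterMetricTag` set in the tuning config below; scoring on `df_validation` is an assumption consistent with the rest of the script.
###Code
import hypertune  # provided by the cloudml-hypertune package (installed in the training image)
# Score the model on the validation split.
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Report the metric to the AI Platform hyperparameter tuning service.
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='accuracy',  # must match hyperparameterMetricTag in the config
    metric_value=accuracy)
###Output
_____no_output_____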
###Markdown
Package the script into a Docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of the AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TO DO: Your code goes here
###Output
_____no_output_____
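###Markdown
One way to complete the Dockerfile (a sketch, not the graded solution): the `%%writefile -a` magic appends these lines to the file written above, assuming `train.py` sits next to the Dockerfile in the training_app folder.
###Code
%%writefile -a {TRAINING_APP_FOLDER}/Dockerfile
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
_____no_output_____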
###Markdown
Build the Docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe file below configures AI Platform hypertuning to run up to 4 trials on up to 4 nodes and to choose from two discrete values of `max_iter` and the linear range between 0.00001 and 0.001 for `alpha`. ExerciseComplete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries the following parameter values* `max_iter` the two values 200 and 300* `alpha` a linear range of values between 0.00001 and 0.001NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
# TO DO: Your code goes here
###Output
_____no_output_____
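###Markdown
A possible `params` section for the exercise above, shown as a YAML fragment for reference (field names follow the AI Platform `HyperparameterSpec` schema; indent it under `hyperparameters:` in the file written above).
###Code
params:
- parameterName: max_iter
  type: DISCRETE
  discreteValues: [200, 300]
- parameterName: alpha
  type: DOUBLE
  minValue: 0.00001
  maxValue: 0.001
  scaleType: UNIT_LINEAR_SCALE
###Output
_____no_output_____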
###Markdown
Start the hyperparameter tuning job. ExerciseUse the `gcloud` command to start the hyperparameter tuning job.NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=# TO DO: ADD YOUR REGION \
--job-dir=# TO DO: ADD YOUR JOB-DIR \
--master-image-uri=# TO DO: ADD YOUR IMAGE-URI \
--scale-tier=# TO DO: ADD YOUR SCALE-TIER \
--config # TO DO: ADD YOUR CONFIG PATH \
-- \
# TO DO: Complete the command
###Output
_____no_output_____
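###Markdown
A hedged sketch of the completed command, reusing the variables defined above. The trailing arguments mirror `train_evaluate`'s signature in `train.py`, and `--hptune` is assumed to be the fire-style boolean counterpart of the `--nohptune` flag used in the retraining cell later in this notebook.
###Code
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
_____no_output_____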
###Markdown
Monitor the job.You can monitor the job using Google Cloud console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI Platform job stream logs will take approximately 5~10 minutes to display.** Retrieve HP-tuning results. After the job completes, you can review the results using the Google Cloud Console or programmatically by calling the AI Platform Training REST endpoint.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by the value of the optimization metric. The best run is the first item in the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI Platform job stream logs will take approximately 5~10 minutes to display.** Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on Cloud Storage.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to AI Platform Prediction Create a model resource ExerciseComplete the `gcloud` command below to create a model with `model_name` in `$REGION`, tagged with `labels`:NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud # TO DO: Your code goes here
###Output
_____no_output_____
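###Markdown
One plausible completion, assuming the `model_name`, `labels`, and `REGION` values defined above (a sketch, not the graded solution):
###Code
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
###Output
_____no_output_____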
###Markdown
Create a model version Exercise Complete the `gcloud` command below to create a version of the model:NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_version = 'v01'
!gcloud # TO DO: Complete the command \
--model=# TO DO: ADD YOUR MODEL NAME \
--origin=# TO DO: ADD YOUR PATH \
--runtime-version=# TO DO: ADD YOUR RUNTIME \
--framework=# TO DO: ADD YOUR FRAMEWORK \
--python-version=# TO DO: ADD YOUR PYTHON VERSION \
--region # TO DO: ADD YOUR REGION
###Output
_____no_output_____
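###Markdown
A sketch of the completed command. The runtime version (1.15) follows the note earlier in the notebook; the Python version (3.7), `--origin=$JOB_DIR`, and `--region=global` are assumptions to adjust for your environment.
###Code
!gcloud ai-platform versions create $model_version \
--model=$model_name \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7 \
--region=global
###Output
_____no_output_____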
###Markdown
Serve predictions Prepare the input file with JSON-formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model ExerciseUsing the `gcloud` command, send the data in `$input_file` to your model deployed as a REST API:NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
!gcloud # TO DO: Complete the command
###Output
_____no_output_____
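###Markdown
A possible completion using the online prediction command; `--json-instances` points at the file written above.
###Code
!gcloud ai-platform predict \
--model=$model_name \
--version=$model_version \
--json-instances=$input_file
###Output
_____no_output_____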
###Markdown
Using custom containers with AI Platform Training**Learning Objectives:**1. Learn how to create a train and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train in on AI Platform1. Learn how to use the hyperparameter tunning engine on Google Cloud to find the best hyperparameters1. Learn how to deploy a trained machine learning model Google Cloud as a rest API and query itIn this lab, you develop a multi-class classification model, package the model as a docker image, and run on **AI Platform Training** as a training application. The training application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on **Covertype Data Set** from UCI Machine Learning Repository.Scikit-learn is one of the most useful libraries for machineย learningย in Python. The training code uses `scikit-learn` for data pre-processing and modeling. The code is instrumented using the `hypertune` package so it can be used with **AI Platform** hyperparameter tuning job in searching for the best combination of hyperparameter values by optimizing the metrics you specified.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Run the command in the cell below to install the gcsfs package.
###Code
%pip install gcsfs==0.8
###Output
_____no_output_____
###Markdown
Prepare lab dataset Set environment variables so that we can use them throughout the entire lab. The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
###Code
PROJECT_ID=!(gcloud config get-value core/project)
PROJECT_ID=PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
###Output
_____no_output_____
###Markdown
Next, create the BigQuery dataset and upload the Covertype csv data into a table.
###Code
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction - `ARTIFACT_STORE` - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default` prefix. Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
HINT: For ARTIFACT_STORE, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'.
###Code
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset Run the query statement below to scan the `covertype_dataset.covertype` table in BigQuery and return the computed result rows.
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splits Use BigQuery to sample training and validation splits and save them to Cloud Storage. Create a training split Run the query below in order to have repeatable sampling of the data in BigQuery. Note that `FARM_FINGERPRINT()` is used on the field on which you are going to split your data. It creates a training split that takes 40% of the data (fingerprint buckets 1-4 of 10) using the `bq` command and exports this split into the BigQuery table `covertype_dataset.training`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
###Output
_____no_output_____
###Markdown
Use the `bq` extract command to export the BigQuery training table to GCS at `$TRAINING_FILE_PATH`.
###Code
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split Exercise In the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`. In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
# TO DO: Your code goes here to create the BQ table validation split.
# TO DO: Your code goes here to export the validation table to the Cloud Storage bucket.
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
_____no_output_____
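###Markdown
A hedged sketch of one possible completion, mirroring the training-split cells above and the worked solution that appears near the end of this document: fingerprint bucket 5 gives the 10% validation sample, and `bq extract` then exports the resulting table to `$VALIDATION_FILE_PATH`.
###Code
# Hedged sketch, not the official lab solution:
# sample fingerprint bucket 5 (10% of the data) into covertype_dataset.validation.
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (5)'
# Export the validation table to Cloud Storage.
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
###Output
_____no_output_____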
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`. To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to the AI Platform hyperparameter tuning service. Exercise Complete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
# TO DO: Your code goes here to score the model with the validation data and capture the result
# with the hypertune library
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
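###Markdown
For reference, a hedged sketch of what the missing `hptune` branch typically looks like with the `cloudml-hypertune` package: score the pipeline on the held-out validation split and report the value under the same tag (`accuracy`) that `hptuning_config.yaml` names below. Shown as comments because the lines belong inside `train.py`, not in a notebook cell.
###Code
# Hedged sketch of the train.py hptune branch, not the official lab solution:
#
# if hptune:
#     X_validation = df_validation.drop('Cover_Type', axis=1)
#     y_validation = df_validation['Cover_Type']
#     accuracy = pipeline.score(X_validation, y_validation)
#     print('Model accuracy: {}'.format(accuracy))
#     hpt = hypertune.HyperTune()
#     hpt.report_hyperparameter_tuning_metric(
#         hyperparameter_metric_tag='accuracy',
#         metric_value=accuracy)
###Output
_____no_output_____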
###Markdown
Package the script into a docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of the AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TO DO: Your code goes here
###Output
_____no_output_____
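###Markdown
A hedged sketch of the missing Dockerfile lines, assuming the `/app` working directory named in the exercise; shown as comments so the `%%writefile` cell above stays the single source of the real Dockerfile.
###Code
# Hedged sketch, not the official lab solution; the Dockerfile would continue:
#
# WORKDIR /app
# COPY train.py .
# ENTRYPOINT ["python", "train.py"]
###Output
_____no_output_____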
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`: Max iterations and Alpha. The file below configures AI Platform hypertuning to run up to 4 trials on up to 4 nodes and to choose from two discrete values of `max_iter` and a linear range between 0.00001 and 0.001 for `alpha`. Exercise Complete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries the following parameter values: `max_iter` - the two values 200 and 300; `alpha` - a linear range of values between 0.00001 and 0.001. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
# TO DO: Your code goes here
###Output
_____no_output_____
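###Markdown
A hedged sketch of the missing `params` section, matching the values named in the exercise: `DISCRETE` with `discreteValues` for the fixed `max_iter` choices, and `DOUBLE` with `UNIT_LINEAR_SCALE` for the continuous `alpha` range. Shown as comments because the lines belong inside `hptuning_config.yaml`.
###Code
# Hedged sketch, not the official lab solution; the params section would read:
#
#    params:
#    - parameterName: max_iter
#      type: DISCRETE
#      discreteValues: [200, 300]
#    - parameterName: alpha
#      type: DOUBLE
#      minValue: 0.00001
#      maxValue: 0.001
#      scaleType: UNIT_LINEAR_SCALE
###Output
_____no_output_____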
###Markdown
Start the hyperparameter tuning job. Exercise Use the `gcloud` command to start the hyperparameter tuning job. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=# TO DO: ADD YOUR REGION \
--job-dir=# TO DO: ADD YOUR JOB-DIR \
--master-image-uri=# TO DO: ADD YOUR IMAGE-URI \
--scale-tier=# TO DO: ADD YOUR SCALE-TIER \
--config # TO DO: ADD YOUR CONFIG PATH \
-- \
# TO DO: Complete the command
###Output
_____no_output_____
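###Markdown
A hedged sketch of the completed submission, assuming the variables defined in this notebook and the argument names `train.py` accepts through `fire` (compare the retraining command further below, which passes `--nohptune`); `--hptune` is the counterpart flag for tuning runs.
###Code
# Hedged sketch, not the official lab solution.
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
_____no_output_____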
###Markdown
Monitor the job. You can monitor the job using the Google Cloud console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI Platform job stream logs will take approximately 5-10 minutes to display.** Retrieve HP-tuning results. After the job completes, you can review the results using the Google Cloud Console or programmatically by calling the AI Platform Training REST endpoint.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by the value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI Platform job stream logs will take approximately 5-10 minutes to display.** Examine the training output. The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on Cloud Storage.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to AI Platform Prediction Create a model resource Exercise Complete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`: NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud # TO DO: Your code goes here
###Output
_____no_output_____
###Markdown
Create a model version Exercise Complete the `gcloud` command below to create a version of the model: NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_version = 'v01'
!gcloud # TO DO: Complete the command \
--model=# TO DO: ADD YOUR MODEL NAME \
--origin=# TO DO: ADD YOUR PATH \
--runtime-version=# TO DO: ADD YOUR RUNTIME \
--framework=# TO DO: ADD YOUR FRAMEWORK \
--python-version=# TO DO: ADD YOUR PYTHON VERSION \
--region # TO DO: ADD YOUR REGION
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model Exercise Using the `gcloud` command, send the data in `$input_file` to your model deployed as a REST API: NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
!gcloud # TO DO: Complete the command
###Output
_____no_output_____
###Markdown
Using custom containers with AI Platform Training **Learning Objectives:** 1. Learn how to create a train and a validation split with BigQuery 2. Learn how to wrap a machine learning model into a Docker container and train it on CAIP 3. Learn how to use the hyperparameter tuning engine on GCP to find the best hyperparameters 4. Learn how to deploy a trained machine learning model to GCP as a REST API and query it. In this lab, you develop, package as a docker image, and run on **AI Platform Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **AI Platform** hyperparameter tuning.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix.
###Code
!gsutil ls
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-04-83153487f5ba-kubeflowpipelines-default'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_rb8fbe1e0228dec7_00000176b2b7a2ac_1 ... (1s) Current status: DONE
###Markdown
Create a validation split ExerciseIn the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`.In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`.
###Code
# TODO: Your code to create the BQ table validation split
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (5)'
# TODO: Your code to export the validation table to GCS
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(40009, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6996925691719363
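###Markdown
Accuracy alone can hide class-level errors. As a quick follow-up (a sketch, not part of the original lab, reusing the fitted `pipeline` from above), the confusion matrix shows how the errors are distributed across the cover types:
###Code
from sklearn.metrics import confusion_matrix
# Rows correspond to true classes, columns to predicted classes
y_pred = pipeline.predict(X_validation)
print(confusion_matrix(y_validation, y_pred))
###Output
_____no_output_____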
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to AI Platform hyperparameter tuning service. ExerciseComplete the code below to capture the metric that the hyperparameter tuning engine will use to optimize the hyperparameters.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Writing training_app/train.py
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Overwriting training_app/Dockerfile
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 3.2 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-04-83153487f5ba_cloudbuild/source/1609316554.437085-bb3ead9ddfc64686be21ade91496090d.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-83153487f5ba/builds/9bd6e2cc-afdd-4631-b054-f66439a33722].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/9bd6e2cc-afdd-4631-b054-f66439a33722?project=1011248333086].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "9bd6e2cc-afdd-4631-b054-f66439a33722"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-04-83153487f5ba_cloudbuild/source/1609316554.437085-bb3ead9ddfc64686be21ade91496090d.tgz#1609316555009005
Copying gs://qwiklabs-gcp-04-83153487f5ba_cloudbuild/source/1609316554.437085-bb3ead9ddfc64686be21ade91496090d.tgz#1609316555009005...
/ [1 files][ 1.5 KiB/ 1.5 KiB]
Operation completed over 1 objects/1.5 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 6.144kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
171857c49d0f: Pulling fs layer
419640447d26: Pulling fs layer
61e52f862619: Pulling fs layer
20b22764011e: Pulling fs layer
00244e2c5db1: Pulling fs layer
07e452976526: Pulling fs layer
9889ff203efe: Pulling fs layer
05dad74dd489: Pulling fs layer
abfc11aef694: Pulling fs layer
00e45e47c0d1: Pulling fs layer
5274e7716976: Pulling fs layer
0dcd37ccffa2: Pulling fs layer
8b7d4903c042: Pulling fs layer
c65868d8f6c7: Pulling fs layer
8497304a9c74: Pulling fs layer
25b00734a98b: Pulling fs layer
0e452e6fa8ed: Pulling fs layer
20b22764011e: Waiting
00244e2c5db1: Waiting
07e452976526: Waiting
9889ff203efe: Waiting
05dad74dd489: Waiting
abfc11aef694: Waiting
00e45e47c0d1: Waiting
5274e7716976: Waiting
0dcd37ccffa2: Waiting
8b7d4903c042: Waiting
c65868d8f6c7: Waiting
8497304a9c74: Waiting
25b00734a98b: Waiting
0e452e6fa8ed: Waiting
61e52f862619: Verifying Checksum
61e52f862619: Download complete
419640447d26: Verifying Checksum
419640447d26: Download complete
171857c49d0f: Verifying Checksum
171857c49d0f: Download complete
07e452976526: Verifying Checksum
07e452976526: Download complete
00244e2c5db1: Verifying Checksum
00244e2c5db1: Download complete
05dad74dd489: Verifying Checksum
05dad74dd489: Download complete
abfc11aef694: Verifying Checksum
abfc11aef694: Download complete
9889ff203efe: Verifying Checksum
9889ff203efe: Download complete
00e45e47c0d1: Verifying Checksum
00e45e47c0d1: Download complete
5274e7716976: Verifying Checksum
5274e7716976: Download complete
0dcd37ccffa2: Verifying Checksum
0dcd37ccffa2: Download complete
8b7d4903c042: Verifying Checksum
8b7d4903c042: Download complete
c65868d8f6c7: Verifying Checksum
c65868d8f6c7: Download complete
8497304a9c74: Verifying Checksum
8497304a9c74: Download complete
20b22764011e: Verifying Checksum
20b22764011e: Download complete
0e452e6fa8ed: Verifying Checksum
0e452e6fa8ed: Download complete
171857c49d0f: Pull complete
419640447d26: Pull complete
61e52f862619: Pull complete
25b00734a98b: Verifying Checksum
25b00734a98b: Download complete
20b22764011e: Pull complete
00244e2c5db1: Pull complete
07e452976526: Pull complete
9889ff203efe: Pull complete
05dad74dd489: Pull complete
abfc11aef694: Pull complete
00e45e47c0d1: Pull complete
5274e7716976: Pull complete
0dcd37ccffa2: Pull complete
8b7d4903c042: Pull complete
c65868d8f6c7: Pull complete
8497304a9c74: Pull complete
25b00734a98b: Pull complete
0e452e6fa8ed: Pull complete
Digest: sha256:f6c7ab6b9004322178cbccf7becb15a27fd3e2240e7335f5a51f8ff1861fd733
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 2e14efcab90e
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 69a37c2123a8
Collecting fire
Downloading fire-0.3.1.tar.gz (81 kB)
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Requirement already satisfied, skipping upgrade: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.15.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied, skipping upgrade: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.18.5)
Requirement already satisfied, skipping upgrade: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.5.3)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied, skipping upgrade: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2020.1)
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=84465bf91f0ebb0044788d8be53571158e1aa751e781ee57d132940b155baec1
Stored in directory: /root/.cache/pip/wheels/95/38/e1/8b62337a8ecf5728bdc1017e828f253f7a9cf25db999861bec
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=723059c77ffd3ce9964afa0b7f1c947ce7aa32a130427d84129971d46bc6ccfc
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=1314284b00342a215094ffb9b1c9139bccd4783d9f16c51145363b1f8ff2bf76
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, fire, cloudml-hypertune, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.23.2
Uninstalling scikit-learn-0.23.2:
Successfully uninstalled scikit-learn-0.23.2
Attempting uninstall: pandas
Found existing installation: pandas 1.1.4
Uninstalling pandas-1.1.4:
Successfully uninstalled pandas-1.1.4
[91mERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
visions 0.6.4 requires pandas>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires visions[type_image_path]==0.4.4, but you'll have visions 0.6.4 which is incompatible.
[0mSuccessfully installed cloudml-hypertune-0.1.0.dev6 fire-0.3.1 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
Removing intermediate container 69a37c2123a8
---> 0af5bc3f8f2a
Step 3/5 : WORKDIR /app
---> Running in 8f6cfcd303ed
Removing intermediate container 8f6cfcd303ed
---> 9a8c0c31690c
Step 4/5 : COPY train.py .
---> 740635fdc076
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 5a51c4139aa9
Removing intermediate container 5a51c4139aa9
---> f7badf423ee8
Successfully built f7badf423ee8
Successfully tagged gcr.io/qwiklabs-gcp-04-83153487f5ba/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-04-83153487f5ba/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-04-83153487f5ba/trainer_image]
5d682bb8b2da: Preparing
c9af2f22963d: Preparing
7fc52cea5374: Preparing
58c37b024800: Preparing
093955a9f693: Preparing
292c93aa8921: Preparing
25e90c4f31bb: Preparing
5ed5b5583a70: Preparing
fed2ce1b9bf5: Preparing
a2a7397c9263: Preparing
135d5d53f509: Preparing
28952c0fc305: Preparing
1fff2aeddb5e: Preparing
193419df8fce: Preparing
9d1088ee89e7: Preparing
98868f5e88f9: Preparing
efa6a40d1ffb: Preparing
7a694df0ad6c: Preparing
3fd9df553184: Preparing
805802706667: Preparing
292c93aa8921: Waiting
25e90c4f31bb: Waiting
5ed5b5583a70: Waiting
fed2ce1b9bf5: Waiting
a2a7397c9263: Waiting
135d5d53f509: Waiting
28952c0fc305: Waiting
1fff2aeddb5e: Waiting
193419df8fce: Waiting
9d1088ee89e7: Waiting
98868f5e88f9: Waiting
efa6a40d1ffb: Waiting
7a694df0ad6c: Waiting
3fd9df553184: Waiting
805802706667: Waiting
58c37b024800: Layer already exists
093955a9f693: Layer already exists
292c93aa8921: Layer already exists
25e90c4f31bb: Layer already exists
5ed5b5583a70: Layer already exists
fed2ce1b9bf5: Layer already exists
a2a7397c9263: Layer already exists
135d5d53f509: Layer already exists
c9af2f22963d: Pushed
1fff2aeddb5e: Layer already exists
28952c0fc305: Layer already exists
193419df8fce: Layer already exists
5d682bb8b2da: Pushed
9d1088ee89e7: Layer already exists
98868f5e88f9: Layer already exists
7a694df0ad6c: Layer already exists
efa6a40d1ffb: Layer already exists
805802706667: Layer already exists
3fd9df553184: Layer already exists
7fc52cea5374: Pushed
latest: digest: sha256:c8b85dac412e6f44d9b16474a2a6503e20ce04ce6e191210cc2bce9980381a79 size: 4499
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
9bd6e2cc-afdd-4631-b054-f66439a33722 2020-12-30T08:22:35+00:00 3M24S gs://qwiklabs-gcp-04-83153487f5ba_cloudbuild/source/1609316554.437085-bb3ead9ddfc64686be21ade91496090d.tgz gcr.io/qwiklabs-gcp-04-83153487f5ba/trainer_image (+1 more) SUCCESS
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe file below configures AI Platform hypertuning to run up to 4 trials, all of them in parallel, and to choose from two discrete values of `max_iter` and the linear range between 0.00001 and 0.001 for `alpha`. ExerciseComplete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries* `max_iter`: the two discrete values 200 and 500* `alpha`: a linear range of values between 0.00001 and 0.001
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
- parameterName: max_iter
type: DISCRETE
discreteValues: [
200,
500
]
- parameterName: alpha
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LINEAR_SCALE
###Output
Writing training_app/hptuning_config.yaml
###Markdown
Start the hyperparameter tuning job. ExerciseUse the `gcloud` command to start the hyperparameter tuning job.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
--config $TRAINING_APP_FOLDER/hptuning_config.yaml \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--hptune
###Output
Job [JOB_20201230_083019] submitted successfully.
Your job is still active. You may view the status of your job with the command
$ gcloud ai-platform jobs describe JOB_20201230_083019
or continue streaming the logs with the command
$ gcloud ai-platform jobs stream-logs JOB_20201230_083019
jobId: JOB_20201230_083019
state: QUEUED
###Markdown
Monitor the job.You can monitor the job using GCP console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
Retrieve HP-tuning results. After the job completes you can review the results using GCP Console or programmatically by calling the AI Platform Training REST endpoint.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
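###Markdown
If you want to inspect all trials rather than only the best one (a sketch over the `response` object retrieved above; the field names follow the AI Platform Training jobs API):
###Code
for trial in response['trainingOutput']['trials']:
    # Each trial reports its id, final objective value, and hyperparameters
    print(trial['trialId'],
          trial.get('finalMetric', {}).get('objectiveValue'),
          trial['hyperparameters'])
###Output
_____no_output_____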
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
^C
Command killed by keyboard interrupt
###Markdown
Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS.
###Code
!gsutil ls $JOB_DIR
###Output
CommandException: One or more URLs matched no objects.
###Markdown
Deploy the model to AI Platform Prediction Create a model resource ExerciseComplete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`:
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud ai-platform models create $model_name \
--regions=$REGION \
--labels=$labels
###Output
Using endpoint [https://ml.googleapis.com/]
Created ml engine model [projects/qwiklabs-gcp-04-83153487f5ba/models/forest_cover_classifier].
###Markdown
Create a model version Exercise Complete the `gcloud` command below to create a version of the model:
###Code
model_version = 'v01'
!gcloud ai-platform versions create {model_version} \
--model={model_name} \
--origin=$JOB_DIR \
--runtime-version=1.15 \
--framework=scikit-learn \
--python-version=3.7
###Output
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......done.
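###Markdown
Optionally (a quick check, not part of the original lab), you can confirm that the version was created and is ready to serve:
###Code
!gcloud ai-platform versions describe $model_version \
--model=$model_name
###Output
_____no_output_____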
###Markdown
Serve predictions Prepare the input file with JSON formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model ExerciseUsing the `gcloud` command send the data in `$input_file` to your model deployed as a REST API:
###Code
# Completed command, using the model and version created above
!gcloud ai-platform predict \
--model=$model_name \
--version=$model_version \
--json-instances=$input_file
###Output
_____no_output_____
###Markdown
Using custom containers with AI Platform Training**Learning Objectives:**1. Learn how to create a training and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on AI Platform1. Learn how to use the hyperparameter tuning engine on Google Cloud to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Google Cloud as a REST API and query itIn this lab, you develop a training application, package it as a docker image, and run it on **AI Platform Training**. The application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on **Covertype Data Set** from UCI Machine Learning Repository.Scikit-learn is one of the most useful libraries for machine learning in Python. The training code uses `scikit-learn` for data pre-processing and modeling. The code is instrumented using the `hypertune` package so it can be used with an **AI Platform** hyperparameter tuning job to search for the best combination of hyperparameter values by optimizing the metrics you specified.
###Code
import json
import os
import numpy as np
import pandas as pd
import pickle
import uuid
import time
import tempfile
from googleapiclient import discovery
from googleapiclient import errors
from google.cloud import bigquery
from jinja2 import Template
from kfp.components import func_to_container_op
from typing import NamedTuple
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
###Output
_____no_output_____
###Markdown
Prepare lab datasetSet environment variables so that we can use them throughout the entire lab.The pipeline ingests data from BigQuery. The cell below uploads the Covertype dataset to BigQuery.
###Code
PROJECT_ID=!(gcloud config get-value core/project)
PROJECT_ID=PROJECT_ID[0]
DATASET_ID='covertype_dataset'
DATASET_LOCATION='US'
TABLE_ID='covertype'
DATA_SOURCE='gs://workshop-datasets/covertype/small/dataset.csv'
SCHEMA='Elevation:INTEGER,Aspect:INTEGER,Slope:INTEGER,Horizontal_Distance_To_Hydrology:INTEGER,Vertical_Distance_To_Hydrology:INTEGER,Horizontal_Distance_To_Roadways:INTEGER,Hillshade_9am:INTEGER,Hillshade_Noon:INTEGER,Hillshade_3pm:INTEGER,Horizontal_Distance_To_Fire_Points:INTEGER,Wilderness_Area:STRING,Soil_Type:STRING,Cover_Type:INTEGER'
###Output
_____no_output_____
###Markdown
Next, create the BigQuery dataset and upload the Covertype csv data into a table.
###Code
!bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
!bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the Cloud Storage bucket created during installation of AI Platform Pipelines. The bucket name starts with the `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default` prefix.Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
HINT: For ARTIFACT_STORE, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output.Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'.
###Code
REGION = 'us-central1'
ARTIFACT_STORE = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
DATA_ROOT='{}/data'.format(ARTIFACT_STORE)
JOB_DIR_ROOT='{}/jobs'.format(ARTIFACT_STORE)
TRAINING_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'training', 'dataset.csv')
VALIDATION_FILE_PATH='{}/{}/{}'.format(DATA_ROOT, 'validation', 'dataset.csv')
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset Run the query statement below to scan the `covertype_dataset.covertype` table in BigQuery and return the computed result rows.
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to Cloud Storage. Create a training splitRun the query below in order to have repeatable sampling of the data in BigQuery. Note that `FARM_FINGERPRINT()` is used on the field on which you are going to split your data. It creates a training split that takes 40% of the data using the `bq` command and exports this split into the BigQuery table `covertype_dataset.training`.
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
###Output
_____no_output_____
###Markdown
Use the `bq` extract command to export the BigQuery training table to GCS at `$TRAINING_FILE_PATH`.
###Code
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split ExerciseIn the first cell below, create a validation split that takes 10% of the data using the `bq` command and export this split into the BigQuery table `covertype_dataset.validation`.In the second cell, use the `bq` command to export that BigQuery validation table to GCS at `$VALIDATION_FILE_PATH`.NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
# TO DO: Your code goes here to create the BQ table validation split.
# TO DO: Your code goes here to export the validation table to the Cloud Storage bucket.
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
_____no_output_____
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log', tol=1e-3))
])
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on AI Platform Training.
###Code
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to AI Platform hyperparameter tuning service. ExerciseComplete the code below to capture the metric that the hyper parameter tunning engine will use to optimizethe hyper parameter. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path,
validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature
in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
# TO DO: Your code goes here to score the model with the validation data and capture the result
# with the hypertune library
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime is aligned with the serving runtime. Later in the notebook you will deploy the model to AI Platform Prediction, using the 1.15 version of AI Platform Prediction runtime. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started. NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TO DO: Your code goes here
###Output
_____no_output_____
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit an AI Platform hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe file below configures AI Platform hypertuning to run up to 4 trials, all of them in parallel, and to choose from two discrete values of `max_iter` and the linear range between 0.00001 and 0.001 for `alpha`. ExerciseComplete the `hptuning_config.yaml` file below so that the hyperparameter tuning engine tries* `max_iter`: the two values 200 and 300* `alpha`: a linear range of values between 0.00001 and 0.001NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
%%writefile {TRAINING_APP_FOLDER}/hptuning_config.yaml
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
trainingInput:
hyperparameters:
goal: MAXIMIZE
maxTrials: 4
maxParallelTrials: 4
hyperparameterMetricTag: accuracy
enableTrialEarlyStopping: TRUE
params:
# TO DO: Your code goes here
###Output
_____no_output_____
###Markdown
Start the hyperparameter tuning job. ExerciseUse the `gcloud` command to start the hyperparameter tuning job.NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=# TO DO: ADD YOUR REGION \
--job-dir=# TO DO: ADD YOUR JOB-DIR \
--master-image-uri=# TO DO: ADD YOUR IMAGE-URI \
--scale-tier=# TO DO: ADD YOUR SCALE-TIER \
--config # TO DO: ADD YOUR CONFIG PATH \
-- \
# TO DO: Complete the command
###Output
_____no_output_____
###Markdown
Monitor the job.You can monitor the job using Google Cloud console or from within the notebook using `gcloud` commands.
###Code
!gcloud ai-platform jobs describe $JOB_NAME
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI platform job stream logs will take approximately 5~10 minutes to display.** Retrieve HP-tuning results. After the job completes you can review the results using Google Cloud Console or programmatically by calling the AI Platform Training REST endpoint.
###Code
ml = discovery.build('ml', 'v1')
job_id = 'projects/{}/jobs/{}'.format(PROJECT_ID, JOB_NAME)
request = ml.projects().jobs().get(name=job_id)
try:
response = request.execute()
except errors.HttpError as err:
print(err)
except:
print("Unexpected error")
response
###Output
_____no_output_____
###Markdown
The returned run results are sorted by a value of the optimization metric. The best run is the first item on the returned list.
###Code
response['trainingOutput']['trials'][0]
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset. Configure and run the training job
###Code
alpha = response['trainingOutput']['trials'][0]['hyperparameters']['alpha']
max_iter = response['trainingOutput']['trials'][0]['hyperparameters']['max_iter']
JOB_NAME = "JOB_{}".format(time.strftime("%Y%m%d_%H%M%S"))
JOB_DIR = "{}/{}".format(JOB_DIR_ROOT, JOB_NAME)
SCALE_TIER = "BASIC"
!gcloud ai-platform jobs submit training $JOB_NAME \
--region=$REGION \
--job-dir=$JOB_DIR \
--master-image-uri=$IMAGE_URI \
--scale-tier=$SCALE_TIER \
-- \
--training_dataset_path=$TRAINING_FILE_PATH \
--validation_dataset_path=$VALIDATION_FILE_PATH \
--alpha=$alpha \
--max_iter=$max_iter \
--nohptune
!gcloud ai-platform jobs stream-logs $JOB_NAME
###Output
_____no_output_____
###Markdown
**NOTE: The above AI platform job stream logs will take approximately 5~10 minutes to display.** Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on Cloud Storage.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to AI Platform Prediction Create a model resource ExerciseComplete the `gcloud` command below to create a model with `model_name` in `$REGION` tagged with `labels`:NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_name = 'forest_cover_classifier'
labels = "task=classifier,domain=forestry"
!gcloud # TO DO: Your code goes here
###Output
_____no_output_____
###Markdown
Create a model version Exercise Complete the `gcloud` command below to create a version of the model:NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
model_version = 'v01'
!gcloud # TO DO: Complete the command \
--model=# TO DO: ADD YOUR MODEL NAME \
--origin=# TO DO: ADD YOUR PATH \
--runtime-version=# TO DO: ADD YOUR RUNTIME \
--framework=# TO DO: ADD YOUR FRAMEWORK \
--python-version=# TO DO: ADD YOUR PYTHON VERSION \
--region # TO DO: ADD YOUR REGION
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON formatted instances.
###Code
input_file = 'serving_instances.json'
with open(input_file, 'w') as f:
for index, row in X_validation.head().iterrows():
f.write(json.dumps(list(row.values)))
f.write('\n')
!cat $input_file
###Output
_____no_output_____
###Markdown
Invoke the model ExerciseUsing the `gcloud` command send the data in `$input_file` to your model deployed as a REST API:NOTE: If you need help, you may take a look at the complete solution by navigating to **mlops-on-gcp > workshops > kfp-caip-sklearn > lab-01-caip-containers** and opening **lab-01.ipynb**.
###Code
!gcloud # TO DO: Complete the command
###Output
_____no_output_____ |
notebooks/LogisticRegression_banknote_dataset.ipynb | ###Markdown
Classification task. A distinctive feature of the classification task, as opposed to the regression task, is the range of values of the target variable. While in a regression task the target variable $y$ takes real values, in a classification task $y$ belongs to a finite discrete set. When this set consists of only 2 elements, we speak of a binary classification task. Example: detecting counterfeit banknotes. A description of this dataset is available at this [link](https://archive.ics.uci.edu/ml/datasets/banknote+authentication). It describes the task of classifying banknotes into genuine (class 0) and counterfeit (class 1). Each banknote is described by 4 numeric features obtained by applying various wavelet transforms to photographs of the banknote. The following features are available:* variance* skewness* curtosis* entropy. The target variable is given in the *class* column and takes the values 0 or 1. Exploratory data analysis. Let us load the dataset described above and study its characteristics.
###Code
import pandas as pd
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00267/data_banknote_authentication.txt'
ds = pd.read_csv(dataset_url,
names = [
'variance',
'skewness',
'curtosis',
'entropy',
'class',
],
)
ds.head()
###Output
_____no_output_____
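###Markdown
As a small follow-up (a minimal sketch using the `ds` frame loaded above), we can also look at the summary statistics of the features and check how balanced the classes are:
###Code
# Summary statistics of the numeric features
print(ds.describe())
# Number of examples per class
print(ds['class'].value_counts())
###Output
_____no_output_____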
###Markdown
Class visualization. For simplicity we will consider only 2 features: variance and skewness. Let us plot these two features and color the points belonging to different classes in different colors.
###Code
%pylab inline
import seaborn as sns
sns.set(rc={'figure.figsize':(11,8)})
sns.scatterplot(x='variance', y='skewness', hue='class', data=ds.sample(500));
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
The plot clearly shows two distributions that overlap. To solve the classification task we need to construct a decision rule. In this case it can be represented as a line in the plane that separates the distributions in the "best" way. Then all points on one side of the line can be considered as belonging to class 0, and all points on the other side to class 1. This raises the following questions:* how do we compute the coefficients of this line?* what does separating in the "best" way mean?Let us try to draw a few such lines.
###Code
import numpy as np
sns.scatterplot(x='variance', y='skewness', hue='class', data=ds.sample(500));
lines = [([0, -5], 'g'),
([-3, -4], 'y')]
for line in lines:
x = np.linspace(-6, 6, 100)
y = line[0][0] + x*line[0][1]
plt.plot(x, y, color=line[1], alpha=0.8)
plt.ylim((-15, 15));
###Output
_____no_output_____
###Markdown
The plot shows that whichever line we choose, there will always be points in the training set on which the classifier gives a wrong answer. Let us compute, for the two lines shown above, a metric such as accuracy, which is the number of correct classifier decisions normalized to one.
###Code
from sklearn.metrics import accuracy_score
acc = []
for line in lines:
pred = ds['skewness'] - line[0][1]*ds['variance'] - line[0][0] < 0
acc.append(accuracy_score(ds['class'], pred))
print(f"ะขะพัะฝะพััั ะดะปั ะทะตะปะตะฝะพะน ะฟััะผะพะน: {acc[0]:.3}")
print(f"ะขะพัะฝะพััั ะดะปั ะถะตะปัะพะน ะฟััะผะพะน: {acc[1]:.3}")
###Output
Accuracy for the green line: 0.873
Accuracy for the yellow line: 0.848
###Markdown
Logistic regression. Comparing the results, we see that the accuracy for the green line is higher, and therefore, from the point of view of this criterion, its decision rule is "better" than that of the yellow line. Such reasoning leads us to the following formal problem statement: we should pose an optimization problem in which the varied parameters are the parameters of the separating line and the minimized function is a function inverse to the accuracy. Let us write this down in formal notation, and at once for the $n$-dimensional case. Let $\theta = (\theta_0, \theta_1, \dots, \theta_n)$ be the coefficients of the canonical representation of a hyperplane (in the two-dimensional case this is a line), and $x = (1, x_1, x_2, \dots, x_n)$ the extended feature vector (with an additional unit component). We will call one of the classes the positive examples and the other the negative ones. Let the target variable $y$ take the value $1$ for examples from the positive class and the value $0$ for examples from the negative class. We formulate the hypothesis $h_\theta(x)$ so that its range of values lies in the interval $[0, 1]$ and its domain coincides with $\mathbb{R}$. Among the functions possessing such properties we single out and consider the logistic curve$$\sigma(z) = \frac{1}{1 + e^{-z}}.$$
###Code
z = np.linspace(-10, 10, 100)
sigma = 1/(1+np.exp(-z))
plt.plot(z, sigma)
plt.ylim(-0.1, 1.1)
plt.xlabel('$z$')
plt.title('The logistic function $\sigma(z)$');
###Output
_____no_output_____
###Markdown
Let us write the hypothesis $h_\theta(x)$ in terms of the logistic function $\sigma(z)$ as$$h_\theta(x) = \sigma(\theta^Tx).$$We will interpret the value of the hypothesis $h_\theta(x)$ as the probability that the object described by the feature vector $x$ belongs to the positive class, $h_\theta(x) = P(y=1 |~x; \theta)$. Correspondingly, $1 - h_\theta(x) = P(y = 0 |~x; \theta)$ is the probability that the object belongs to the negative class. Example: a mixture of normal distributions. As an illustrative example, consider a classification task in which the training examples have a one-dimensional feature and correspond to two normal distributions.
###Code
from scipy.stats import norm
np.random.seed(10)
mean_0 = 0.7
std_0 = 0.3
mean_1 = -0.5
std_1 = 0.4
nsamples_0 = 100
nsamples_1 = 100
samples_0 = mean_0 + sqrt(std_0)*np.random.randn(nsamples_0)
samples_1 = mean_1 + sqrt(std_1)*np.random.randn(nsamples_1)
x_0 = np.linspace(mean_0 - 5*std_0, mean_0 + 5*std_0, 100)
x_1 = np.linspace(mean_1 - 5*std_1, mean_1 + 5*std_1, 100)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression().fit(np.hstack((samples_0, samples_1)).reshape(-1, 1),
np.hstack(([1]*nsamples_0, [0]*nsamples_1)))
theta_0 = clf.intercept_[0]
theta_1 = clf.coef_[0][0]
x = np.linspace(mean_1 - 5*std_1, mean_0 + 5*std_0)
sigma = 1/(1 + np.exp(-(theta_0 + theta_1*x)))
draw_samples = 25
plt.scatter(x = np.random.choice(samples_0, draw_samples), y = [0]*draw_samples, color='red', s=20)
plt.plot(x_0, norm.pdf(x_0, mean_0, std_0), color='red', alpha=0.6)
plt.scatter(x = np.random.choice(samples_1, draw_samples), y = [0]*draw_samples, color='blue', s=20)
plt.plot(x_1, norm.pdf(x_1, mean_1, std_1), color='blue', alpha=0.6)
plt.plot(x, sigma, color='orange')
plt.axhline(0.5, color='black', linestyle='--')
plt.axvline(-theta_0/theta_1, color='black', linestyle='--')
plt.text(-theta_0/theta_1 + 0.2, -0.1, '$P(y=1|x) > 0.5$')
plt.text(-theta_0/theta_1 - 1.3, -0.1, '$P(y=1|x) < 0.5$')
plt.title('Separation of the two distributions')
###Output
_____no_output_____
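###Markdown
As a sanity check (a sketch reusing `clf`, `theta_0` and `theta_1` defined above), the dashed vertical line is exactly the point where the predicted probability crosses 0.5:
###Code
x_boundary = -theta_0/theta_1
# At the decision boundary both class probabilities should be close to 0.5
print(clf.predict_proba(np.array([[x_boundary]])))
###Output
_____no_output_____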
###Markdown
Loss function. Using the same loss function as in the linear regression method, i.e., the mean squared error, is complicated by the fact that with the nonlinear hypothesis $h_\theta(x)$ there is no guarantee that a loss function of the form$$L(\theta) = \frac{1}{2n}\sum\limits_{i=1}^{n}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$will be convex. Let us check this statement on our banknote classification task. For clarity we simplify the task and consider only the variance feature. As the hypothesis we take the one-parameter function $h_\theta(x) = \sigma(\theta x)$ without a free term. Let us plot $L(\theta)$.
###Code
def mse(theta, x, y):
n = len(y)
def sigma(z):
return 1/(1 + np.exp(-z))
return 1/(2*n)*np.sum((sigma(theta*x) - y)**2)
x = ds['variance']
y = ds['class']
theta = np.linspace(-10, 10, 100)
L = [mse(th, x, y) for th in theta]
plt.plot(theta, L)
plt.xlabel('$\\theta$')
plt.ylabel('$L(\\theta)$')
plt.title('Quadratic loss function for the classification task');
###Output
_____no_output_____
###Markdown
Logistic loss function. As the loss function for the logistic regression method, consider an expression of the form$$L(\theta) = - \frac{1}{n}\sum\limits_{i=1}^n\begin{cases}\log (1-h_\theta(x^{(i)})),&& y^{(i)} = 0 \\\log h_\theta(x^{(i)}),&& y^{(i)} = 1.\end{cases}$$To understand the shape of $L(\theta)$, let us look at how the summands behave for $y = 0$ and $y = 1$ separately.
###Code
fig, ax = plt.subplots(1, 2, figsize=(16, 5))
h = np.linspace(0.01, 0.99, 100)
ax[0].plot(h, -np.log(1 - h))
ax[0].set_xlabel('$h_{\\theta}(x)$')
ax[0].set_title('$y = 0, -\\log (1-h_\\theta(x^{(i)}))$')
ax[1].plot(h, -np.log(h))
ax[1].set_xlabel('$h_{\\theta}(x)$')
ax[1].set_title('$y = 1, -\\log h_\\theta(x^{(i)})$');
###Output
_____no_output_____
###Markdown
The left plot clearly shows that when the true class is $y = 0$, this loss function rewards us as the hypothesis $h_\theta(x) \rightarrow 0$ and penalizes us as $h_\theta(x) \rightarrow 1$. For the other class ($y = 1$) it is exactly the other way around: the loss function rewards us as $h_\theta(x) \rightarrow 1$ and penalizes us as $h_\theta(x) \rightarrow 0$.Let's see what the sum of such penalties over the whole training set looks like as a function of the parameter $\theta$.
###Code
def logloss(theta, x, y):
n = len(y)
def sigma(z):
return 1/(1 + np.exp(-z))
return -1/n*np.sum(y*np.log(sigma(theta*x)) + (1-y)*np.log(1 - sigma(theta*x)))
x = ds['variance']
y = ds['class']
theta = np.linspace(-10, 10, 100)
L = [logloss(th, x, y) for th in theta]
plt.plot(theta, L)
plt.xlabel('$\\theta$')
plt.ylabel('$L(\\theta)$')
plt.title('Logistic loss function for the classification problem');
###Output
/Users/filonov/.venv/lib/python3.8/site-packages/pandas/core/arraylike.py:364: RuntimeWarning: divide by zero encountered in log
result = getattr(ufunc, method)(*inputs, **kwargs)
###Markdown
The plot clearly shows that this loss function is convex, so the gradient descent method is a good fit for finding its minimum.To apply it, let's rewrite the loss function in the equivalent form$$L(\theta) = -\frac{1}{n}\sum\limits_{i=1}^n\left[y^{(i)}\log h_\theta(x^{(i)}) + (1 - y^{(i)})\log\left(1 - h_\theta(x^{(i)})\right)\right] = -\frac{1}{n}\sum\limits_{i=1}^n\left[y^{(i)}\log \sigma(\theta^Tx^{(i)}) + (1 - y^{(i)})\log\left(1 - \sigma(\theta^Tx^{(i)})\right)\right].$$and find its derivative$$\frac{\partial L}{\partial \theta_j} = -\frac{1}{n}\sum\limits_{i=1}^n \left[ \frac{ y^{(i)} \sigma'(\theta^Tx^{(i)})}{\sigma(\theta^T x^{(i)})} - \frac{(1 - y^{(i)}) \sigma'(\theta^Tx^{(i)})}{1 - \sigma(\theta^T x^{(i)})}\right] x_j^{(i)}.$$Taking into account that$$\sigma'(z) = \left(\frac{1}{1 + e^{-z}}\right)' = \frac{e^{-z}}{(1 + e^{-z})^2} = \frac{1 - 1 + e^{-z}}{(1 + e^{-z})^2} = \frac{1}{1 + e^{-z}} \left(1 - \frac{1}{1 + e^{-z}}\right) = \sigma(z)(1 - \sigma(z)),$$we get$$\frac{\partial L}{\partial \theta_j} = -\frac{1}{n}\sum\limits_{i=1}^n \left[y^{(i)}(1 - \sigma(\theta^Tx^{(i)})) - (1 - y^{(i)})\sigma(\theta^Tx^{(i)})\right]x_j^{(i)} = -\frac{1}{n}\sum\limits_{i=1}^n \left[y^{(i)} - \sigma(\theta^Tx^{(i)}) \right]x_j^{(i)},\quad j = \overline{1,m}.$$Or, in matrix form,$$\operatorname{grad} L(\theta) = \frac{1}{n}X^T(\sigma(X\theta) - y).$$Here $\sigma$ is applied elementwise to the vector $X\theta$. Gradient descent methodLet's find the parameters of the logistic regression for our test problem using the gradient descent method
###Code
def sigma(z):
return 1/(1+np.exp(-z))
def grad(y, X, theta):
n = y.shape[0]
return 1/n * X.transpose() @ (sigma(X @ theta) - y)
def L(y, X, theta):
n = y.shape[0]
return -1/(n)*np.sum(y*np.log(sigma(X @ theta)) + (1 - y)*np.log(1 - sigma(X @ theta)))
def fit(y, X, theta_0, alpha=0.001, nsteps = 100):
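    # Plain batch gradient descent: repeatedly step against the gradient with a fixed learning rate alpha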
theta = np.copy(theta_0)
loss = [L(y, X, theta)]
for i in range(nsteps):
theta -= alpha*grad(y, X, theta)
loss.append(L(y, X, theta))
return loss, theta
###Output
_____no_output_____
###Markdown
Data preprocessingFor better convergence of the method, let's normalize the features.
###Code
X = ds[['variance', 'skewness']]
y = ds['class']
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0)
norm_X = (X - X_mean)/X_std
n = len(X)
X = np.hstack((np.ones((n, 1)), norm_X))
m = 2
theta_0 = np.zeros(m + 1)
loss_history, theta_star = fit(y, X, theta_0, alpha=1e-2, nsteps=5000)
plt.plot(loss_history)
plt.xlabel('$k$')
plt.ylabel('$L(\\theta^{(k)})$')
plt.title('Learning curve');
###Output
_____no_output_____
###Markdown
Let's plot the resulting separating hyperplane with parameters $\theta^*$
###Code
theta_star
sns.scatterplot(x='variance', y='skewness', hue='class', data=ds.sample(500));
x_tmp = np.linspace(-6, 6, 100)
y_tmp = - (theta_star[0] + (x_tmp - X_mean[0])/X_std[0]*theta_star[1])/theta_star[2]
plt.plot(x_tmp, X_std[1]*y_tmp + X_mean[1] , color="green", alpha=0.8, linestyle='--')
plt.ylim((-15, 15));
###Output
_____no_output_____
###Markdown
Let's compute the fraction of correct answers for the resulting classifier
###Code
y_pred = X @ theta_star > 0
logreg_score = accuracy_score(y, y_pred)
print(f'Logistic Regression accuracy: {logreg_score:.3f}')
###Output
Logistic Regression accuracy: 0.887
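###Markdown
As a quick sanity check (a sketch, assuming the `X` and `y` defined above and that scikit-learn is available), we can compare our gradient-descent solution against scikit-learn's LogisticRegression on the same normalized features:
###Code
from sklearn.linear_model import LogisticRegression

clf_check = LogisticRegression()
clf_check.fit(X[:, 1:], y)  # X[:, 0] is the bias column; sklearn adds its own intercept
print(f'sklearn accuracy: {clf_check.score(X[:, 1:], y):.3f}')
###Output
_____no_output_____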
###Markdown
Non-linear decision boundaryAs an example of a classification problem that requires building non-linear decision boundaries, let's consider a dataset with microchip test results.
###Code
microchip = pd.read_csv(
'../datasets/microchip_test.csv',
names=['Test1', 'Test2', 'Passed'],
)
sns.scatterplot(x='Test1', y='Test2', hue='Passed', data=microchip, s=100);
###Output
_____no_output_____
###Markdown
The figure clearly shows that the decision boundary has to be non-linear. Let's try adding quadratic features to our dataset:$$x' = \begin{pmatrix}1 & x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2\end{pmatrix}.$$
###Code
n = len(microchip)
m = 2
tmpX = np.ones((n, m))
X = np.zeros((n, 2*(m+1)))
tmpX[:] = microchip[['Test1', 'Test2']]
for i in range(n):
X[i, :] = np.array([1, tmpX[i, 0], tmpX[i, 1], tmpX[i, 0]**2, tmpX[i, 0]*tmpX[i, 1], tmpX[i, 1]**2])
y = microchip['Passed'].values
theta_0 = np.zeros(2*(m+1))
loss_history, theta_star = fit(y, X, theta_0, alpha=1e-1, nsteps=20000)
plt.plot(loss_history)
plt.xlabel('$k$')
plt.ylabel('$L(\\theta^{(k)})$')
plt.title('Learning curve');
###Output
_____no_output_____
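###Markdown
A note on the design choice: the quadratic features above are built by hand for clarity. The same expansion can be obtained with scikit-learn (a sketch, assuming the `microchip` frame loaded above):
###Code
from sklearn.preprocessing import PolynomialFeatures

X_poly = PolynomialFeatures(degree=2).fit_transform(microchip[['Test1', 'Test2']])
print(X_poly.shape)  # (n, 6): columns 1, x1, x2, x1^2, x1*x2, x2^2
###Output
_____no_output_____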
###Markdown
The decision curve
###Code
import warnings
warnings.filterwarnings('ignore')
npoints = 500
x1 = np.linspace(-1.0, 1.25, npoints)
x2 = np.linspace(-1.0, 1.25, npoints)
xx1, xx2 = np.meshgrid(x1, x2)
def decision_func(x, theta):
return theta[0] + theta[1]*x[0] + theta[2]*x[1] + theta[3]*x[0]**2 + theta[4]*x[0]*x[1] + theta[5]*x[1]**2
points = np.c_[xx1.ravel(), xx2.ravel()]
Z = np.array([1 if decision_func(x, theta_star) > 0 else 0 for x in points])
Z = Z.reshape(xx1.shape)
sns.scatterplot(x='Test1', y='Test2', hue='Passed', data=microchip, s=100);
plt.contour(xx1, xx2, Z, levels=[0], colors='green', alpha=0.6, linestyles='--');
###Output
_____no_output_____ |
lessons/python/ESPIN-04- For loops and conditionals.ipynb | ###Markdown
Programming with Python For loops minutes: 30---> Learning Objectives {.objectives}>> * Explain what a for loop does> * Correctly write for loops to repeat simple calculations> * Trace changes to a loop variable as the loop runs> * Trace changes to other variables as they are updated by a for loop For loopsAutomating repetitive tasks is best accomplished with a loop. A For Loop repeats a set of actions for every item in a collection (every letter in a word, every number in some range, every name in a list) until it runs out of items:
###Code
word = 'lead'
for char in word:
    print(char)
###Output
_____no_output_____
###Markdown
This is shorter than writing individual statements for printing every letter in the word and it easily scales to longer or shorter words:
###Code
word = 'aluminium'
for char in word:
    print(char)
word = 'tin'
for char in word:
    print(char)
###Output
_____no_output_____
###Markdown
The general form of a for loop is:
###Code
# for item in collection:
# do things with item
###Output
_____no_output_____
###Markdown
A for loop starts with the word "for", then the variable name that each item in the collection is going to take inside the loop, then the word "in", and then the collection or sequence of items to loop through.In Python, there must be a colon at the end of the line starting the loop. The commands that are run repeatedly inside the loop are indented below that. Unlike many other languages, there is no command to end a loop (e.g. `end for`): the loop ends once the indentation moves back. Practice your skillsMake a for loop to count the letters in the word elephant. Itโs worth tracing the execution of this little program step by step. Since there are eight characters in โelephantโ, the statement inside the loop will be executed eight times. The first time around, `length` is zero (the value assigned to it on line 1) and `letter` is "e". The code adds 1 to the old value of `length`, producing 1, and updates `length` to refer to that new value. The next time the loop starts, `letter` is "l" and `length` is 1, so `length` is updated to 2. Once there are no characters left in "elephant" for Python to assign to `letter`, the loop finishes and the `print` statement tells us the final value of length.Note that a loop variable is just a variable thatโs being used to record progress in a loop. It still exists after the loop is over (and has the last value it had inside the loop). We can re-use variables previously defined as loop variables, overwriting their value:
###Code
letter = 'z'
for letter in 'abc':
print (letter)
print ('after the loop, letter is', letter)
###Output
_____no_output_____
###Markdown
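Back to the "Practice your skills" exercise: a minimal sketch of counting the letters in "elephant", with the `length` and `letter` variables named as in the trace above.
###Code
length = 0
for letter in 'elephant':
    length = length + 1
print('There are', length, 'letters')
###Output
_____no_output_____
###Markdown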
Making ChoicesWhen analyzing data, we'll often want to automatically recognize differences between values and take different actions on the data depending on some conditions. Here, we'll learn how to write code that runs only when certain conditions are true. ConditionalsWe can ask Python to run different commands depending on a condition with an if statement:
###Code
num = 42
if num > 100:
print ('greater')
else:
print ('not greater')
print ('done')
###Output
_____no_output_____
###Markdown
The second line of this code uses the keyword `if` to tell Python that we want to make a choice. If the test that follows the `if` statement is true, the commands in the indented block are executed. If the test is false, the indented block beneath the else is executed instead. Only one or the other is ever executed.Conditional statements donโt have to include an `else`. If there isnโt one, Python simply does nothing if the test is false:
###Code
num = 42
print ('before conditional...')
if num > 100:
print (num, 'is greater than 100')
print ('...after conditional')
###Output
_____no_output_____
###Markdown
We can also chain several tests together using `elif`, which is short for โelse ifโ. The following Python code uses elif to print the sign of a number. We use a double equals sign `==` to test for equality between two values. The single equal sign is used for assignment:
###Code
num = -3
if num > 0:
print (num, "is positive")
elif num == 0:
print (num, "is zero")
else:
print (num, "is negative")
###Output
_____no_output_____
###Markdown
We can also combine tests using `and` and `or`. `and` is only true if both parts are true:
###Code
if (1 > 0) and (-1 > 0):
print ('both tests are true')
else:
print ('at least one test is false')
###Output
_____no_output_____
###Markdown
while `or` is true if at least one part is true:
###Code
if (1 > 0) or (-1 > 0):
print ('at least one test is true')
else:
print ('neither test is true')
###Output
_____no_output_____
LAB-01/144_01_01.ipynb | ###Markdown
1) Draw a scatter plot between Age and Salary for the "Data_for_Transformation.csv" file
###Code
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('Data_for_Transformation.csv')
plt.scatter(data['Age'],data['Salary'])
plt.show()
###Output
_____no_output_____
###Markdown
2) Draw Histogram of column/feature "Salary"
###Code
plt.hist(data['Salary'],bins=5)
plt.show()
###Output
_____no_output_____
###Markdown
3) Plot bar chart for column/feature "Country"
###Code
df=pd.DataFrame(data,columns=['Country'])
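# Count how many records belong to each country; the index keys are the country names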
val = df['Country'].value_counts()
labels = val.keys()
fig = plt.figure(figsize = (6, 5))
plt.bar(labels, val,width = 0.4)
plt.xlabel("Country")
plt.ylabel("No. of Records")
plt.show()
###Output
_____no_output_____ |
Exercises/Exercise 8 - Multiclass with Signs/Exercise 8 - Answer.ipynb | ###Markdown
The data for this exercise is available at: https://www.kaggle.com/datamunge/sign-language-mnist/homeSign up and download to find 2 CSV files: sign_mnist_test.csv and sign_mnist_train.csv -- You will upload both of them using this button before you can continue.
###Code
from google.colab import files
import csv
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

uploaded = files.upload()
def get_data(filename):
with open(filename) as training_file:
csv_reader = csv.reader(training_file, delimiter=',')
first_line = True
temp_images = []
temp_labels = []
for row in csv_reader:
if first_line:
# print("Ignoring first line")
first_line = False
else:
temp_labels.append(row[0])
image_data = row[1:785]
image_data_as_array = np.array_split(image_data, 28)
temp_images.append(image_data_as_array)
images = np.array(temp_images).astype('float')
labels = np.array(temp_labels).astype('float')
return images, labels
training_images, training_labels = get_data('sign_mnist_train.csv')
testing_images, testing_labels = get_data('sign_mnist_test.csv')
print(training_images.shape)
print(training_labels.shape)
print(testing_images.shape)
print(testing_labels.shape)
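# Add a channel axis: Conv2D expects inputs of shape (samples, height, width, channels)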
training_images = np.expand_dims(training_images, axis=3)
testing_images = np.expand_dims(testing_images, axis=3)
train_datagen = ImageDataGenerator(
rescale=1. / 255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(
rescale=1. / 255)
print(training_images.shape)
print(testing_images.shape)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(26, activation=tf.nn.softmax)])
model.compile(optimizer = tf.optimizers.Adam(),
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(train_datagen.flow(training_images, training_labels, batch_size=32),
steps_per_epoch=len(training_images) / 32,
epochs=15,
validation_data=validation_datagen.flow(testing_images, testing_labels, batch_size=32),
validation_steps=len(testing_images) / 32)
model.evaluate(testing_images, testing_labels)
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/Probability and Combinatorics Exercise-checkpoint.ipynb | ###Markdown
Probability and Combinatorics Exercise Probabilistic Events. Combinatorics and Counting. Distributions Problem 1. Exploring Distribution ParametersA good idea to visualize and explore the parameters of various distributions is just to plot them.We can do this in either one of two ways:1. Draw (generate) many random variables which follow that distribution. Plot their histogram2. Write the distribution function directly and plot itEither of these will work but the second approach will give us better looking results. [`scipy.stats`](https://docs.scipy.org/doc/scipy-0.19.1/reference/stats.html) has a lot of built-in distributions that we can use. Each of them has its own use cases.It's very important that we plot discrete and continuous distributions in different ways. **We must not make discrete distributions look continuous**. That is, discrete distributions are only defined for integer number of trials: $n \in \mathbb{N}$.Let's plot the binomial and Gaussian distributions.
###Code
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt

def plot_binomial_distribution(x, n, p):
"""
Plots the binomial distribution with parameters n and p. The parameter x specifies the values
where the function is evaluated at
"""
binomial = scipy.stats.binom.pmf(x, n, p)
plt.scatter(x, binomial, color = "blue")
plt.vlines(x, 0, binomial, color = "blue", linewidth = 5, alpha = 0.5)
plt.show()
def plot_gaussian_distribution(mu, sigma, x):
"""
Plots the Gaussian distribution with parameters mu and sigma. The parameter x specifies
the values where the function is evaluated at
"""
gaussian = scipy.stats.norm.pdf(x, loc = mu, scale = sigma)
plt.plot(x, gaussian, color = "blue")
plt.show()
x_binomial = np.arange(1, 10)
plot_binomial_distribution(x_binomial, 10, 0.5)
x_gaussian = np.linspace(-3, 3, 1000)
plot_gaussian_distribution(0, 1, x_gaussian)
###Output
_____no_output_____
###Markdown
These look similar. That's for a good reason: the Gaussian distribution arises as the limiting case of the binomial distribution as $n \rightarrow \infty$.What do these parameters specify exactly? Let's find out. Take the binomial distribution. Keep $p = 0.5$ and change $n$. Plot several values of $n$ in the same plot, with different colors. **What values to choose?** Remember that $n$ was the number of experiments, so it should be an integer $\ge 1$.Now keep $n$ at some reasonable value (a number between 10 and 30 should be good) and change $p$. $p$ is a probability so its values must be between 0 and 1.What can you conclude? How does the function shape change? When is it symmetrical and when is it not?Perform the same kind of operations on $\mu$ and $\sigma$ with the Gaussian distribution. What do these parameters represent?If you get stuck, try to find what the distribution functions should look like on the Internet.
###Code
# Write your code here
###Output
_____no_output_____
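###Markdown
One possible starting point (a sketch, not the only solution): overlay several binomial PMFs with $p$ fixed at 0.5 and $n$ varying; the same pattern works for fixed $n$ and varying $p$.
###Code
for n, color in zip([5, 10, 20, 30], ["red", "green", "blue", "orange"]):
    x = np.arange(0, n + 1)
    binomial = scipy.stats.binom.pmf(x, n, 0.5)
    plt.scatter(x, binomial, color = color, s = 10, label = "$n = {}$".format(n))
plt.legend()
plt.show()
###Output
_____no_output_____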
###Markdown
Problem 2. Central Limit TheoremThe **Central Limit Theorem** tells us that no matter what quirky functions we have, the sum of many independent draws from them is going to be distributed approximately according to the normal distribution. Let's demonstrate this.Consider the following functions:$$ f(x) = 1 $$$$ f(x) = 2x $$$$ f(x) = 3x^2 $$$$ f(x) = 4\lvert x - 0,5\rvert $$$$ f(x) = 2 - 4\lvert x - 0,5\rvert $$For each of these functions `f`:1. Generate a big array of, say, 2000 values `x` between 0 and 1: `np.linspace(0, 1, 2000)`2. Generate the array f(x)3. Create 1000 experiments like this: 1. Generate 25 random values $x$ between 0 and 1: `np.random.rand(25)` 2. Generate $y = f(x)$ 3. Sum all 25 values $y$ and add the sum to the array of sums4. Plot the distribution of the 1000 sumsWhat do you get? Can you experiment with a combination of functions? When is the normal distribution a good approximation of the real distribution?
###Code
def plot_function(f, ax, min_x = 0, max_x = 1, values = 2000):
x = np.linspace(min_x, max_x, values)
y = f(x)
ax.plot(x, y)
def perform_simulation(f, ax):
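    # Each experiment draws 25 uniform samples, applies f and records the sum;
    # by the Central Limit Theorem the histogram of these sums is approximately Gaussian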
sums = []
    for experiment in range(1000):
random_numbers = np.random.rand(25)
current_sum = f(random_numbers).sum()
sums.append(current_sum)
ax.hist(sums)
def plot_results(f, min_x = 0, max_x = 1, values = 2000):
vectorized_function = np.vectorize(f)
figure, (ax1, ax2) = plt.subplots(1,2, figsize = (12,4))
plot_function(vectorized_function, ax1, min_x, max_x, values)
perform_simulation(vectorized_function, ax2)
plot_results(lambda x: 1)
plot_results(lambda x: 2 * x)
plot_results(lambda x: 3 * x**2)
plot_results(lambda x: 4 * np.abs(x-0.5))
plot_results(lambda x: 2-4 * np.abs(x-0.5))
###Output
_____no_output_____
###Markdown
Problem 3. Birthday ParadoxHow many people do we need to have in a room, so that the probability of two people sharing a birthday is $p(A) > 0,5$?We suppose no leap years, so a year has 365 days. We could expect that we need about $365/2=182$ people. Well, the truth is a bit different. Solution**Random variable:** $A$: probability that two people share a birthdayIt's sometimes easier to work with the complementary variable: $\bar{A}$ - probability that **no people** share a birthday. Let's suppose we have $r$ people in the room. Of course, if $r = 1$, e.g. only one person, then $p(\bar{A}) = 1$ (there's no one to share a birthday with). If $r \geq 366$, then $p(A) = 1$ (by the so-called [pigeonhole principle](https://en.wikipedia.org/wiki/Pigeonhole_principle): if we have 366 people and 365 days, there's at least one day with a pair of people).Order the people 1 to $r$. Every person's birthday is independent, so that means 365 days for the first, 365 days for the second, and so on: $365^r$ birthday possibilities in total.We want no duplications of birthdays. The first person has 365 days to choose from, the second has 364, and so on. The $r$th person has $365-r+1$ days to choose from. Total: $365.364.363.\cdots.(365 - r + 1)$The probability that no people share the same birthday is the fraction of all non-shared birthdays to all possible birthdays:$$ p(\bar{A})=\frac{365.364.363.\cdots.(365 - r + 1)}{365^r} $$We're interested in $A$, not $\bar{A}$, and we know that these are complementary, so their probabilities add up to 1$$p(A) = 1 - p(\bar{A})$$Write a function which plots the probability of $r$ people sharing a birthday as a function of $r$. Remember this is a discrete distribution and should be plotted as discrete points, not as a continuous curve.
###Code
def calculate_birthday_probability(r):
    """
    Returns the probability of at least two of r people sharing a birthday.
    A year is supposed to have 365 days
    """
    p_no_shared = 1.0
    for i in range(r):
        p_no_shared *= (365 - i) / 365  # person i + 1 must avoid the i birthdays already taken
    return 1 - p_no_shared

people = np.arange(1, 366)
probabilities = [calculate_birthday_probability(r) for r in people]
plt.scatter(people, probabilities, s = 10)
plt.vlines(people, 0, probabilities, alpha = 0.3)
plt.xlabel("$r$")
plt.ylabel("$p(A)$")
plt.show()
[print(x) for x in probabilities if x < 0.5]
###Output
0.0
0.002739726027397249
0.008204165884781345
0.016355912466550215
0.02713557369979347
0.040462483649111536
0.056235703095975365
0.07433529235166902
0.09462383388916673
0.11694817771107768
0.14114137832173312
0.1670247888380646
0.19441027523242926
0.22310251200497289
0.25290131976368635
0.2836040052528499
0.31500766529656055
0.34691141787178936
0.37911852603153673
0.41143838358058005
0.4436883351652058
0.47569530766254997
###Markdown
At how many people do you see a transition from $p(A) < 0,5$ to $p(A) > 0,5$?**Spoiler alert:** It's 23 people.Why so few? We're comparing everyone's birthday against everyone else's. We should **NOT** count the number of people, but the number of comparisons. In a room of 23 people, there are $\binom{23}{2} = 253$ total comparisons.In general, we could get a 50% chance of a match using $\sqrt{n}$ people in $n$ days. * Breaking Cryptography: Birthday AttackWe already saw that if we have $n$ days in one year, it takes about $\sqrt{n}$ people to have a 50% chance of two people sharing a birthday. This is used in cryptography for the so-called **birthday attack**.Let's first introduce **hashing functions**. A hashing function is a function which takes text (bits) of any length and **returns a fixed number of bits**. There are many such functions. Some of them are completely insecure and **ARE NOT** used in cryptography. They're still useful for other purposes, such as hash tables.Important properties of hashing functions:1. The output will have a fixed length, no matter whether the input is an empty string, one character, a full sentence, a full book or the entire history of mankind2. A concrete input will always produce the same outputOne such hashing function is **MD5**. It produces 128-bit hashes (32 hexadecimal symbols). This means that it takes the space of all possible texts and converts it to $2^{128} \approx 3.10^{38}$ possible hashes. Since there are far more inputs than hashes, by the pigeonhole principle, we can expect that many inputs will produce the same output. This is called a **hashing collision**.The birthday paradox tells us that using $\sqrt{n} = 2^{64} \approx 2.10^{19}$ hashes, we have a 50% probability of collision. This is still a very large number but compare it to $3.10^{38}$ - the difference is immense.You can see what these numbers mean in terms of CPU speed [here](https://blog.codinghorror.com/speed-hashing/).There are other algorithms which are even faster. The fastest one returns about $2^{18}$ hashes before it finds a collision.Another clever attack is using **rainbow tables**. These are massive dictionaries of precomputed hashes. So, for example, if the input is `password123`, its MD5 hash is `482c811da5d5b4bc6d497ffa98491e38`. Every time an algorithm sees this hash, it can convert it back to its input. Rainbow tables work because humans are more predictable than algorithms. When implementing any cryptography, remember that **humans are always the weakest factor of any cryptographic system**.***Optional:*** Write a function that finds collisions in **MD5** or **SHA1**. See [this](https://www.mscs.dal.ca/~selinger/md5collision/) demo for a good example, or [this StackExchange post](https://crypto.stackexchange.com/questions/1434/are-there-two-known-strings-which-have-the-same-md5-hash-value) for more examples.
###Code
# Write your code here
###Output
_____no_output_____
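###Markdown
A minimal sketch of the idea (an assumption on my part: instead of attacking full MD5, we truncate the digest to its first few hex characters, which makes the birthday attack finish in seconds while demonstrating exactly the same $\sqrt{n}$ effect):
###Code
import hashlib
import os

def find_truncated_md5_collision(hex_chars = 6):
    """Birthday attack on MD5 truncated to `hex_chars` hex symbols (4 bits each).
    We expect a collision after roughly sqrt(16 ** hex_chars) random inputs."""
    seen = {}
    while True:
        message = os.urandom(8)
        digest = hashlib.md5(message).hexdigest()[:hex_chars]
        if digest in seen and seen[digest] != message:
            return seen[digest], message, digest
        seen[digest] = message

m1, m2, digest = find_truncated_md5_collision()
print(m1.hex(), m2.hex(), '->', digest)
###Output
_____no_output_____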
###Markdown
Problem 4. Having Fun with Functions. Fourier TransformSometimes we can plot a **parametric curve**. We choose a parameter $t$, in this case $t \in [0; 2\pi]$. We then plot $x$ and $y$ as functions of $t$.Plot the function below.
###Code
t = np.linspace(0, 2 * np.pi, 2000)
x = -(721 * np.sin(t)) / 4 + 196 / 3 * np.sin(2 * t) - 86 / 3 * np.sin(3 * t) - 131 / 2 * np.sin(4 * t) + 477 / 14 * np.sin(5 * t) + 27 * np.sin(6 * t) - 29 / 2 * np.sin(7 * t) + 68 / 5 * np.sin(8 * t) + 1 / 10 * np.sin(9 * t) + 23 / 4 * np.sin(10 * t) - 19 / 2 * np.sin(12 * t) - 85 / 21 * np.sin(13 * t) + 2 / 3 * np.sin(14 * t) + 27 / 5 * np.sin(15 * t) + 7 / 4 * np.sin(16 * t) + 17 / 9 * np.sin(17 * t) - 4 * np.sin(18 * t) - 1 / 2 * np.sin(19 * t) + 1 / 6 * np.sin(20 * t) + 6 / 7 * np.sin(21 * t) - 1 / 8 * np.sin(22 * t) + 1 / 3 * np.sin(23 * t) + 3 / 2 * np.sin(24 * t) + 13 / 5 * np.sin(25 * t) + np.sin(26 * t) - 2 * np.sin(27 * t) + 3 / 5 * np.sin(28 * t) - 1 / 5 * np.sin(29 * t) + 1 / 5 * np.sin(30 * t) + (2337 * np.cos(t)
) / 8 - 43 / 5 * np.cos(2 * t) + 322 / 5 * np.cos(3 * t) - 117 / 5 * np.cos(4 * t) - 26 / 5 * np.cos(5 * t) - 23 / 3 * np.cos(6 * t) + 143 / 4 * np.cos(7 * t) - 11 / 4 * np.cos(8 * t) - 31 / 3 * np.cos(9 * t) - 13 / 4 * np.cos(10 * t) - 9 / 2 * np.cos(11 * t) + 41 / 20 * np.cos(12 * t) + 8 * np.cos(13 * t) + 2 / 3 * np.cos(14 * t) + 6 * np.cos(15 * t) + 17 / 4 * np.cos(16 * t) - 3 / 2 * np.cos(17 * t) - 29 / 10 * np.cos(18 * t) + 11 / 6 * np.cos(19 * t) + 12 / 5 * np.cos(20 * t) + 3 / 2 * np.cos(21 * t) + 11 / 12 * np.cos(22 * t) - 4 / 5 * np.cos(23 * t) + np.cos(24 * t) + 17 / 8 * np.cos(25 * t) - 7 / 2 * np.cos(26 * t) - 5 / 6 * np.cos(27 * t) - 11 / 10 * np.cos(28 * t) + 1 / 2 * np.cos(29 * t) - 1 / 5 * np.cos(30 * t)
y = -(637 * np.sin(t)) / 2 - 188 / 5 * np.sin(2 * t) - 11 / 7 * np.sin(3 * t) - 12 / 5 * np.sin(4 * t) + 11 / 3 * np.sin(5 * t) - 37 / 4 * np.sin(6 * t) + 8 / 3 * np.sin(7 * t) + 65 / 6 * np.sin(8 * t) - 32 / 5 * np.sin(9 * t) - 41 / 4 * np.sin(10 * t) - 38 / 3 * np.sin(11 * t) - 47 / 8 * np.sin(12 * t) + 5 / 4 * np.sin(13 * t) - 41 / 7 * np.sin(14 * t) - 7 / 3 * np.sin(15 * t) - 13 / 7 * np.sin(16 * t) + 17 / 4 * np.sin(17 * t) - 9 / 4 * np.sin(18 * t) + 8 / 9 * np.sin(19 * t) + 3 / 5 * np.sin(20 * t) - 2 / 5 * np.sin(21 * t) + 4 / 3 * np.sin(22 * t) + 1 / 3 * np.sin(23 * t) + 3 / 5 * np.sin(24 * t) - 3 / 5 * np.sin(25 * t) + 6 / 5 * np.sin(26 * t) - 1 / 5 * np.sin(27 * t) + 10 / 9 * np.sin(28 * t) + 1 / 3 * np.sin(29 * t) - 3 / 4 * \
np.sin(30 * t) - (125 * np.cos(t)) / 2 - 521 / 9 * np.cos(2 * t) - 359 / 3 * np.cos(3 * t) + 47 / 3 * np.cos(4 * t) - 33 / 2 * np.cos(5 * t) - 5 / 4 * np.cos(6 * t) + 31 / 8 * np.cos(7 * t) + 9 / 10 * np.cos(8 * t) - 119 / 4 * np.cos(9 * t) - 17 / 2 * np.cos(10 * t) + 22 / 3 * np.cos(11 * t) + 15 / 4 * np.cos(12 * t) - 5 / 2 * np.cos(13 * t) + 19 / 6 * np.cos(14 * t) + \
7 / 4 * np.cos(15 * t) + 31 / 4 * np.cos(16 * t) - np.cos(17 * t) + 11 / 10 * np.cos(18 * t) - 2 / 3 * np.cos(19 * t) + 13 / 3 * np.cos(20 * t) - 5 / 4 * np.cos(21 * t) + 2 / 3 * np.cos(
22 * t) + 1 / 4 * np.cos(23 * t) + 5 / 6 * np.cos(24 * t) + 3 / 4 * np.cos(26 * t) - 1 / 2 * np.cos(27 * t) - 1 / 10 * np.cos(28 * t) - 1 / 3 * np.cos(29 * t) - 1 / 19 * np.cos(30 * t)
plt.gca().set_aspect("equal")
plt.plot(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Interesting... Have a closer look at the variables `x` and `y`. Note that they're linear combinations of sines and cosines. There's nothing more except sines and cosines, multiplied by coefficients. How are these able to generate the picture? Can we generate any picture?Yes, we can generate pretty much anything and plot it as a parametric curve. See [this](https://www.wolframalpha.com/input/?i=Schroedinger+cat+bra-ket+curve) for example.It turns out that **every function**, no matter what, can be represented as a linear combination of sines and cosines. This is the basis of the **Fourier transform**. We'll look at it from two different perspectives: the algebraic one and the practical one. Algebraic perspective: Why does this transform exist? What does it mean?All functions form a **vector space**. We can see them as vectors. These vectors have infinitely many components which correspond to the infinitely many values $x \in (-\infty; \infty)$. The function space has infinitely many dimensions.We can find a basis in that space. After we've found a basis, we can express any other function as a linear combination of the basis functions. Any set of infinitely many linearly independent functions will work. But that doesn't help at all...We know that the best kind of basis is an *orthonormal basis*. This means that all basis vectors are orthogonal and each basis vector has "length" 1. Two vectors are orthogonal if their dot product is zero. Similarly, two functions are defined to be orthogonal if their product is zero, like this:$$ \int_a^b f(x)g(x)dx = 0 $$It can be shown that $1$, $\cos(mx)$ and $\sin(nx)$ ($m,n \in \mathbb{N}$) are orthogonal. So, the basis formed by them is orthogonal. They can also be made orthonormal if we divide by their norm. The norm of a function is defined by **functional analysis** - an area of mathematics which treats functions as vectors. We won't go into much more detail now. The norm for $1$ is 1, the norm for the trigonometric functions is $1/\sqrt{2}$.The takeaway is that ${1, \sqrt{2}\cos(mx), \sqrt{2}\sin(nx),\ m,n \in \mathbb{N}}$ is an orthonormal basis in the function space. All periodic functions with period $P$ can be described as linear combinations of these:$$ f(x) = \frac{a_0}{2} + \sum\left(a_n\cos\left(\frac{2\pi nx}{P}\right)+b_n\sin\left(\frac{2\pi nx}{P}\right)\right) $$This definition extends to non-periodic functions as well. Engineering perspectiveIn engineering, the Fourier transform **converts a function of time to a function of frequency**. The function of time is called a **signal**, and the function of frequency is the **spectrum** of that signal. There is a pair of functions - one inverts the other. We have two different options:1. We can inspect the spectrum2. We can modify the spectrumThis means that if some operation is very easy to perform in the spectrum we can perform it there using these steps:1. Create the spectrum from the signal - Fourier transform2. Perform the operation, e.g. remove a specific frequency3. Create the corrected signal from the corrected spectrum - inverse Fourier transformOne example usage is in audio processing. An audio signal is a 1D array of **samples** (numbers). Each audio signal has a *bitrate* which tells us how many samples are there in one second. Since audio is a function of time, we can easily get its spectrum.Some algorithms on images use the spectrum as well. The idea is exactly the same.Compare this entire process to how we created a **histogram**. 
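A quick numerical sanity check of the orthogonality claim above (a sketch using trapezoidal integration over one period $[0; 2\pi]$):
###Code
x = np.linspace(0, 2 * np.pi, 100001)
print(np.trapz(np.sin(2 * x) * np.sin(3 * x), x))  # ~0: different frequencies are orthogonal
print(np.trapz(np.sin(2 * x) * np.cos(2 * x), x))  # ~0: sine and cosine are orthogonal
print(np.trapz(np.sin(2 * x) * np.sin(2 * x), x))  # ~pi: a function is not orthogonal to itself
###Output
_____no_output_____
###Markdown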
Plotting a random variable $X$ as a function of the trial number is essentially plotting a function of time. To get the histogram, we counted how many times we saw each particular value. This is the same as taking the spectrum of the random variable. Problem 5. Working with Audio Files. Fourier TransformIn Python, it's easiest to work with `.wav` files. If we have other files, we can convert them first. To load audio files, we can use `scipy.io.wavfile`. Load the `c-note.wav` file. Use only one channel, e.g. the left one.
###Code
import scipy.io.wavfile
from scipy.fft import fft, fftfreq

bitrate, audio = scipy.io.wavfile.read("c-note.wav")
left_channel = audio[:, 0]
right_channel = audio[:, 1]
plt.plot(left_channel)
plt.xlabel("Sample number") # To get seconds, divide by the bitrate
plt.ylabel("Amplitude")
plt.show()
left_fft = fft(left_channel)
# fftfreq() returns the frequences in number of cycles per sample. Since we have `bitrate` samples in one second,
# to get the frequencies in Hz, we have to multiply by the bitrate
frequencies = fftfreq(len(left_channel)) * bitrate
plt.plot(frequencies, np.abs(left_fft))
plt.show()
###Output
_____no_output_____
###Markdown
Note that the signal is symmetric. This is always the case with Fourier transform. We are interested in only half the values (the ones which are $\ge 0$).
###Code
plt.plot(frequencies, np.abs(left_fft))
plt.xlim((0, 15000))
plt.xlabel("Frequency [Hz]")
plt.ylabel("Amplitude")
plt.show()
###Output
_____no_output_____
###Markdown
We can see that some frequencies have higher intensities than others. Also, they are evenly spaced. This is because the sample is only one note: C4, which has a fundamental frequency of $261,6Hz$. Most other "loud" frequencies are multiples of the fundamental frequency: these are called **overtones**. There are other frequencies as well. The combination of frequencies which one instrument emphasizes and the ones that it dampens (i.e. makes quiet) determines the specific sound, or **timbre**, of that instrument.
###Code
plt.plot(frequencies, np.abs(left_fft))
plt.xlim((240, 290))
plt.xlabel("Frequency [Hz]")
plt.ylabel("Amplitude")
plt.show()
###Output
_____no_output_____ |
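###Markdown
To close the loop, a sketch of reading the fundamental frequency programmatically: take the positive half of the spectrum and find the strongest peak below 1 kHz (for this sample it should land close to the 261,6 Hz of C4).
###Code
half = len(frequencies) // 2
band = (frequencies[:half] > 0) & (frequencies[:half] < 1000)
peak = np.argmax(np.abs(left_fft[:half][band]))
print(frequencies[:half][band][peak])
###Output
_____no_output_____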
discrete_fourier_transform/theorems.ipynb | ###Markdown
The Discrete Fourier Transform*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* TheoremsThe theorems of the discrete Fourier transform (DFT) relate basic operations applied to discrete signals to their equivalents in the spectral domain. They are of use to transform signals composed from modified [standard signals](../discrete_signals/standard_signals.ipynb), for the computation of the response of a linear time-invariant (LTI) system and to predict the consequences of modifying a signal or system by certain operations. Convolution TheoremThe DFT $X[\mu] = \text{DFT}_N \{ x[k] \}$ and its inverse $x[k] = \text{IDFT}_N \{ X[\mu] \}$ are both periodic with period $N$. The linear convolution of two periodic signals is not defined. The periodic convolution introduced in the following is used instead for the convolution theorem of the DFT. Periodic ConvolutionThe [periodic (or circular/cyclic) convolution](https://en.wikipedia.org/wiki/Circular_convolution) of two finite-length signals $x[k]$ and $h[k]$ is defined as\begin{equation}x[k] \circledast_P h[k] = \sum_{\kappa=0}^{P-1} \tilde{x}_P[k - \kappa] \; \tilde{h}_P[\kappa] =\sum_{\kappa=0}^{P-1} \tilde{x}_P[\kappa] \; \tilde{h}_P[k - \kappa]\end{equation}where $\circledast_P$ denotes the periodic convolution with period $P$. The periodic summations $\tilde{x}_P[k]$ of $x[k]$ and $\tilde{h}_P[k]$ of $h[k]$ with period $P$ are defined as\begin{align}\tilde{x}_P[k] &= \sum_{\nu = -\infty}^{\infty} x[\nu \cdot P + k] \\\tilde{h}_P[k] &= \sum_{\nu = -\infty}^{\infty} h[\nu \cdot P + k]\end{align}The result of the circular convolution has a period of $P$. The periodic convolution of two signals is in general different from their linear convolution.For the special case that the length of one or both of the signals $x[k]$ and $h[k]$ is smaller than or equal to the period $P$, the periodic summation degenerates to a periodic continuation of the signal(s). Furthermore, the periodic continuation only has to be performed for the shifted signal in the above convolution sum. For this special case, the periodic convolution is often termed **cyclic convolution**. **Example - Periodic vs. linear convolution**The periodic $y_1[k] = x[k] \circledast_P h[k]$ and linear $y_2[k] = x[k] * h[k]$ convolution of two rectangular signals $x[k] = \mathrm{rect}_M[k]$ and $h[k] = \mathrm{rect}_N[k]$ is numerically evaluated. For this purpose, helper functions are defined that implement the periodic summation and convolution.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def periodic_summation(x, P):
'Zero-padding to length P or periodic summation with period P.'
N = len(x)
rows = int(np.ceil(N/P))
if (N < int(P*rows)):
x = np.pad(x, (0, int(P*rows-N)), 'constant')
x = np.reshape(x, (rows, P))
return np.sum(x, axis=0)
def periodic_convolve(x, y, P):
'Periodic convolution of two signals x and y with period P.'
x = periodic_summation(x, P)
h = periodic_summation(y, P)
return np.array([np.dot(np.roll(x[::-1], k+1), h) for k in range(P)], float)
###Output
_____no_output_____
###Markdown
Now the signals are defined, the convolutions are computed and the signals plotted. Note, for the periodic signals $\tilde{x}_P[k]$ and $y_1[k]$ only one period is shown.
###Code
M = 32 # length of signal x[k]
N = 16 # length of signal h[k]
P = 24 # period of periodic convolution
def rect(k, N):
return np.where((0 <= k) & (k < N), 1.0, 0.0)
# generate signals
k = np.arange(M+N-1)
x = .5 * rect(k, M)
h = rect(k, N)
# periodic convolution
y1 = periodic_convolve(x, h, P)
# linear convolution
y2 = np.convolve(x, h, 'full')
# plot results
plt.figure()
plt.stem(periodic_summation(x, P), linefmt='C0-',
markerfmt='C0o', label=r'$\tilde{x}_P[k]$')
plt.stem(x, linefmt='C1--', markerfmt='C1.', label=r'$x[k]$')
plt.xlabel(r'$k$')
plt.xlim([0, M+N-1])
plt.legend()
plt.grid()
plt.figure()
plt.stem(y1, linefmt='C1-', markerfmt='C1o',
label=r'periodic convolution $P={}$'.format(P))
plt.stem(y2, linefmt='C0--', markerfmt='C0.', label=r'linear convolution')
plt.xlabel(r'$k$')
plt.xlim([0, M+N-1])
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
**Exercise*** Change the length $M$ of the rectangular signal $x[k]$. How does the result of the periodic convolution change?* Compare the result of the periodic convolution with the result of the linear convolution. For which values of $P$ are both the same? Convolution TheoremThe convolution theorem states that the DFT of the cyclic convolution of two discrete signals $x[k]$ and $y[k]$ is equal to the scalar multiplication of their DFTs $X[\mu] = \text{DFT}_N \{ x[k] \}$ and $Y[\mu] = \text{DFT}_N \{ y[k] \}$\begin{equation}\text{DFT}_N \{ x[k] \circledast_N y[k] \} = X[\mu] \cdot Y[\mu]\end{equation}for $k, \mu =0,1, \dots, N-1$.The theorem can be proven by introducing the definition of the periodic convolution into the [definition of the DFT](definition.ipynb) and changing the order of summation\begin{align}\text{DFT}_N \{ x[k] \circledast_N y[k] \} &= \sum_{k = 0}^{N-1} \left( \sum_{\kappa = 0}^{N-1} \tilde{x}_N[\kappa] \cdot \tilde{y}_N[k - \kappa] \right) w_N^{\mu k} \\&= \sum_{\kappa = 0}^{N-1} \left( \sum_{k = 0}^{N-1} \tilde{y}_N[k - \kappa] \, w_N^{\mu k} \right) \tilde{x}_N[\kappa] \\&= Y[\mu] \cdot \sum_{\kappa = 0}^{N-1} \tilde{x}_N[\kappa] \, w_N^{\mu \kappa} \\&= Y[\mu] \cdot X[\mu]\end{align}Note, $\text{DFT}_N \{ x[k] \} = \text{DFT}_N \{ \tilde{x}_N[k] \}$ due to the periodicity of the DFT.It can be concluded from the convolution theorem that a scalar multiplication of the two spectra results in a cyclic convolution of the corresponding signals. For a linear time-invariant (LTI) system, the output signal is given as the linear convolution of the input signal $x[k]$ with the impulse response $h[k] = \mathcal{H} \{ \delta[k] \}$. The convolution theorem cannot be applied straightforwardly to the computation of the output signal of an LTI system. The [fast convolution technique](fast_convolution.ipynb), introduced later, provides an efficient algorithm for the linear convolution of two signals based on the convolution theorem. Shift TheoremSince the convolution theorem of the DFT is given in terms of the cyclic convolution, the shift theorem of the DFT is given in terms of the periodic shift. The [periodic (circular) shift](https://en.wikipedia.org/wiki/Circular_shift) of a causal signal $x[k]$ of finite length $N$ can be expressed by a cyclic convolution with a shifted Dirac impulse\begin{equation}x[k - \kappa] = x[k] \circledast_N \delta[k - \kappa]\end{equation}for $\kappa \in 0,1,\dots, N-1$. This follows from the definition of the cyclic convolution in combination with the sifting property of the Dirac impulse. Applying the DFT to the left- and right-hand side and exploiting the convolution theorem yields\begin{equation}\text{DFT}_N \{ x[k - \kappa] \} = X[\mu] \cdot e^{-j \mu \frac{2 \pi}{N} \kappa}\end{equation}where $X[\mu] = \text{DFT}_N \{ x[k] \}$. The above relation is known as the shift theorem of the DFT.Expressing the DFT $X[\mu] = |X[\mu]| \cdot e^{j \varphi[\mu]}$ by its absolute value $|X[\mu]|$ and phase $\varphi[\mu]$ results in\begin{equation}\text{DFT}_N \{ x[k - \kappa] \} = |X[\mu]| \cdot e^{j (\varphi[\mu] - \mu \frac{2 \pi}{N} \kappa)}\end{equation}The periodic shift of a signal does not change the absolute value of its spectrum but subtracts the linear contribution $\mu \frac{2 \pi}{N} \kappa$ from its phase.
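Before the shift theorem example, a quick numerical check of the convolution theorem with the helper functions defined above (a sketch): the DFT of the cyclic convolution should equal the element-wise product of the DFTs.
###Code
N = 16
x = np.random.rand(N)
y = np.random.rand(N)
lhs = np.fft.fft(periodic_convolve(x, y, N))
rhs = np.fft.fft(x) * np.fft.fft(y)
print(np.allclose(lhs, rhs))  # expected: True
###Output
_____no_output_____
###Markdown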
**Example - Shifting a signal in the spectral domain**A cosine signal $x[k] = \cos(\Omega_0 k)$ is shifted in the spectral domain by multiplying its spectrum $X[\mu] = \text{DFT}_N \{ x[k] \}$ with $e^{-j \mu \frac{2 \pi}{N} \kappa}$ followed by an IDFT.
###Code
from scipy.linalg import dft
N = 16 # length of signals/DFT
M = 1 # number of periods for cosine
kappa = 2 # shift
# generate signal
W0 = M * 2*np.pi/N
k = np.arange(N)
x = np.cos(W0 * k)
# compute DFT
F = dft(N)
mu = np.arange(N)
X = np.matmul(F, x)
# shift in spectral domain and IDFT
X2 = X * np.exp(-1j * mu * 2*np.pi/N * kappa)
IF = 1/N * np.conjugate(np.transpose(F))
x2 = np.matmul(IF, X2)
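# Sanity check (a sketch): the spectral phase ramp is equivalent to a periodic shift by kappa
assert np.allclose(np.real(x2), np.roll(x, kappa))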
# plot signals
plt.stem(k, x, linefmt='C0-', markerfmt='C0o', label='$x[k]$')
plt.stem(k, np.real(x2), linefmt='C1-', markerfmt='C1o', label='$x[k - {}]$'.format(kappa))
plt.xlabel('$k$')
plt.legend(loc=9)
###Output
_____no_output_____
###Markdown
Multiplication TheoremThe transform of a multiplication of two signals $x[k] \cdot y[k]$ is derived by introducing the signals into the definition of the DFT, expressing the signal $x[k]$ by its spectrum $X[\mu] = \text{IDFT}_N \{ x[k] \}$ and rearranging terms\begin{align}\text{DFT}_N \{ x[k] \cdot y[k] \} &= \sum_{k=0}^{N-1} x[k] \cdot y[k] \, w_N^{\mu k} \\&= \sum_{k=0}^{N-1} \left( \frac{1}{N} \sum_{\nu=0}^{N-1} X[\nu] \, w_N^{-\nu k} \right) y[k] \, w_N^{\mu k} \\&= \frac{1}{N} \sum_{\nu=0}^{N-1} X[\nu] \sum_{k=0}^{N-1} y[k] \, w_N^{(\mu - \nu) k} \\&= \frac{1}{N} \sum_{\nu=0}^{N-1} X[\nu] \cdot Y[\mu - \nu] \\&= X[\mu] \circledast_N Y[\mu]\end{align}where $Y[\mu] = \text{IDFT}_N \{ y[k] \}$ and $k, \mu = 0,1,\dots,N-1$. Note, the last equality follows from the periodicity of the inverse DFT. The DFT of a multiplication of two signals $x[k] \cdot y[k]$ is given by the cyclic convolution of their spectra $X[\mu]$ and $Y[\mu]$ weighted by $\frac{1}{N}$. The cyclic convolution has a period of $N$ and it is performed with respect to the frequency index $\mu$.Applications of the multiplication theorem include the modulation and windowing of signals. The former leads to the modulation theorem introduced in the following. Modulation TheoremThe complex modulation of a signal $x[k]$ is defined as $e^{j \Omega_0 k} \cdot x[k]$ with $\Omega_0 = M \frac{2 \pi}{N}$, $M \in \mathbb{Z}$. The DFT of the modulated signal is derived by applying the multiplication theorem\begin{equation}\text{DFT}_N \left\{ e^{j M \frac{2 \pi}{N} k} \cdot x[k] \right\} = \delta[\mu - M] \circledast_N X[\mu] = X[\mu - M]\end{equation}where $X[\mu] = \text{DFT}_N \{ x[k] \}$ and $X[\mu - M]$ denotes the periodic shift of $X[\mu]$. Above result states that the complex modulation of a signal leads to a periodic shift of its spectrum. This result is known as modulation theorem. **Example - Decimation of a signal**An example for the application of the modulation theorem is the [downsampling/decimation](https://en.wikipedia.org/wiki/Decimation_(signal_processing)) of a discrete signal $x[k]$. Downsampling refers to lowering the sampling rate of a signal. The example focuses on the special case of removing every second sample, hence halving the sampling rate. The downsampling is modeled by defining a signal $x_\frac{1}{2}[k]$ where every second sample is set to zero\begin{equation}x_\frac{1}{2}[k] = \begin{cases} x[k] & \text{for even } k \\0 & \text{for odd } k\end{cases}\end{equation}In order to derive the spectrum $X_\frac{1}{2}[\mu] = \text{DFT}_N \{ x_\frac{1}{2}[k] \}$ for even $N$, the signal $u[k]$ is introduced where every second sample is zero\begin{equation}u[k] = \frac{1}{2} ( 1 + e^{j \pi k} ) = \begin{cases} 1 & \text{for even } k \\0 & \text{for odd } k \end{cases}\end{equation}Using $u[k]$, the process of setting every second sample of $x[k]$ to zero can be expressed as\begin{equation}x_\frac{1}{2}[k] = u[k] \cdot x[k]\end{equation}Now the spectrum $X_\frac{1}{2}[\mu]$ is derived by applying the multiplication theorem and introducing the [DFT of the exponential signal](definition.ipynbTransformation-of-the-Exponential-Signal). This results in\begin{equation}X_\frac{1}{2}[\mu] = \frac{1}{N} \left( \frac{N}{2} \delta[\mu] + \frac{N}{2} \delta[\mu - \frac{N}{2}] \right) \circledast X[\mu] =\frac{1}{2} X[\mu] + \frac{1}{2} X[\mu - \frac{N}{2}]\end{equation}where $X[\mu] = \text{DFT}_N \{ x[k] \}$. 
The spectrum $X_\frac{1}{2}[\mu]$ consists of the spectrum of the original signal $X[\mu]$ superimposed by the shifted spectrum $X[\mu - \frac{N}{2}]$ of the original signal. This may lead to overlaps that constitute aliasing. In order to avoid aliasing, the spectrum of the signal $x[k]$ has to be band-limited to $0 < \mu < \frac{N}{2}$ before downsampling. The decimation of a complex exponential signal is illustrated in the following. The signal $x[k] = \cos (\Omega_0 k)$ is decimated by setting every second sample to zero. The DFT of the original signal and decimated signal is computed and their magnitudes are plotted for illustration.
###Code
N = 16 # length of signals/DFT
M = 3.3 # number of periods for the complex exponential
W0 = M*2*np.pi/N
k = np.arange(N)
x = np.exp(1j*W0*k)
x2 = np.copy(x)
x2[::2] = 0
F = dft(N)
X = np.matmul(F, x)
X2 = np.matmul(F, x2)
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.stem(abs(X))
plt.xlabel('$\mu$')
plt.ylabel(r'|$X[\mu]$|')
plt.subplot(1,2,2)
plt.stem(abs(X2))
plt.xlabel('$\mu$')
plt.ylabel(r'|$X_{1/2}[\mu]$|');
###Output
_____no_output_____
###Markdown
The Discrete Fourier Transform*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelor's module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* TheoremsThe theorems of the discrete Fourier transform (DFT) relate basic operations applied to discrete signals to their equivalents in the spectral domain. They are of use to transform signals composed from modified [standard signals](../discrete_signals/standard_signals.ipynb), for the computation of the response of a linear time-invariant (LTI) system and to predict the consequences of modifying a signal or system by certain operations. Convolution TheoremThe DFT $X[\mu] = \text{DFT}_N \{ x[k] \}$ and its inverse $x[k] = \text{IDFT}_N \{ X[\mu] \}$ are both periodic with period $N$. The linear convolution of two periodic signals is not defined. The periodic convolution introduced in the following is used instead for the convolution theorem of the DFT. Periodic ConvolutionThe [periodic (or circular/cyclic) convolution](https://en.wikipedia.org/wiki/Circular_convolution) of two aperiodic signals $h[k]$ and $g[k]$ is defined as\begin{equation}h[k] \circledast_N g[k] = \sum_{\kappa = -\infty}^{\infty} h[\kappa] \cdot \tilde{g}[k-\kappa]\end{equation}where $\tilde{g}[k]$ denotes the periodic summation of $g[k]$ with period $N$\begin{equation}\tilde{g}[k] = \sum_{\nu = -\infty}^{\infty} g[k - \nu N]\end{equation}The result of the periodic convolution is periodic with period $N$. If the signal $h[k]$ is causal and of finite length $N$, the infinite summation of the convolution degenerates to\begin{equation}h[k] \circledast_N g[k] = \sum_{\kappa = 0}^{N-1} h[\kappa] \cdot \tilde{g}[k-\kappa]\end{equation}The same relation holds if $h[k]$ is periodic with period $N$. **Example**The periodic convolution of two finite-length signals $h[k]$ and $g[k]$ can be expressed as the matrix/vector multiplication $\mathbf{y} = \mathbf{H} \, \mathbf{g}$ where the matrix $\mathbf{H}$ is given as the [circulant matrix](https://en.wikipedia.org/wiki/Circulant_matrix) containing the samples of the signal $h[k]$ in its first column and the vector $\mathbf{g}$ contains the samples of the signal $g[k]$. The resulting vector $\mathbf{y}$ is composed from the samples $y[k] = h[k] \circledast_N g[k]$ for one period $k=0,1,\dots, N-1$ of the periodic convolution. This is illustrated by the numerical evaluation of the periodic convolution $y[k] = \text{rect}_M[k] \circledast_N \text{rect}_M[k]$ of the rectangular signal by itself.
###Code
%matplotlib inline
import numpy as np
from scipy.linalg import circulant
import matplotlib.pyplot as plt
def rect(k, N):
return np.where((0 <= k) & (k < N), 1.0, 0.0)
N = 16
M = 6
k = np.arange(N)
g = rect(k, M)
H = circulant(g)
y = np.matmul(H, g)
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.stem(g)
plt.xlabel('$k$')
plt.ylabel('$\mathrm{rect}_M[k]$')
plt.gca().margins(y=0.1)
plt.subplot(1,2,2)
plt.stem(y)
plt.xlabel('$k$')
plt.ylabel('$y[k]$');
plt.gca().margins(y=0.1)
###Output
_____no_output_____
###Markdown
**Exercise*** Change the length $M$ of the rectangular signal. How does the result of the periodic convolution change?* Compare the result of the periodic convolution with the result of the linear convolution. For which values of $M$ is the result the same? Convolution TheoremThe convolution theorem states that the DFT of the periodic convolution of two discrete signals $x[k]$ and $y[k]$ is equal to the scalar multiplication of their DFTs $X[\mu] = \text{DFT}_N \{ x[k] \}$ and $Y[\mu] = \text{DFT}_N \{ y[k] \}$\begin{equation}\text{DFT}_N \{ x[k] \circledast_N y[k] \} = X[\mu] \cdot Y[\mu]\end{equation}for $k, \mu =0,1, \dots, N-1$.The theorem can be proven by introducing the definition of the periodic convolution into the [definition of the DFT](definition.ipynb) and changing the order of summation\begin{align}\text{DFT} \{ x[k] \circledast_N y[k] \} &= \sum_{k = 0}^{N-1} \left( \sum_{\kappa = 0}^{N-1} x[\kappa] \cdot \tilde{y}[k - \kappa] \right) w_N^{\mu k} \\&= \sum_{\kappa = 0}^{N-1} \left( \sum_{k = 0}^{N-1} \tilde{y}[k - \kappa] \, w_N^{\mu k} \right) x[\kappa] \\&= Y[\mu] \cdot \sum_{\kappa = 0}^{N-1} x[\kappa] \, w_N^{\mu \kappa} \\&= Y[\mu] \cdot X[\mu]\end{align}It can be concluded from the convolution theorem that a scalar multiplication of the two spectra results in a circular convolution of the corresponding signals. For a linear time-invariant (LTI) system, the output signal is given as the linear convolution of the input signal $x[k]$ with the impulse response $h[k] = \mathcal{H} \{ \delta[k] \}$. The convolution theorem cannot be applied straightforward for the computation of the output signal of an LTI system. The [fast convolution technique](fast_convolution.ipynb) introduced later provides an efficient algorithm for the linear convolution of two signals using the convolution theorem. Shift TheoremSince the convolution theorem of the DFT is given in terms of the periodic convolution, the shift theorem of the DFT is given in terms of the periodic shift. The [periodic (circular) shift](https://en.wikipedia.org/wiki/Circular_shift) of a causal signal $x[k]$ of finite length $N$ can be expressed by a periodic convolution with a shifted Dirac impulse\begin{equation}x[k - \kappa] = x[k] \circledast_N \delta[k - \kappa]\end{equation}for $\kappa \in 0,1,\dots, N-1$. This follows from the definition of the periodic convolution in combination with the sifting property of the Dirac impulse. Applying the DFT to the left- and right-hand side and exploiting the convolution theorem yields\begin{equation}\text{DFT}_N \{ x[k - \kappa] \} = X[\mu] \cdot e^{-j \mu \frac{2 \pi}{N} \kappa}\end{equation}where $X[\mu] = \text{DFT}_N \{ x[k] \}$. Above relation is known as shift theorem of the DFT.Expressing the DFT $X[\mu] = |X[\mu]| \cdot e^{j \varphi[\mu]}$ by its absolute value $|X[\mu]|$ and phase $\varphi[\mu]$ results in\begin{equation}\text{DFT}_N \{ x[k - \kappa] \} = |X[\mu]| \cdot e^{j (\varphi[\mu] - \mu \frac{2 \pi}{N} \kappa)}\end{equation}The periodic shift of a signal does not change the absolute value of its spectrum but subtracts the linear contribution $\mu \frac{2 \pi}{N} \kappa$ from its phase. **Example**A cosine signal $x[k] = \cos(\Omega_0 k)$ is shifted in the spectral domain by multiplying its spectrum $X[\mu] = \text{DFT}_N \{ x[k] \}$ with $e^{-j \mu \frac{2 \pi}{N} \kappa}$ followed by an IDFT.
###Code
from scipy.linalg import dft
N = 16
M = 1
kappa = 2
W0 = M * 2*np.pi/N
k = np.arange(N)
x = np.cos(W0 * k)
F = dft(N)
mu = np.arange(N)
X = np.matmul(F, x)
X2 = X * np.exp(-1j * mu * 2*np.pi/N * kappa)
IF = 1/N * np.conjugate(np.transpose(F))
x2 = np.matmul(IF, X2)
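# Added sanity check: by the shift theorem derived above, the spectral-domain
# multiplication should equal a periodic time shift of x by kappa samples
print(np.allclose(np.real(x2), np.roll(x, kappa)))  # expected: True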
plt.stem(k, x, linefmt='b-', markerfmt='bo', label='$x[k]$')
plt.stem(k, np.real(x2), linefmt='r-', markerfmt='ro', label=r'$x[k - \kappa]$')
plt.xlabel('$k$')
plt.legend(loc=9);
###Output
_____no_output_____
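###Markdown
As a quick numerical sanity check of the convolution theorem, the following cell (a minimal sketch; the test signals are arbitrary random sequences) computes the periodic convolution directly and compares its DFT with the element-wise product of the individual DFTs.
###Code
N = 16
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y = rng.standard_normal(N)
# periodic convolution: x_conv_y[k] = sum over kappa of x[kappa] * y[(k - kappa) mod N]
x_conv_y = np.array([np.sum(x * y[(k - np.arange(N)) % N]) for k in range(N)])
F = dft(N)
# convolution theorem: the DFT of the periodic convolution equals the product of the DFTs
print(np.allclose(F @ x_conv_y, (F @ x) * (F @ y)))  # expected: True
###Output
_____no_output_____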
###Markdown
Multiplication TheoremThe transform of a multiplication of two signals $x[k] \cdot y[k]$ is derived by introducing the signals into the definition of the DFT, expressing the signal $x[k]$ by the IDFT of its spectrum $X[\mu] = \text{DFT}_N \{ x[k] \}$ and rearranging terms\begin{align}\text{DFT}_N \{ x[k] \cdot y[k] \} &= \sum_{k=0}^{N-1} x[k] \cdot y[k] \, w_N^{\mu k} \\&= \sum_{k=0}^{N-1} \left( \frac{1}{N} \sum_{\nu=0}^{N-1} X[\nu] \, w_N^{-\nu k} \right) y[k] \, w_N^{\mu k} \\&= \frac{1}{N} \sum_{\nu=0}^{N-1} X[\nu] \sum_{k=0}^{N-1} y[k] \, w_N^{(\mu - \nu) k} \\&= \frac{1}{N} \sum_{\nu=0}^{N-1} X[\nu] \cdot Y[\mu - \nu] \\&= \frac{1}{N} \, X[\mu] \circledast_N Y[\mu]\end{align}where $Y[\mu] = \text{DFT}_N \{ y[k] \}$ and $k, \mu = 0,1,\dots,N-1$. The DFT of a multiplication of two signals $x[k] \cdot y[k]$ is given by the periodic convolution of their spectra $X[\mu]$ and $Y[\mu]$ weighted by $\frac{1}{N}$. The periodic convolution has a period of $N$ and it is performed with respect to the normalized angular frequency $\mu$.Applications of the multiplication theorem include the modulation and windowing of signals. The former leads to the modulation theorem introduced in the following. Modulation TheoremThe complex modulation of a signal $x[k]$ is defined as $e^{j \Omega_0 k} \cdot x[k]$ with $\Omega_0 = M \frac{2 \pi}{N}$, $M \in \mathbb{Z}$. The DFT of the modulated signal is derived by applying the multiplication theorem\begin{equation}\text{DFT}_N \left\{ e^{j M \frac{2 \pi}{N} k} \cdot x[k] \right\} = \delta[\mu - M] \circledast_N X[\mu] = X[\mu - M]\end{equation}where $X[\mu] = \text{DFT}_N \{ x[k] \}$ and $X[\mu - M]$ denotes the periodic shift of $X[\mu]$. The above result states that the complex modulation of a signal leads to a periodic shift of its spectrum. This result is known as the modulation theorem. **Example**An example for the application of the modulation theorem is the [downsampling/decimation](https://en.wikipedia.org/wiki/Decimation_(signal_processing&#041;) of a discrete signal $x[k]$. Downsampling refers to lowering the sampling rate of a signal. The example focuses on the special case of removing every second sample, hence halving the sampling rate. The downsampling is modeled by defining a signal $x_\frac{1}{2}[k]$ where every second sample is set to zero\begin{equation}x_\frac{1}{2}[k] = \begin{cases} x[k] & \text{for even } k \\0 & \text{for odd } k\end{cases}\end{equation}In order to derive the spectrum $X_\frac{1}{2}[\mu] = \text{DFT}_N \{ x_\frac{1}{2}[k] \}$ for even $N$, the signal $u[k]$ is introduced where every second sample is zero\begin{equation}u[k] = \frac{1}{2} ( 1 + e^{j \pi k} ) = \begin{cases} 1 & \text{for even } k \\0 & \text{for odd } k \end{cases}\end{equation}Using $u[k]$, the process of setting every second sample of $x[k]$ to zero can be expressed as\begin{equation}x_\frac{1}{2}[k] = u[k] \cdot x[k]\end{equation}Now the spectrum $X_\frac{1}{2}[\mu]$ is derived by applying the multiplication theorem and introducing the [DFT of the exponential signal](definition.ipynb#Transformation-of-the-Exponential-Signal). This results in\begin{equation}X_\frac{1}{2}[\mu] = \frac{1}{N} \left( \frac{N}{2} \delta[\mu] + \frac{N}{2} \delta[\mu - \frac{N}{2}] \right) \circledast_N X[\mu] =\frac{1}{2} X[\mu] + \frac{1}{2} X[\mu - \frac{N}{2}]\end{equation}where $X[\mu] = \text{DFT}_N \{ x[k] \}$. The spectrum $X_\frac{1}{2}[\mu]$ consists of the spectrum of the original signal $X[\mu]$ superimposed with the shifted spectrum $X[\mu - \frac{N}{2}]$ of the original signal. 
This may lead to overlaps that constitute aliasing. In order to avoid aliasing, the spectrum of the signal $x[k]$ has to be band-limited to $0 < \mu < \frac{N}{2}$ before downsampling. **Example**The subsampling of a complex exponential signal is illustrated in the following. The signal $x[k] = e^{j \Omega_0 k}$ is subsampled by setting every second sample to zero. The DFTs of the original and the subsampled signal are computed and their magnitudes are plotted for illustration.
###Code
N = 16
M = 3.3
W0 = M*2*np.pi/N
k = np.arange(N)
x = np.exp(1j*W0*k)
x2 = np.copy(x)
x2[1::2] = 0  # zero out the odd-indexed samples, keeping the even ones as in the derivation above
F = dft(N)
X = np.matmul(F, x)
X2 = np.matmul(F, x2)
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.stem(abs(X))
plt.xlabel(r'$\mu$')
plt.ylabel(r'|$X[\mu]$|')
plt.subplot(1,2,2)
plt.stem(abs(X2))
plt.xlabel(r'$\mu$')
plt.ylabel(r'|$X_{1/2}[\mu]$|');
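# Added sanity check of the relation derived above: the spectrum of the
# subsampled signal equals the average of the original spectrum and its
# copy periodically shifted by N/2 bins
print(np.allclose(X2, 0.5 * (X + np.roll(X, N // 2))))  # expected: True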
###Output
_____no_output_____ |
keras-ssd-master/.ipynb_checkpoints/ssd300_training-checkpoint.ipynb | ###Markdown
SSD300 Training TutorialThis tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that. The same goes for training on other datasets.You can find a summary of a full training here to get an impression of what it should look like:[SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
###Code
from keras.optimizers import Adam, SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
%matplotlib inline
###Output
/home/dlsaavedra/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
0. Preliminary noteAll places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed. 1. Set the model configuration parametersThis section sets the configuration parameters for the model definition. The parameters set here are being used both by the `ssd_300()` function that builds the SSD300 model as well as further down by the constructor for the `SSDInputEncoder` object that is needed to run the training. Most of these parameters are needed to define the anchor boxes.The parameters as set below produce the original SSD300 architecture that was trained on the Pascal VOC datasets, i.e. they are all chosen to correspond exactly to their respective counterparts in the `.prototxt` file that defines the original Caffe implementation. Note that the anchor box scaling factors of the original SSD implementation vary depending on the datasets on which the models were trained. The scaling factors used for the MS COCO datasets are smaller than the scaling factors used for the Pascal VOC datasets. The reason why the list of scaling factors has 7 elements while there are only 6 predictor layers is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details.As mentioned above, the parameters set below are not only needed to build the model, but are also passed to the `SSDInputEncoder` constructor further down, which is responsible for matching and encoding ground truth boxes and anchor boxes during the training. In order to do that, it needs to know the anchor box parameters.
###Code
img_height = 300 # Height of the model input images
img_width = 300 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 20 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True
###Output
_____no_output_____
###Markdown
2. Build or load the modelYou will want to execute either of the two code cells in the subsequent two sub-sections, not both. 2.1 Create a new model and load trained VGG-16 weights into it (or trained SSD weights)If you want to create a new SSD300 model, this is the relevant section for you. If you want to load a previously saved SSD300 model, skip ahead to section 2.2.The code cell below does the following things:1. It calls the function `ssd_300()` to build the model.2. It then loads the weights file that is found at `weights_path` into the model. You could load the trained VGG-16 weights or you could load the weights of a trained model. If you want to reproduce the original SSD training, load the pre-trained VGG-16 weights. In any case, you need to set the path to the weights file you want to load on your local machine. Download links to all the trained weights are provided in the [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository.3. Finally, it compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method.Normally, the optimizer of choice would be Adam (commented out below), but since the original implementation uses plain SGD with momentum, we'll do the same in order to reproduce the original training. Adam is generally the superior optimizer, so if your goal is not to have everything exactly as in the original training, feel free to switch to Adam. You might need to adjust the learning rate scheduler below slightly in case you use Adam.Note that the learning rate that is being set here doesn't matter, because further below we'll pass a learning rate scheduler to the training function, which will overwrite any learning rate set here, i.e. what matters are the learning rates that are defined by the learning rate scheduler.`SSDLoss` is a custom Keras loss function that implements the multi-task loss that consists of a log loss for classification and a smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper.
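For reference, the total loss computed by `SSDLoss` has the form given in the SSD paper (a sketch of the formula for orientation, not of the exact code):\begin{equation}L = \frac{1}{N} \left( L_{\text{conf}} + \alpha L_{\text{loc}} \right)\end{equation}where $N$ is the number of matched default boxes, $L_{\text{conf}}$ is the softmax log loss over the class confidences, $L_{\text{loc}}$ is the smooth L1 loss over the predicted box offsets, and $\alpha$ is the weighting factor set in the code below.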
###Code
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
# 2: Load some weights into the model.
# TODO: Set the path to the weights you want to load.
weights_path = 'path/to/VGG_ILSVRC_16_layers_fc_reduced.h5'
model.load_weights(weights_path, by_name=True)
# 3: Instantiate an optimizer and the SSD loss function and compile the model.
# If you want to follow the original Caffe implementation, use the preset SGD
# optimizer, otherwise I'd recommend the commented-out Adam optimizer.
#adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=sgd, loss=ssd_loss.compute_loss)
###Output
_____no_output_____
###Markdown
2.2 Load a previously created modelIf you have previously created and saved a model and would now like to load it, execute the next code cell. The only thing you need to do here is to set the path to the saved model HDF5 file that you would like to load.The SSD model contains custom objects: Neither the loss function nor the anchor box or L2-normalization layer types are contained in the Keras core library, so we need to provide them to the model loader.This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below.
###Code
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = 'path/to/trained/model.h5'
# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
K.clear_session() # Clear previous models from memory.
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
'L2Normalization': L2Normalization,
'compute_loss': ssd_loss.compute_loss})
###Output
_____no_output_____
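###Markdown
If the saved model was created in 'inference' or 'inference_fast' mode instead, the decoder layer type has to be registered as well. A sketch (kept commented out so it doesn't overwrite `model`; both layer classes are already imported at the top of this notebook):
###Code
#model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
#                                               'L2Normalization': L2Normalization,
#                                               'DecodeDetections': DecodeDetections,
#                                               'compute_loss': ssd_loss.compute_loss})
###Output
_____no_output_____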
###Markdown
3. Set up the data generators for the trainingThe code cells below set up the data generators for the training and validation datasets to train the model. The settings below reproduce the original SSD training on Pascal VOC 2007 `trainval` plus 2012 `trainval` and validation on Pascal VOC 2007 `test`.The only things you need to change here are the filepaths to the datasets on your local machine. Note that parsing the labels from the XML annotations files can take a while.Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images (around 9 GB total for Pascal VOC 2007 `trainval` plus 2012 `trainval` and another 2.6 GB for 2007 `test`). You can later load these HDF5 datasets directly in the constructor.The original SSD implementation uses a batch size of 32 for the training. In case you run into GPU memory issues, reduce the batch size accordingly. You need at least 7 GB of free GPU memory to train an SSD300 with 20 object classes with a batch size of 32.The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.The data augmentation settings defined further down reproduce the data augmentation pipeline of the original SSD training. The training generator receives an object `ssd_data_augmentation`, which is a transformation object that is itself composed of a whole chain of transformations that replicate the data augmentation procedure used to train the original Caffe implementation. The validation generator receives an object `resize`, which simply resizes the input images.An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. 
As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs.In order to train the model on a dataset other than Pascal VOC, either choose `DataGenerator`'s appropriate parser method that corresponds to your data format, or, if `DataGenerator` does not provide a suitable parser for your data format, you can write an additional parser and add it. Out of the box, `DataGenerator` can handle datasets that use the Pascal VOC format (use `parse_xml()`), the MS COCO format (use `parse_json()`) and a wide range of CSV formats (use `parse_csv()`).
###Code
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets. This can take a while.
# TODO: Set the paths to the datasets here.
# The directories that contain the images.
VOC_2007_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages/'
VOC_2012_images_dir = '../../datasets/VOCdevkit/VOC2012/JPEGImages/'
# The directories that contain the annotations.
VOC_2007_annotations_dir = '../../datasets/VOCdevkit/VOC2007/Annotations/'
VOC_2012_annotations_dir = '../../datasets/VOCdevkit/VOC2012/Annotations/'
# The paths to the image sets.
VOC_2007_train_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/train.txt'
VOC_2012_train_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/train.txt'
VOC_2007_val_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/val.txt'
VOC_2012_val_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/val.txt'
VOC_2007_trainval_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt'
VOC_2012_trainval_image_set_filename = '../../datasets/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt'
VOC_2007_test_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/test.txt'
# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir,
VOC_2012_images_dir],
image_set_filenames=[VOC_2007_trainval_image_set_filename,
VOC_2012_trainval_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir,
VOC_2012_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_test_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=True,
ret=False)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
train_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07+12_trainval.h5',
resize=False,
variable_image_size=True,
verbose=True)
val_dataset.create_hdf5_dataset(file_path='dataset_pascal_voc_07_test.h5',
resize=False,
variable_image_size=True,
verbose=True)
# 3: Set the batch size.
batch_size = 32 # Change the batch size if you like, or if you run into GPU memory issues.
# 4: Set the image transformations for pre-processing and data augmentation options.
# For the training generator:
ssd_data_augmentation = SSDDataAugmentation(img_height=img_height,
img_width=img_width,
background=mean_color)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
model.get_layer('fc7_mbox_conf').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validation datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
###Output
Number of images in the training dataset: 16551
Number of images in the validation dataset: 4952
###Markdown
4. Set the remaining training parametersWe've already chosen an optimizer and set the batch size above, now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the learning rate schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model. That model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit.I'll set only a few essential Keras callbacks below, feel free to add more callbacks if you want TensorBoard summaries or whatever. We obviously need the learning rate scheduler and we want to save the best models during the training. It also makes sense to continuously stream our training history to a CSV log file after every epoch, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Finally, we'll also add a callback that makes sure that the training terminates if the loss becomes `NaN`. Depending on the optimizer you use, it can happen that the loss becomes `NaN` during the first iterations of the training. In later iterations it's less of a risk. For example, I've never seen a `NaN` loss when I trained SSD using an Adam optimizer, but I've seen a `NaN` loss a couple of times during the very first couple of hundred training steps of training a new model when I used an SGD optimizer.
###Code
# Define a learning rate schedule.
def lr_schedule(epoch):
if epoch < 80:
return 0.001
elif epoch < 100:
return 0.0001
else:
return 0.00001
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='ssd300_pascal_07+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
#model_checkpoint.best =
csv_logger = CSVLogger(filename='ssd300_pascal_07+12_training_log.csv',
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
verbose=1)
terminate_on_nan = TerminateOnNaN()
callbacks = [model_checkpoint,
csv_logger,
learning_rate_scheduler,
terminate_on_nan]
###Output
_____no_output_____
###Markdown
5. Train In order to reproduce the training of the "07+12" model mentioned above, at 1,000 training steps per epoch you'd have to train for 120 epochs. That is going to take really long though, so you might not want to do all 120 epochs in one go and instead train only for a few epochs at a time. You can find a summary of a full training [here](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md).In order to only run a partial training and resume smoothly later on, there are a few things you should note:1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.2. In order for the learning rate scheduler callback above to work properly, `fit_generator()` needs to know which epoch we're in, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.3. In order for the model checkpoint callback above to work correctly after a kernel restart, set `model_checkpoint.best` to the best validation loss from the previous training. If you don't do this and a new `ModelCheckpoint` object is created after a kernel restart, that object obviously won't know what the last best validation loss was, so it will always save the weights of the first epoch of your new training and record that loss as its new best loss. This isn't super-important, I just wanted to mention it.
###Code
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 120
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
###Output
_____no_output_____
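###Markdown
For example, to resume the training described above after 10 completed epochs, the settings would look as follows (a sketch, kept commented out; the value assigned to `model_checkpoint.best` is hypothetical and should be taken from your previous training log):
###Code
#initial_epoch = 10 # Epoch 10 is the eleventh epoch, since the parameter is zero-based.
#final_epoch = 20 # Train for another 10 epochs.
#model_checkpoint.best = 4.8 # Hypothetical: the best val_loss recorded in the previous run.
###Output
_____no_output_____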
###Markdown
6. Make predictionsNow let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator that we've already set up above. Feel free to change the batch size.You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.
###Code
# 1: Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=1,
shuffle=True,
transformations=[convert_to_3_channels,
resize],
label_encoder=None,
returns={'processed_images',
'filenames',
'inverse_transform',
'original_images',
'original_labels'},
keep_images_without_gt=False)
# 2: Generate samples.
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(np.array(batch_original_labels[i]))
# 3: Make predictions.
y_pred = model.predict(batch_images)
###Output
_____no_output_____
###Markdown
Now let's decode the raw predictions in `y_pred`.Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.
###Code
# 4: Decode the raw predictions in `y_pred`.
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.5,
iou_threshold=0.4,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
###Output
_____no_output_____
###Markdown
We made the predictions on the resized images, but we'd like to visualize the outcome on the original input images, so we'll convert the coordinates accordingly. Don't worry about that opaque `apply_inverse_transforms()` function below, in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.
###Code
# 5: Convert the predictions for the original image.
y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded_inv[i])
###Output
Predicted boxes:
class conf xmin ymin xmax ymax
[[ 9. 0.8 364.79 5.24 496.51 203.59]
[ 12. 1. 115.44 50. 384.22 330.76]
[ 12. 0.86 68.99 212.78 331.63 355.72]
[ 15. 0.95 2.62 20.18 235.83 253.07]]
###Markdown
Finally, let's draw the predicted boxes onto the image. Each predicted box says its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison.
###Code
# 6: Draw the predicted boxes onto the image
# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
classes = ['background',
'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat',
'chair', 'cow', 'diningtable', 'dog',
'horse', 'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
plt.figure(figsize=(20,12))
plt.imshow(batch_original_images[i])
current_axis = plt.gca()
for box in batch_original_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
for box in y_pred_decoded_inv[i]:
xmin = box[2]
ymin = box[3]
xmax = box[4]
ymax = box[5]
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
###Output
_____no_output_____ |
analysis/Split-halves analysis.ipynb | ###Markdown
Split Halves AnalysisThis notebook will conduct an analysis of inter-rater reliability using split-halves analysis1. We first split the dataset into our two conditions: interesting/stable2. We then randomly assign each rater's ratings to one of two groups.3. We calculate the mean rating of each group for each tower4. Then take the correlation of the two group means across towers.We run this process many times to get a sampling distribution of correlations, then compare the mean correlation (and CI) of stable to interesting using a t-test
###Code
import pandas as pd
import numpy as np
from statistics import mean
%precision %.2f
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Read in the most recent data
###Code
df = pd.read_csv('curiotower_raw_data_run_0.csv')
print(df.shape)
df.head(2)
###Output
(24048, 47)
###Markdown
split into two conditions
###Code
df_stable = df[df['condition'] == 'stable']
df_interesting = df[df['condition'] == 'interesting']
###Output
_____no_output_____
###Markdown
Create dummy df of prolific IDs for randomization
###Code
df_subject = pd.DataFrame(df['prolificID'].unique(),
columns = ['prolificID'])
group_num = np.random.randint(2, size=len(df_subject['prolificID']))
df_subject['group_num'] = pd.Series(group_num)
df_subject
df_test = pd.merge(df, df_subject, left_on='prolificID', right_on='prolificID', how='left')
df_test[["prolificID", 'group_num']].head()
###Output
_____no_output_____
###Markdown
split-halves design
###Code
conditions = ['stable', 'interesting']
corr_stable = []
corr_interesting = []
for condition in conditions:
print('sampling from:', condition)
df_condition = df[df['condition'] == condition]
for i in range(0,1000):
df_subject = pd.DataFrame(df['prolificID'].unique(),
columns = ['prolificID'])
rand_group = np.random.randint(2, size=len(df_subject['prolificID']))
df_subject['rand_group'] = pd.Series(rand_group)
df_condition_rand = pd.merge(df_condition, df_subject, left_on='prolificID', right_on='prolificID', how='left')
# rand_group = np.random.randint(2, size=len(df_condition['towerID']))
# df_condition['rand_group'] = pd.Series(rand_group)
out = df_condition_rand.pivot_table(index=["towerID"],
columns='rand_group',
values='button_pressed',
aggfunc='mean').reset_index()
out.columns = ['towerID', 'group0', 'group1']
sample_corr = out['group0'].corr(out['group1'])
        if condition == 'stable':
            #corr_stable.append(sample_corr)
            # Spearman-Brown correction (applied in both conditions)
            corr_stable.append(2*sample_corr/(1+sample_corr))
        elif condition == 'interesting':
            #corr_interesting.append(sample_corr)
            # Spearman-Brown correction
            corr_interesting.append(2*sample_corr/(1+sample_corr))
plt.xlim([min(corr_stable + corr_interesting)-0.01, max(corr_stable + corr_interesting)+0.01])
plt.hist(corr_stable, alpha=0.5, label = 'stable')
plt.hist(corr_interesting, alpha = 0.5, color = 'orange', label = 'interesting')
plt.title('Sample Distritbutions of Corr for Conditions')
plt.xlabel('Corr')
plt.ylabel('count')
plt.legend()
plt.show()
print("Mean prop for stable:",
round(mean(corr_stable),3),
"+/-",
round((1.96*np.std(corr_stable)/
np.sqrt(len(corr_stable))),3))
print("Mean prop for interesting:",
round(mean(corr_interesting),3),
"+/-",
round((1.96*np.std(corr_interesting)/
np.sqrt(len(corr_interesting))),10))
###Output
Mean prop for stable: 0.997 +/- 0.0
Mean prop for interesting: 0.996 +/- 7.48176e-05
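###Markdown
The introduction above promises a comparison of the two conditions with a t-test. A minimal sketch using SciPy (assumption: Welch's unequal-variance t-test applied to the two sampling distributions of split-half correlations):
###Code
from scipy import stats
t_stat, p_val = stats.ttest_ind(corr_stable, corr_interesting, equal_var=False)
print("t = {:.3f}, p = {:.3g}".format(t_stat, p_val))
###Output
_____no_output_____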
###Markdown
Now calculate the r for each model and compute the proportion of variance explained
###Code
dat = df[df['condition'] == 'interesting']
dat.columns
dat = pd.get_dummies(dat, columns=['stability'])
dat.columns
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
regressors = ['num_blocks']
# regressors = ['num_blocks', 'stability_high', 'stability_low', 'stability_med']
#regressors = ['num_blocks', 'stability_high', 'stability_low', 'stability_med']
# X = dat[regressors]
# interaction = PolynomialFeatures(degree=3, include_bias=False, interaction_only=True)
# X = interaction.fit_transform(X)
regressors = ['num_blocks', 'stability_high', 'stability_low', 'stability_med']
X = dat[regressors]
y = dat['button_pressed']
##Sklearn regression
reg = LinearRegression().fit(X, y)
reg.score(X,y)
###Output
_____no_output_____
###Markdown
Mode analysisFor each condition and each tower, calculate the proportion of responses selecting the modal rating, then average across towers (and report the CI)
###Code
#annoying function to get min when equal counts for mode
def try_min(value):
try:
return min(value)
except:
return value
df_stable_mode = df[df['condition']=='stable'].groupby(['towerID']).agg(pd.Series.mode).reset_index()[['towerID','button_pressed']]
df_stable_mode['mode'] = df_stable_mode['button_pressed'].apply(lambda x: try_min(x))
conditions = ['stable', 'interesting']
towers = df['towerID'].unique()
mode_proportion_list_stable = []
mode_proportion_list_interesting = []
for condition in conditions:
print('sampling from:', condition)
df_condition = df[df['condition'] == condition]
df_mode = df_condition.groupby(['towerID']).agg(pd.Series.mode).reset_index()[['towerID','button_pressed']]
df_mode['mode'] = df_mode['button_pressed'].apply(lambda x: try_min(x))
for tower in towers:
mode_response = int(df_mode[df_mode['towerID'] == tower]['mode'])
prop = (len(df_condition.loc[(df_condition['towerID'] == tower) & (df_condition['button_pressed'] == mode_response)])/
len(df_condition.loc[(df_condition['towerID'] == tower)]))
if condition == 'stable':
mode_proportion_list_stable.append(prop)
elif condition == 'interesting':
mode_proportion_list_interesting.append(prop)
print("Mean prop for stable:",
round(mean(mode_proportion_list_stable),3),
"+/-",
round((1.96*np.std(mode_proportion_list_stable)/
np.sqrt(len(mode_proportion_list_stable))),3))
print("Mean prop for interesting:",
round(mean(mode_proportion_list_interesting),3),
"+/-",
round((1.96*np.std(mode_proportion_list_interesting)/
np.sqrt(len(mode_proportion_list_interesting))),3))
###Output
sampling from: stable
sampling from: interesting
Mean prop for stable: 0.516 +/- 0.027
Mean prop for interesting: 0.542 +/- 0.031
|
questions/q1_game_of_names/GameOfNames.ipynb | ###Markdown
Game of NamesOn a Sunday morning, some friends have gathered to play Game of Names. There is an N x N board and the players take turns. On the ith player's turn, that player places the first character of their name on one of the unfilled cells. The first player to place their character in 3 consecutive vertical, horizontal or diagonal cells is the winner.You are provided a board with some filled and some unfilled cells. You have to tell the winner. If there is no winner yet, you must print "Ongoing". Input format:The first line of input contains a single integer N (1 <= N <= 30), which denotes a board of size N x N. Each of the following N lines of input contains N characters, denoting one row of the board. Unfilled cells are denoted by the '.' character and filled cells by an uppercase letter. Output formatPrint the winning character or "Ongoing", as described in the task above. Sample Input 1:3XOCXOCX.. Sample Output 1:X Sample Input 2:4......A.AAB..B.B Sample Output 2:Ongoing
###Code
n = int(input('Enter the size of the board : '))
arr = []
for i in range(n):
    arr.append(list(input()))  # list() splits the row string into characters; str.split('') would raise a ValueError
print(arr)
###Output
Enter the size of the board : 3
ZOC
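###Markdown
A sketch of the full winner check (one possible approach, not a reference solution): scan every filled cell and test for three identical characters in a row horizontally, vertically, and along both diagonals.
###Code
def find_winner(board):
    n = len(board)
    # Directions: right, down, down-right diagonal, down-left diagonal
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(n):
        for c in range(n):
            ch = board[r][c]
            if ch == '.':
                continue
            for dr, dc in directions:
                r2, c2 = r + 2*dr, c + 2*dc
                if 0 <= r2 < n and 0 <= c2 < n:
                    if board[r + dr][c + dc] == ch and board[r2][c2] == ch:
                        return ch
    return 'Ongoing'

# Sample input 1 from the problem statement:
sample_board = [list(row) for row in ['XOC', 'XOC', 'X..']]
print(find_winner(sample_board))  # expected: X
###Output
_____no_output_____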
|
Pipeline progression/13_variance-weighted-done.ipynb | ###Markdown
The code so far:
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(frame):
def cal_undistort(img):
# Reads mtx and dist matrices, peforms image distortion correction and returns the undistorted image
import pickle
# Read in the saved matrices
my_dist_pickle = pickle.load( open( "output_files/calib_pickle_files/dist_pickle.p", "rb" ) )
mtx = my_dist_pickle["mtx"]
dist = my_dist_pickle["dist"]
img_size = (img.shape[1], img.shape[0])
undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
#undistorted_img = cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB) #Use if you use cv2 to import image. ax.imshow() needs RGB image
return undistorted_img
def yellow_threshold(img, sxbinary):
# Convert to HLS color space and separate the S channel
# Note: img is the undistorted image
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
h_channel = hls[:,:,0]
# Threshold color channel
s_thresh_min = 100
s_thresh_max = 255
        #For a 360-degree hue scale, my value for yellow ranged between 35 and 50, so I halved it (OpenCV hue runs 0-179)
h_thresh_min = 10
h_thresh_max = 25
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= h_thresh_min) & (h_channel <= h_thresh_max)] = 1
# Combine the two binary thresholds
yellow_binary = np.zeros_like(s_binary)
yellow_binary[(((s_binary == 1) | (sxbinary == 1) ) & (h_binary ==1))] = 1
return yellow_binary
def xgrad_binary(img, thresh_min=30, thresh_max=100):
# Grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
#thresh_min = 30 #Already given above
#thresh_max = 100
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
return sxbinary
def white_threshold(img, sxbinary, lower_white_thresh = 170):
r_channel = img[:,:,0]
g_channel = img[:,:,1]
b_channel = img[:,:,2]
# Threshold color channel
r_thresh_min = lower_white_thresh
r_thresh_max = 255
r_binary = np.zeros_like(r_channel)
r_binary[(r_channel >= r_thresh_min) & (r_channel <= r_thresh_max)] = 1
g_thresh_min = lower_white_thresh
g_thresh_max = 255
g_binary = np.zeros_like(g_channel)
g_binary[(g_channel >= g_thresh_min) & (g_channel <= g_thresh_max)] = 1
b_thresh_min = lower_white_thresh
b_thresh_max = 255
b_binary = np.zeros_like(b_channel)
b_binary[(b_channel >= b_thresh_min) & (b_channel <= b_thresh_max)] = 1
white_binary = np.zeros_like(r_channel)
white_binary[((r_binary ==1) & (g_binary ==1) & (b_binary ==1) & (sxbinary==1))] = 1
return white_binary
def thresh_img(img):
#sxbinary = xgrad_binary(img, thresh_min=30, thresh_max=100)
sxbinary = xgrad_binary(img, thresh_min=25, thresh_max=130)
yellow_binary = yellow_threshold(img, sxbinary) #(((s) | (sx)) & (h))
white_binary = white_threshold(img, sxbinary, lower_white_thresh = 150)
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[((yellow_binary == 1) | (white_binary == 1))] = 1
out_img = np.dstack((combined_binary, combined_binary, combined_binary))*255
return out_img
def perspective_transform(img):
# Define calibration box in source (original) and destination (desired or warped) coordinates
img_size = (img.shape[1], img.shape[0])
"""Notice the format used for img_size. Yaha bhi ulta hai. x axis aur fir y axis chahiye.
Apne format mein rows(y axis) and columns (x axis) hain"""
# Four source coordinates
# Order of points: top left, top right, bottom right, bottom left
src = np.array(
[[435*img.shape[1]/960, 350*img.shape[0]/540],
[530*img.shape[1]/960, 350*img.shape[0]/540],
[885*img.shape[1]/960, img.shape[0]],
[220*img.shape[1]/960, img.shape[0]]], dtype='f')
# Next, we'll define a desired rectangle plane for the warped image.
# We'll choose 4 points where we want source points to end up
# This time we'll choose our points by eyeballing a rectangle
dst = np.array(
[[290*img.shape[1]/960, 0],
[740*img.shape[1]/960, 0],
[740*img.shape[1]/960, img.shape[0]],
[290*img.shape[1]/960, img.shape[0]]], dtype='f')
#Compute the perspective transform, M, given source and destination points:
M = cv2.getPerspectiveTransform(src, dst)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
return warped, src, dst
def rev_perspective_transform(img, src, dst):
img_size = (img.shape[1], img.shape[0])
#Compute the perspective transform, M, given source and destination points:
Minv = cv2.getPerspectiveTransform(dst, src)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
un_warped = cv2.warpPerspective(img, Minv, img_size, flags=cv2.INTER_LINEAR)
return un_warped
def draw_polygon(img1, img2, src, dst):
src = src.astype(int) #Very important step (Pixels cannot be in decimals)
dst = dst.astype(int)
cv2.polylines(img1, [src], True, (255,0,0), 3)
cv2.polylines(img2, [dst], True, (255,0,0), 3)
def histogram_bottom_peaks (warped_img):
        # This will detect the bottom point of our lane lines
        # Take a histogram of the lower part of the image
        bottom_half = warped_img[((2*warped_img.shape[0])//5):,:,0] # Collecting the pixels in the bottom three fifths of the image
histogram = np.sum(bottom_half, axis=0) # Summing them along y axis (or along columns)
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
        midpoint = np.int(histogram.shape[0]//2) # histogram is a 1D array, so only its 0th shape index is filled
#print(np.shape(histogram)) #OUTPUT:(1280,)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
return leftx_base, rightx_base
def find_lane_pixels(warped_img):
leftx_base, rightx_base = histogram_bottom_peaks(warped_img)
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin. So width = 2*margin
margin = 90
# Set minimum number of pixels found to recenter window
minpix = 1000 #I've changed this from 50 as given in lectures
# Set height of windows - based on nwindows above and image shape
window_height = np.int(warped_img.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
        nonzero = warped_img.nonzero() # gives the coordinates of the nonzero pixels in 2 separate arrays
        nonzeroy = np.array(nonzero[0]) # Y coordinates as a 1D array. They will be arranged in the order of the pixels
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
        leftx_current = leftx_base # set initially; updated at the end of each window iteration
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
        left_lane_inds = [] # We'll collect the indices of the lane pixels here.
        # Indexing the 'nonzerox' array with these indices gives the coordinates
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = warped_img.shape[0] - (window+1)*window_height
win_y_high = warped_img.shape[0] - window*window_height
"""### TO-DO: Find the four below boundaries of the window ###"""
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
"""
# Create an output image to draw on and visualize the result
out_img = np.copy(warped_img)
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
"""
### TO-DO: Identify the nonzero pixels in x and y within the window ###
            # The full explanation of this expression is written on a separate page
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on the mean position of the pixels in your current window (re-centre)
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
"""return leftx, lefty, rightx, righty, out_img""" #agar rectangles bana rahe ho toh out_image rakhna
return leftx, lefty, rightx, righty
def fit_polynomial(warped_img, leftx, lefty, rightx, righty, right_fit_history, right_variance_history):
#Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty,leftx,2)
right_fit = np.polyfit(righty,rightx,2)
# Generate x and y values for plotting.
#NOTE: y is the independent variable. Refer "fit polynomial" notes for explanation
# We'll plot x as a function of y
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
        # Eqn of parabola: x = a*(y**2) + b*y + c, where a and b determine its shape. The shape of the parabola will be almost constant in our case
variance_new=0 #initializing the variable
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
        if right_fit_history is None:
a2 = (0.6*left_fit[0] + 0.4*right_fit[0])
b2 = (0.6*left_fit[1] + 0.4*right_fit[1])
c2 = (warped_img.shape[1] - (left_fit[0]*(warped_img.shape[0]-1)**2 + left_fit[1]*(warped_img.shape[0]-1) + left_fit[2]))*0.1 + 0.9*right_fit[2]
for index in range(len(rightx)):
variance_new+= abs(rightx[index]-(a2*righty[index]**2 + b2*righty[index] + c2))
variance_new=variance_new/len(rightx)
print("variance_new",variance_new)
else:
a2_new = (0.6*left_fit[0] + 0.4*right_fit[0])
b2_new = (0.6*left_fit[1] + 0.4*right_fit[1])
c2_new = (warped_img.shape[1] - (left_fit[0]*(warped_img.shape[0]-1)**2 + left_fit[1]*(warped_img.shape[0]-1) + left_fit[2]))*0.1 + 0.9*right_fit[2]
# Finding weighted average for the previous elements data within right_fit_history
a2_old= sum([(0.2*(index+1)*element[0]) for index,element in enumerate(right_fit_history)])/sum([0.2*(index+1) for index in range(0,5)])
b2_old= sum([(0.2*(index+1)*element[1]) for index,element in enumerate(right_fit_history)])/sum([0.2*(index+1) for index in range(0,5)])
c2_old= sum([(0.2*(index+1)*element[2]) for index,element in enumerate(right_fit_history)])/sum([0.2*(index+1) for index in range(0,5)])
"""Trying to find variance"""
for index in range(len(rightx)):
variance_new+= abs(rightx[index]-(a2_new*righty[index]**2 + b2_new*righty[index] + c2_new))
variance_new=variance_new/len(rightx)
print("variance_new",variance_new)
#variance_old = sum([(0.2*(index+1)*element) for index,element in enumerate(right_variance_history)])/sum([0.2*(index+1) for index in range(0,5)])
variance_old = sum([(0.2*((5-index)**3)*element) for index,element in enumerate(right_variance_history)])/sum([0.2*((5-index)**3) for index in range(0,5)])
#variance_old = right_variance_history[4]
#variance_old = sum([element for element in right_variance_history])/5
"""yaha ke coefficients variance se aa sakte hain"""
coeff_new=variance_old/(variance_new+variance_old)
coeff_old=variance_new/(variance_new+variance_old)
a2= a2_new*coeff_new + a2_old*coeff_old
b2= b2_new*coeff_new + b2_old*coeff_old
c2= c2_new*coeff_new + c2_old*coeff_old
right_fitx = a2*ploty**2 + b2*ploty + c2
status = True
#try:
# left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
# right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
status = False
return left_fit, [a2,b2,c2], left_fitx, right_fitx, status, variance_new
# out_img here has boxes drawn and the pixels are colored
def color_pixels_and_curve(out_img, leftx, lefty, rightx, righty, left_fitx, right_fitx):
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Converting the coordinates of our line into integer values as index of the image can't take decimals
left_fitx_int = left_fitx.astype(np.int32)
right_fitx_int = right_fitx.astype(np.int32)
ploty_int = ploty.astype(np.int32)
# Coloring the curve as yellow
out_img[ploty_int,left_fitx_int] = [255,255,0]
out_img[ploty_int,right_fitx_int] = [255,255,0]
# To thicken the curve
out_img[ploty_int,left_fitx_int+1] = [255,255,0]
out_img[ploty_int,right_fitx_int+1] = [255,255,0]
out_img[ploty_int,left_fitx_int-1] = [255,255,0]
out_img[ploty_int,right_fitx_int-1] = [255,255,0]
out_img[ploty_int,left_fitx_int+2] = [255,255,0]
out_img[ploty_int,right_fitx_int+2] = [255,255,0]
out_img[ploty_int,left_fitx_int-2] = [255,255,0]
out_img[ploty_int,right_fitx_int-2] = [255,255,0]
def search_around_poly(warped_img, left_fit, right_fit):
# HYPERPARAMETER
# Width of the margin around the previous polynomial to search
margin = 100
# Grab activated pixels
nonzero = warped_img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Set the search area to the activated pixels within +/- margin of the previous frame's polynomial
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty
def modify_array(array, new_value):
# Keeps a rolling history of the last 5 values:
# seed all 5 slots on the first call, otherwise shift left and append the newest value
if len(array)!=5:
for i in range(0,5):
array.append(new_value)
else:
array[0]=array[1]
array[1]=array[2]
array[2]=array[3]
array[3]=array[4]
array[4]=new_value
return array
undist_img = cal_undistort(frame)
thresholded_img = thresh_img(undist_img) # Note: this is not a binary image; it has already been stacked within the function (renamed to avoid shadowing the thresh_img function)
warped_img, src, dst = perspective_transform(thresholded_img)
#draw_polygon(frame, warped_img, src, dst) #the first image is the original image that you import into the system
print("starting count",lane.count)
if (lane.count == 0):
leftx, lefty, rightx, righty = find_lane_pixels(warped_img) # Find our lane pixels first
left_fit, right_fit, left_fitx, right_fitx, status, variance_new = fit_polynomial(warped_img, leftx, lefty, rightx, righty, right_fit_history=None, right_variance_history=None)
print("First case mein variance ye hai", variance_new)
elif (lane.count > 0):
left_fit_previous = [i[0] for i in lane.curve_fit]
right_fit_previous = [i[1] for i in lane.curve_fit]
#print(left_fit_previous)
#print(right_fit_previous)
leftx, lefty, rightx, righty = search_around_poly(warped_img, left_fit_previous[4], right_fit_previous[4])
left_fit, right_fit, left_fitx, right_fitx, status, variance_new = fit_polynomial(warped_img, leftx, lefty, rightx, righty, right_fit_history=right_fit_previous, right_variance_history=lane.right_variance)
color_pixels_and_curve(warped_img, leftx, lefty, rightx, righty, left_fitx, right_fitx)
lane.detected = status
lane.curve_fit = modify_array(lane.curve_fit,[left_fit, right_fit])
lane.right_variance = modify_array(lane.right_variance, variance_new)
print(lane.right_variance)
#lane.current_xfitted.append([left_fitx, right_fitx])
#lane.allx.append([leftx,rightx])
#lane.ally.append([lefty, righty])
#lane.image_output.append(warped_img)
unwarped_img = rev_perspective_transform(warped_img, src, dst)
lane.count = lane.count+1
return unwarped_img
###Output
_____no_output_____
###Markdown
Let's try classes
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for the right lane
self.right_variance = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
#store your image in this
#self.image_output = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
lane=Line()
frame1= mpimg.imread("my_test_images/Highway_snaps/image (1).jpg")
frame2= mpimg.imread("my_test_images/Highway_snaps/image (2).jpg")
frame3= mpimg.imread("my_test_images/Highway_snaps/image (3).jpg")
print("starting count value",lane.count)
(process_image(frame1))
(process_image(frame2))
plt.imshow(process_image(frame3))
###Output
starting count value 0
starting count 0
variance_new 16.411728943874007
Variance in the first case is 16.411728943874007
[16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007]
starting count 1
variance_new 20.454135208135213
[16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007, 20.454135208135213]
starting count 2
variance_new 14.975480798140142
[16.411728943874007, 16.411728943874007, 16.411728943874007, 20.454135208135213, 14.975480798140142]
###Markdown
Video test
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for the right lane
self.right_variance = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
#store your image in this
#self.image_output = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
#difference in fit coefficients between last and new fits
self.diffs = np.array([0,0,0], dtype='float')
lane=Line()
project_output = 'output_files/video_clips/project_video_with_history.mp4'
#clip1 = VideoFileClip("project_video.mp4")
clip1 = VideoFileClip("project_video.mp4").subclip(20,23)
project_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!
%time project_clip.write_videofile(project_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output))
###Output
_____no_output_____
###Markdown
.
###Code
import numpy as np
def modify_array(array, new_value):
if len(array)!=5:
for i in range(0,5):
array.append(new_value)
else:
dump_var=array[0]
array[0]=array[1]
array[1]=array[2]
array[2]=array[3]
array[3]=array[4]
array[4]=new_value
return array
a=[]
modify_array(a,[4,2])
modify_array(a,[7,3])
modify_array(a,[2,1])
modify_array(a,[9,6])
print(a)
Ans = [i[0] for i in a]
print(Ans)
"""a[:,0] """ # This wont work. TypeError: list indices must be integers or slices, not tuple
a = np.array(a)
modify_array(a,[1,4])
print(a)
a[:,0]
a=[[10,20,30],[30,60,80],[60,10,20], [100,20,10], [90,70,10]]
ans = sum([(0.2*(index+1)*element[0]) for index,element in enumerate(a)])/sum([0.2*(index+1) for index in range(0,5)])
print(ans)
[(0.25*(index+1)*element[0]) for index,element in enumerate(a)]
###Output
_____no_output_____ |
M2AA3/M2AA3-Polynomials/Lesson 02 - Newton Method/.ipynb_checkpoints/Newton Tableau-checkpoint.ipynb | ###Markdown
Introduction
Last time we used the Lagrange basis to interpolate a polynomial. However, it is not efficient to update the interpolating polynomial when a new data point is added, so we look at an iterative approach. Given points $\{(z_i, f_i) \}_{i=0}^{n-1}$ with the $z_i$ distinct, let $p_{n-1} \in \mathbb{C}[z]_{n-1}$ satisfy $p_{n-1}(z_i) = f_i$. We add a point $(z_n, f_n)$ and seek a polynomial $p_n \in \mathbb{C}[z]_{n}$ which satisfies $\{(z_i, f_i) \}_{i=0}^{n}$. We assume $p_n(z)$ has the form
\begin{equation}
p_n(z) = p_{n-1}(z) + C\prod_{i=0}^{n-1}(z - z_i)
\end{equation}
so that the second term vanishes at $z = z_0,...,z_{n-1}$, giving $p_n(z_i) = p_{n-1}(z_i)$ for $i = 0,...,n-1$. We also want $p_n(z_n) = f_n$, so
\begin{equation}
f_n = p_{n-1}(z_n) + C\prod_{i=0}^{n-1}(z_n - z_i) \Rightarrow C = \frac{f_n - p_{n-1}(z_n)}{\prod_{i=0}^{n-1}(z_n - z_i)}
\end{equation}
Thus we may perform the interpolation iteratively. **Example:** Last time we had
\begin{equation}
(z_0, f_0) = (-1,-3), \quad (z_1, f_1) = (0,-1), \quad (z_2, f_2) = (2,4), \quad (z_3, f_3) = (5,1)
\end{equation}
and
\begin{equation}
p_3(z) = \frac{-13}{90}z^3 + \frac{14}{45}z^2 + \frac{221}{90}z - 1
\end{equation}
###Code
z0 = -1; f0 = -3; z1 = 0; f1 = -1; z2 = 2; f2 = 4; z3 = 5; f3 = 1; z4 = 1; f4 = 1
# p3 from the previous lesson (assumes sympy is imported as sp with x = sp.symbols('x') earlier in the notebook)
p3 = -13*x**3/90 + 14*x**2/45 + 221*x/90 - 1
###Output
_____no_output_____
###Markdown
We add a point $(z_4,f_4) = (1,1)$ and obtain $p_4(x)$
###Code
z4 = 1; f4 = 1
C = (f4 - p3.subs(x,z4))/((z4-z0)*(z4-z1)*(z4-z2)*(z4-z3))
C
p4 = p3 + C*(x-z0)*(x-z1)*(x-z2)*(x-z3)
sp.expand(p4)
###Output
_____no_output_____
###Markdown
**Remark:** the constant $C$ is usually written as $f[z_0,z_1,z_2,z_3,z_4]$. Moreover, by iteration we have
$$p_n(z) = \sum_{i=0}^n f[z_0,...,z_i] \prod_{j=0}^{i-1} (z - z_j)$$
Newton Tableau
We look at efficient ways to compute $f[z_0,...,z_n]$ iteratively from $f[z_0,...,z_{n-1}]$ and $f[z_1,...,z_n]$. We may first construct $p_{n-1}$ and $q_{n-1}$ before constructing $p_n$ itself, where
\begin{gather}
p_{n-1}(z_i) = f_i \quad i = 0,...,n-1\\
q_{n-1}(z_i) = f_i \quad i = 1,...,n
\end{gather}
**Claim:** The following polynomial interpolates $\{(z_i,f_i)\}_{i=0}^n$:
\begin{equation}
p_n(z) = \frac{(z - z_n)p_{n-1}(z) - (z - z_0)q_{n-1}(z)}{z_0 - z_n}
\end{equation}
Since the interpolating polynomial is unique, comparing the coefficients of $z^n$ gives
$$f[z_0,...,z_{n}] = \frac{f[z_0,...,z_{n-1}]-f[z_1,...,z_{n}]}{z_0 - z_n}$$
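For concreteness, with three nodes the tableau reads
$$f[z_0],\ f[z_1],\ f[z_2] \;\longrightarrow\; f[z_0,z_1]=\frac{f[z_0]-f[z_1]}{z_0-z_1},\ f[z_1,z_2]=\frac{f[z_1]-f[z_2]}{z_1-z_2} \;\longrightarrow\; f[z_0,z_1,z_2]=\frac{f[z_0,z_1]-f[z_1,z_2]}{z_0-z_2}$$
and the first entries of the successive columns, $f[z_0],\ f[z_0,z_1],\ f[z_0,z_1,z_2]$, are exactly the Newton coefficients collected by the `forward` branch of the code below.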
###Code
def product(xs,key,i):
#Key: Forward or Backward
n = len(xs)-1
l = 1
for j in range(i):
if key == 'forward':
l *= (x - xs[j])
else:
l *= (x - xs[n-j])
return l
def newton(xs,ys,key):
# Key: Forward or Backward
n = len(xs)-1
# print(xs)
print(ys)
old_column = ys
if key == 'forward':
coeff = [ys[0]] # f[z_0] (fixed: referenced the global fs instead of the ys argument)
elif key == 'backward':
coeff = [ys[len(ys)-1]] # f[z_n]
else:
return 'error'
for i in range(1,n+1): # Column Index
new_column = [(old_column[j+1] - old_column[j])/(xs[j+i] - xs[j]) for j in range(n-i+1)]
print(new_column)
if key == 'forward':
coeff.append(new_column[0])
else:
coeff.append(new_column[len(new_column)-1])
old_column = new_column
# print(coeff)
poly = 0
for i in range(n+1):
poly += coeff[i] * product(xs,key,i)
return poly
zs = [1, 4/3, 5/3, 2]; fs = [np.sin(x) for x in zs]
p = newton(zs,fs,'forward')
print(p)
print(sp.simplify(p))
###Output
[0.8414709848078965, 0.9719379013633127, 0.9954079577517649, 0.9092974268256817]
[0.3914007496662487, 0.07041016916535667, -0.25833159277824974]
[-0.481485870751338, -0.4931126429154095]
[-0.011626772164071542]
|
docs/contents/same_value.ipynb | ###Markdown
Same value
###Code
import evidence as evi
datum1 = evi.Evidence(3.12)
datum1.add_reference({'database':'DOI', 'id':'XXX'})
datum2 = evi.Evidence(3.12)
datum2.add_reference({'database':'PubMed', 'id':'YYY'})
datum3 = evi.Evidence(3.12)
datum3.add_reference({'database':'PubMed', 'id':'ZZZ'})
datum4 = evi.Evidence(6.58)
datum4.add_reference({'database':'PubMed', 'id':'ZZZ'})
datum1.value == datum2.value
evi.same_value([datum1, datum2, datum3])
evi.same_value([datum1, datum2, datum4])
###Output
_____no_output_____ |
experiments/old/experiment_07_omalizumab_full.ipynb | ###Markdown
General pruning
###Code
need_pruning = True
method = 'pruning'
methods = []
splits = []
explanations = []
explanations_inv = []
model_accuracies = []
explanation_accuracies = []
explanation_accuracies_inv = []
elapsed_times = []
elapsed_times_inv = []
for split, (train_index, test_index) in enumerate(skf.split(x.cpu().detach().numpy(), y.cpu().detach().numpy())):
print(f'Split [{split+1}/{n_splits}]')
x_train, x_test = torch.FloatTensor(x[train_index]), torch.FloatTensor(x[test_index])
y_train, y_test = torch.LongTensor(y[train_index]), torch.LongTensor(y[test_index])
# if split not in [5]: continue
model = train_nn(x_train, y_train, need_pruning, split, device)
y_preds = model(x_test.to(device)).cpu().detach().numpy()
model_accuracy = accuracy_score(y_test.cpu().detach().numpy(), y_preds.argmax(axis=1))
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
# positive class
target_class = 1
start = time.time()
global_explanation, _, counter = logic.relunn.combine_local_explanations(model,
x_train.to(device), y_train.to(device),
target_class=target_class,
topk_explanations=2,
method=method, device=device)
elapsed_time = time.time() - start
if global_explanation:
explanation_accuracy, _ = logic.base.test_explanation(global_explanation, target_class, x_test, y_test)
explanation = logic.base.replace_names(global_explanation, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation}" - Accuracy: {explanation_accuracy:.4f}')
print(f'\t Elapsed time {elapsed_time}')
# negative class
target_class = 0
start = time.time()
global_explanation_inv, _, counter_inv = logic.relunn.combine_local_explanations(model,
x_train.to(device), y_train.to(device),
target_class=target_class,
topk_explanations=2,
method=method, device=device)
elapsed_time_inv = time.time() - start
if global_explanation_inv:
explanation_accuracy_inv, _ = logic.base.test_explanation(global_explanation_inv, target_class, x_test, y_test)
explanation_inv = logic.base.replace_names(global_explanation_inv, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation_inv}" - Accuracy: {explanation_accuracy_inv:.4f}')
print(f'\t Elapsed time {elapsed_time_inv}')
methods.append(method)
splits.append(split)
explanations.append(explanation)
explanations_inv.append(explanation_inv)
model_accuracies.append(model_accuracy)
explanation_accuracies.append(explanation_accuracy)
explanation_accuracies_inv.append(explanation_accuracy_inv)
elapsed_times.append(elapsed_time)
elapsed_times_inv.append(elapsed_time_inv)
results_pruning = pd.DataFrame({
'method': methods,
'split': splits,
'explanation': explanations,
'explanation_inv': explanations_inv,
'model_accuracy': model_accuracies,
'explanation_accuracy': explanation_accuracies,
'explanation_accuracy_inv': explanation_accuracies_inv,
'elapsed_time': elapsed_times,
'elapsed_time_inv': elapsed_times_inv,
})
results_pruning.to_csv(os.path.join(results_dir, 'results_pruning.csv'))
results_pruning
###Output
_____no_output_____
###Markdown
LIME
###Code
need_pruning = False
method = 'lime'
methods = []
splits = []
explanations = []
explanations_inv = []
model_accuracies = []
explanation_accuracies = []
explanation_accuracies_inv = []
elapsed_times = []
elapsed_times_inv = []
for seed in range(n_rep):
print(f'Seed [{seed+1}/{n_rep}]')
model = train_nn(x_train, y_train, need_pruning, seed, device)
y_preds = model(x_test.to(device)).cpu().detach().numpy()
model_accuracy = accuracy_score(y_test.cpu().detach().numpy(), y_preds.argmax(axis=1))
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
# positive class
target_class = 1
start = time.time()
global_explanation, _, _ = logic.relunn.combine_local_explanations(model,
x_train.to(device), y_train.to(device),
topk_explanations=2,
target_class=target_class,
method=method, device=device)
elapsed_time = time.time() - start
if global_explanation:
explanation_accuracy, _ = logic.base.test_explanation(global_explanation, target_class, x_test, y_test)
explanation = logic.base.replace_names(global_explanation, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation}" - Accuracy: {explanation_accuracy:.4f}')
print(f'\t Elapsed time {elapsed_time}')
# negative class
target_class = 0
start = time.time()
global_explanation_inv, _, _ = logic.relunn.combine_local_explanations(model,
x_train.to(device), y_train.to(device),
topk_explanations=2,
target_class=target_class,
method=method, device=device)
elapsed_time_inv = time.time() - start
if global_explanation_inv:
explanation_accuracy_inv, _ = logic.base.test_explanation(global_explanation_inv, target_class, x_test, y_test)
explanation_inv = logic.base.replace_names(global_explanation_inv, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation_inv}" - Accuracy: {explanation_accuracy_inv:.4f}')
print(f'\t Elapsed time {elapsed_time_inv}')
methods.append(method)
splits.append(seed)
explanations.append(explanation)
explanations_inv.append(explanation_inv)
model_accuracies.append(model_accuracy)
explanation_accuracies.append(explanation_accuracy)
explanation_accuracies_inv.append(explanation_accuracy_inv)
elapsed_times.append(elapsed_time)
elapsed_times_inv.append(elapsed_time_inv)
results_lime = pd.DataFrame({
'method': methods,
'split': splits,
'explanation': explanations,
'explanation_inv': explanations_inv,
'model_accuracy': model_accuracies,
'explanation_accuracy': explanation_accuracies,
'explanation_accuracy_inv': explanation_accuracies_inv,
'elapsed_time': elapsed_times,
'elapsed_time_inv': elapsed_times_inv,
})
results_lime.to_csv(os.path.join(results_dir, 'results_lime.csv'))
results_lime
###Output
_____no_output_____
###Markdown
Weights
###Code
need_pruning = False
method = 'weights'
methods = []
splits = []
explanations = []
explanations_inv = []
model_accuracies = []
explanation_accuracies = []
explanation_accuracies_inv = []
elapsed_times = []
elapsed_times_inv = []
for split, (train_index, test_index) in enumerate(skf.split(x.cpu().detach().numpy(), y.cpu().detach().numpy())):
print(f'Split [{split+1}/{n_splits}]')
x_train, x_test = torch.FloatTensor(x[train_index]), torch.FloatTensor(x[test_index])
y_train, y_test = torch.LongTensor(y[train_index]), torch.LongTensor(y[test_index])
model = train_nn(x_train, y_train, need_pruning, split, device, relu=True)
y_preds = model(x_test.to(device)).cpu().detach().numpy()
model_accuracy = accuracy_score(y_test.cpu().detach().numpy(), y_preds.argmax(axis=1))
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
# positive class
target_class = 1
start = time.time()
global_explanation, _, _ = logic.relunn.combine_local_explanations(model,
x_train.to(device), y_train.to(device),
topk_explanations=2,
target_class=target_class,
method=method, device=device)
elapsed_time = time.time() - start
if global_explanation:
explanation_accuracy, _ = logic.base.test_explanation(global_explanation, target_class, x_test, y_test)
explanation = logic.base.replace_names(global_explanation, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation}" - Accuracy: {explanation_accuracy:.4f}')
print(f'\t Elapsed time {elapsed_time}')
# negative class
target_class = 0
start = time.time()
global_explanation_inv, _, _ = logic.relunn.combine_local_explanations(model,
x_train.to(device), y_train.to(device),
topk_explanations=2,
target_class=target_class,
method=method, device=device)
elapsed_time_inv = time.time() - start
if global_explanation_inv:
explanation_accuracy_inv, _ = logic.base.test_explanation(global_explanation_inv, target_class, x_test, y_test)
explanation_inv = logic.base.replace_names(global_explanation_inv, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation_inv}" - Accuracy: {explanation_accuracy_inv:.4f}')
print(f'\t Elapsed time {elapsed_time_inv}')
methods.append(method)
splits.append(split)
explanations.append(explanation)
explanations_inv.append(explanation_inv)
model_accuracies.append(model_accuracy)
explanation_accuracies.append(explanation_accuracy)
explanation_accuracies_inv.append(explanation_accuracy_inv)
elapsed_times.append(elapsed_time)
elapsed_times_inv.append(elapsed_time_inv)
results_weights = pd.DataFrame({
'method': methods,
'split': splits,
'explanation': explanations,
'explanation_inv': explanations_inv,
'model_accuracy': model_accuracies,
'explanation_accuracy': explanation_accuracies,
'explanation_accuracy_inv': explanation_accuracies_inv,
'elapsed_time': elapsed_times,
'elapsed_time_inv': elapsed_times_inv,
})
results_weights.to_csv(os.path.join(results_dir, 'results_weights.csv'))
results_weights
###Output
_____no_output_____
###Markdown
Psi network
###Code
def train_psi_nn(x_train, y_train, need_pruning, seed, device):
set_seed(seed)
x_train = x_train.to(device)
y_train = y_train.to(device).to(torch.float)
layers = [
torch.nn.Linear(x_train.size(1), 10),
torch.nn.Sigmoid(),
torch.nn.Linear(10, 4),
torch.nn.Sigmoid(),
torch.nn.Linear(4, 1),
torch.nn.Sigmoid(),
]
model = torch.nn.Sequential(*layers).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_form = torch.nn.BCELoss()
model.train()
for epoch in range(tot_epochs):
# forward pass
optimizer.zero_grad()
y_pred = model(x_train).squeeze()
# Compute Loss
loss = loss_form(y_pred, y_train)
for module in model.children():
if isinstance(module, torch.nn.Linear):
loss += 0.0001 * torch.norm(module.weight, 1)
# backward pass
loss.backward()
optimizer.step()
if epoch > 1500 and need_pruning:
model = prune_equal_fanin(model, 2, validate=True, device=device)
need_pruning = False
# compute accuracy
if epoch % 500 == 0:
y_pred_d = y_pred > 0.5
accuracy = y_pred_d.eq(y_train).sum().item() / y_train.size(0)
print(f'\t Epoch {epoch}: train accuracy: {accuracy:.4f}')
return model
need_pruning = True
method = 'psi'
methods = []
splits = []
explanations = []
explanations_inv = []
model_accuracies = []
explanation_accuracies = []
explanation_accuracies_inv = []
elapsed_times = []
elapsed_times_inv = []
for seed in range(n_rep):
print(f'Seed [{seed+1}/{n_rep}]')
# positive class
target_class = 1
model = train_psi_nn(x_train, y_train, need_pruning, seed, device)
y_preds = model(x_test.to(device)).cpu().detach().numpy()
model_accuracy = accuracy_score(y_test.cpu().detach().numpy(), y_preds > 0.5)
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
start = time.time()
global_explanation = logic.generate_fol_explanations(model, device)[0]
elapsed_time = time.time() - start
explanation_accuracy, _ = logic.base.test_explanation(global_explanation, target_class, x_test, y_test)
explanation = logic.base.replace_names(global_explanation, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation}" - Accuracy: {explanation_accuracy:.4f}')
print(f'\t Elapsed time {elapsed_time}')
# negative class
target_class = 0
model = train_psi_nn(x_train, y_train.eq(target_class), need_pruning, seed, device)
y_preds = model(x_test.to(device)).cpu().detach().numpy()
model_accuracy = accuracy_score(y_test.eq(target_class).cpu().detach().numpy(), y_preds > 0.5)
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
start = time.time()
global_explanation_inv = logic.generate_fol_explanations(model, device)[0]
elapsed_time_inv = time.time() - start
explanation_accuracy_inv, _ = logic.base.test_explanation(global_explanation_inv,
target_class, x_test, y_test)
explanation_inv = logic.base.replace_names(global_explanation_inv, concepts)
print(f'\t Class {target_class} - Global explanation: "{explanation_inv}" - Accuracy: {explanation_accuracy_inv:.4f}')
print(f'\t Elapsed time {elapsed_time_inv}')
methods.append(method)
splits.append(seed)
explanations.append(explanation)
explanations_inv.append(explanation_inv)
model_accuracies.append(model_accuracy)
explanation_accuracies.append(explanation_accuracy)
explanation_accuracies_inv.append(explanation_accuracy_inv)
elapsed_times.append(elapsed_time)
elapsed_times_inv.append(elapsed_time_inv)
results_psi = pd.DataFrame({
'method': methods,
'split': splits,
'explanation': explanations,
'explanation_inv': explanations_inv,
'model_accuracy': model_accuracies,
'explanation_accuracy': explanation_accuracies,
'explanation_accuracy_inv': explanation_accuracies_inv,
'elapsed_time': elapsed_times,
'elapsed_time_inv': elapsed_times_inv,
})
results_psi.to_csv(os.path.join(results_dir, 'results_psi.csv'))
results_psi
###Output
_____no_output_____
###Markdown
Decision tree
###Code
need_pruning = False
method = 'decision_tree'
methods = []
splits = []
explanations = []
explanations_inv = []
model_accuracies = []
explanation_accuracies = []
explanation_accuracies_inv = []
elapsed_times = []
elapsed_times_inv = []
for split, (train_index, test_index) in enumerate(skf.split(x.cpu().detach().numpy(), y.cpu().detach().numpy())):
print(f'Split [{split+1}/{n_splits}]')
x_train, x_test = x[train_index], x[test_index]
y_train, y_test = y[train_index], y[test_index]
classifier = DecisionTreeClassifier(random_state=split)
classifier.fit(x_train.cpu().detach().numpy(), y_train.cpu().detach().numpy())
y_preds = classifier.predict(x_test.cpu().detach().numpy())
model_accuracy = accuracy_score(y_test.cpu().detach().numpy(), y_preds)
print(f'\t Model\'s accuracy: {model_accuracy:.4f}')
target_class = 1
start = time.time()
explanation = tree_to_formula(classifier, concepts, target_class)
elapsed_time = time.time() - start
print(f'\t Class {target_class} - Global explanation: {explanation}')
print(f'\t Elapsed time {elapsed_time}')
target_class = 0
start = time.time()
explanation_inv = tree_to_formula(classifier, concepts, target_class)
elapsed_time_inv = time.time() - start
print(f'\t Class {target_class} - Global explanation: {explanation_inv}')
print(f'\t Elapsed time {elapsed_time_inv}')
methods.append(method)
splits.append(split)
explanations.append(explanation)
explanations_inv.append(explanation_inv)
model_accuracies.append(model_accuracy)
explanation_accuracies.append(model_accuracy)
explanation_accuracies_inv.append(model_accuracy)
elapsed_times.append(elapsed_time)
elapsed_times_inv.append(elapsed_time_inv)
results_tree = pd.DataFrame({
'method': methods,
'split': splits,
'explanation': explanations,
'explanation_inv': explanations_inv,
'model_accuracy': model_accuracies,
'explanation_accuracy': explanation_accuracies,
'explanation_accuracy_inv': explanation_accuracies_inv,
'elapsed_time': elapsed_times,
'elapsed_time_inv': elapsed_times_inv,
})
results_tree.to_csv(os.path.join(results_dir, 'results_tree.csv'))
results_tree
###Output
_____no_output_____
###Markdown
Summary
###Code
cols = ['model_accuracy', 'explanation_accuracy', 'explanation_accuracy_inv', 'elapsed_time', 'elapsed_time_inv']
mean_cols = [f'{c}_mean' for c in cols]
sem_cols = [f'{c}_sem' for c in cols]
# pruning
df_mean = results_pruning[cols].mean()
df_sem = results_pruning[cols].sem()
df_mean.columns = mean_cols
df_sem.columns = sem_cols
summary_pruning = pd.concat([df_mean, df_sem])
summary_pruning.name = 'pruning'
# lime
df_mean = results_lime[cols].mean()
df_sem = results_lime[cols].sem()
df_mean.columns = mean_cols
df_sem.columns = sem_cols
summary_lime = pd.concat([df_mean, df_sem])
summary_lime.name = 'lime'
# weights
df_mean = results_weights[cols].mean()
df_sem = results_weights[cols].sem()
df_mean.columns = mean_cols
df_sem.columns = sem_cols
summary_weights = pd.concat([df_mean, df_sem])
summary_weights.name = 'weights'
# psi
df_mean = results_psi[cols].mean()
df_sem = results_psi[cols].sem()
df_mean.columns = mean_cols
df_sem.columns = sem_cols
summary_psi = pd.concat([df_mean, df_sem])
summary_psi.name = 'psi'
# tree
df_mean = results_tree[cols].mean()
df_sem = results_tree[cols].sem()
df_mean.columns = mean_cols
df_sem.columns = sem_cols
summary_tree = pd.concat([df_mean, df_sem])
summary_tree.name = 'tree'
summary = pd.concat([summary_pruning,
summary_lime,
summary_weights,
summary_psi,
summary_tree], axis=1).T
summary.columns = mean_cols + sem_cols
summary
summary.to_csv(os.path.join(results_dir, 'summary.csv'))
###Output
_____no_output_____ |
Dynamic Programming/1005/516. Longest Palindromic Subsequence.ipynb | ###Markdown
็ฌฌไบ็ฑปๅบ้ดๅDP๏ผ ็ปๅฎๅญ็ฌฆไธฒs๏ผๆพๅฐๆ้ฟๅๆๅญๅบๅ็้ฟๅบฆsใๆจๅฏไปฅๅ่ฎพs็ๆๅคง้ฟๅบฆไธบ1000ใExample 1: Input: "bbbab" Output: 4 One possible longest palindromic subsequence is "bbbb". Example 2: Input: "cbbd" Output: 2 One possible longest palindromic subsequence is "bb".Constraints: 1ใ1 <= s.length <= 1000 2ใs consists only of lowercase English letters.
###Code
class Solution:
def longestPalindromeSubseq(self, s: str) -> int:
# First attempt: dp[l][end] is the longest palindromic subsequence length of the
# window of length l ending at index end (the string is padded so indices start at 1)
s = '0' + s
len_s = len(s)
dp = [[0] * len_s for _ in range(len_s)]
for i in range(1, len_s):
dp[1][i] = 1
for sub_len in range(2, len_s):
start = 1
while start + sub_len - 1 < len_s:
end = start + sub_len - 1
if s[start] == s[end]:
dp[sub_len][end] = dp[sub_len - 2][end - 1] + 2
else:
dp[sub_len][end] = max(dp[sub_len - 1][end], dp[sub_len - 1][end - 1])
start += 1
return dp[-1][-1]
class Solution:
def longestPalindromeSubseq(self, s: str) -> int:
s = '0' + s
len_s = len(s)
# dp[i][j] ไปฃ่กจไบไปs็็ฌฌ i ไธช idx ๅฐ j ไธชidx๏ผๅๆๆฐๆๅคง้ฟๅบฆ
dp = [[0] * len_s for _ in range(len_s)]
for i in range(1, len_s):
dp[i][i] = 1
for sub_len in range(2, len_s + 1): # ไปฃ่กจไบไปsไธญๆชๅไธๅ้ฟๅบฆ็sub_s
start = 1
while start + sub_len - 1 < len_s:
end = start + sub_len - 1 # ๆซๅฐพๅญ็ฌฆไธฒ็็ดขๅผ
print(s[start:end+1], sub_len, start, end)
if s[start] == s[end]:
dp[start][end] = max(dp[start][end], dp[start+1][end-1] + 2)
else:
dp[start][end] = max(dp[start+1][end], dp[start][end-1])
start += 1
return dp[1][-1]
solution = Solution()
solution.longestPalindromeSubseq('cbbd')
###Output
cb 2 1 2
bb 2 2 3
bd 2 3 4
cbb 3 1 3
bbd 3 2 4
cbbd 4 1 4
|
notebooks/results_downstream_fully_obs.ipynb | ###Markdown
Visualize Results: Downstream Performance - "Fully Observed" Experiment
This notebook should answer the question: *Does imputation lead to better downstream performance?*
Notebook Structure
* Application Scenario 2 - Downstream Performance
* Categorical Columns (Classification)
* Numerical Columns (Regression)
* Heterogeneous Columns (Classification and Regression Combined)
###Code
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import os
import pandas as pd
import re
import seaborn as sns
from pathlib import Path
from data_imputation_paper.experiment import read_experiment, read_csv_files
from data_imputation_paper.plotting import draw_cat_box_plot
%matplotlib inline
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Settings
###Code
sns.set(style="whitegrid")
sns.set_context('paper', font_scale=1.5)
mpl.rcParams['lines.linewidth'] = '2'
EXPERIMENT = "fully_observed_fix"
EXPERIMENT_PATH = Path(f"../data/experiments/{EXPERIMENT}/")
CLF_METRIC = "Classification Tasks"
REG_METRIC = "Regression Tasks"
DOWNSTREAM_RESULT_TYPE = "downstream_performance_mean"
IMPUTE_RESULT_TYPE = "impute_performance_mean"
FIGURES_PATH = Path(f"../paper/figures/")
###Output
_____no_output_____
###Markdown
Import the data
###Code
%%time
results = read_csv_files(read_experiment(EXPERIMENT_PATH), read_details=False)
results.head()
na_impute_results = results[
(results["result_type"] == IMPUTE_RESULT_TYPE) &
(results["metric"].isin(["F1_macro", "RMSE"]))
].copy()  # copy() avoids SettingWithCopyWarning on the in-place drop below
na_impute_results.drop(["baseline", "corrupted", "imputed"], axis=1, inplace=True)
na_impute_results = na_impute_results[na_impute_results.isna().any(axis=1)]
na_impute_results.shape
downstream_results = results[
(results["result_type"] == DOWNSTREAM_RESULT_TYPE) &
(results["metric"].isin(["F1_macro", "RMSE"]))
]
# remove experiments where imputation failed
downstream_results = downstream_results.merge(
na_impute_results,
how = "left",
validate = "one_to_one",
indicator = True,
suffixes=("", "_imp"),
on = ["experiment", "imputer", "task", "missing_type", "missing_fraction", "strategy", "column"]
)
downstream_results = downstream_results[downstream_results["_merge"]=="left_only"]
assert len(results["strategy"].unique()) == 1
downstream_results.drop(["experiment", "strategy", "result_type_imp", "metric_imp", "train", "test", "train_imp", "test_imp", "_merge"], axis=1, inplace=True)
downstream_results = downstream_results.rename(
{
"imputer": "Imputation Method",
"task": "Task",
"missing_type": "Missing Type",
"missing_fraction": "Missing Fraction",
"column": "Column",
"baseline": "Baseline",
"imputed": "Imputed",
"corrupted": "Corrupted"
},
axis = 1
)
rename_imputer_dict = {
"ModeImputer": "Mean/Mode",
"KNNImputer": "$k$-NN",
"ForestImputer": "Random Forest",
"AutoKerasImputer": "Discriminative DL",
"VAEImputer": "VAE",
"GAINImputer": "GAIN"
}
rename_metric_dict = {
"F1_macro": CLF_METRIC,
"RMSE": REG_METRIC
}
downstream_results = downstream_results.replace(rename_imputer_dict)
downstream_results = downstream_results.replace(rename_metric_dict)
downstream_results
###Output
_____no_output_____
###Markdown
Robustness: check which imputers yielded `NaN`values
###Code
for col in downstream_results.columns:
na_sum = downstream_results[col].isna().sum()
if na_sum > 0:
print("-----" * 10)
print(col, na_sum)
print("-----" * 10)
na_idx = downstream_results[col].isna()
print(downstream_results.loc[na_idx, "Imputation Method"].value_counts(dropna=False))
print("\n")
###Output
_____no_output_____
###Markdown
Compute Downstream Performance relative to Baseline
###Code
clf_row_idx = downstream_results["metric"] == CLF_METRIC
reg_row_idx = downstream_results["metric"] == REG_METRIC
downstream_results["Improvement"] = (downstream_results["Imputed"] - downstream_results["Corrupted"] ) / downstream_results["Baseline"]
downstream_results.loc[reg_row_idx, "Improvement"] = downstream_results.loc[reg_row_idx, "Improvement"] * -1
downstream_results
###Output
_____no_output_____
###Markdown
Application Scenario 2 - Downstream Performance Categorical Columns (Classification)
###Code
draw_cat_box_plot(
downstream_results,
"Improvement",
(-0.15, 0.3),
FIGURES_PATH,
"fully_observed_downstream_boxplot.eps",
hue_order=list(rename_imputer_dict.values()),
row_order=list(rename_metric_dict.values())
)
###Output
_____no_output_____ |
src/notebooks/252-baseline-options-for-stacked-area-chart.ipynb | ###Markdown
The `stackplot()` function of [matplotlib](http://python-graph-gallery.com/matplotlib/) makes it possible to build a [stacked area chart](http://python-graph-gallery.com/stacked-area-plot/). It provides a **baseline** argument that controls how the areas are positioned around the baseline. Four possibilities exist, and they are represented here. This chart is strongly inspired by the [Hooked](http://thoppe.github.io/) answer on this [stack overflow question](https://stackoverflow.com/questions/2225995/how-can-i-create-stacked-line-graph-with-matplotlib), thanks to him!
###Code
# libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Create data
X = np.arange(0, 10, 1)
Y = X + 5 * np.random.random((5, X.size))
# There are 4 types of baseline we can use:
baseline = ["zero", "sym", "wiggle", "weighted_wiggle"]
# Let's make 4 plots, 1 for each baseline
for n, v in enumerate(baseline):
plt.subplot(2, 2, n + 1)
plt.stackplot(X, *Y, baseline=v)
plt.title(v)
if n < 2:
# hide the x tick labels on the top row
plt.tick_params(labelbottom=False)
plt.tight_layout()
###Output
_____no_output_____ |
project/tgkim/newton/notebooks/NewtonRNN (PyTorch - SHOF).ipynb | ###Markdown
Load Data
###Code
sho = netCDF4.Dataset('../data/sho_friction2.nc').variables
t_sho = np.array(sho['t'][:], dtype=np.float32)
s_sho = np.array(sho['s'][:], dtype=np.float32)
v_sho = np.array(sho['v'][:], dtype=np.float32)
plt.figure(figsize=(10, 6), dpi=150)
plt.plot(t_sho, s_sho)
plt.show()
plt.figure(figsize=(10, 6), dpi=150)
plt.plot(t_sho, v_sho)
plt.show()
# X_total = np.column_stack([s_sho, v_sho])
X_total = s_sho.reshape(-1, 1)
sc = MinMaxScaler()
X_normalized = sc.fit_transform(X_total)
plt.figure(figsize=(10, 6), dpi=150)
plt.plot(t_sho, X_normalized[:,0])
plt.show()
def sliding_window(data, seq_length):
x = []
y = []
for i in range(len(data)-seq_length-1):
_x = data[i:(i+seq_length)]
_y = data[i+seq_length]
x.append(_x)
y.append(_y)
return np.array(x),np.array(y)
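# e.g. for 1-D data [1, 2, 3, 4, 5] and seq_length=2: x = [[1, 2], [2, 3]], y = [3, 4]
# (the final window is left out because of the -1 in the range bound)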
def train_test(df, test_periods):
train = df[:-test_periods]
test = df[-test_periods:]
return train, test
X, y = sliding_window(X_normalized, 10)
X.shape
y.shape
N = X.shape[0]
N_train = 100
N_test = N - N_train
X_train = X[:N_train]
y_train = y[:N_train]
X_test = X[N_train:150]
y_test = y[N_train:150]
class NewtonData(Dataset):
def __init__(self, X, y):
self.X = X
self.y = y
def __len__(self):
return self.X.shape[0]
def __getitem__(self, idx):
return self.X[idx], self.y[idx]
ds_train = NewtonData(X_train, y_train)
ds_val = NewtonData(X_test, y_test)
len(ds_train)
len(ds_val)
ds_train[0][0].shape
dl_train = DataLoader(ds_train, batch_size=10, shuffle=True)
ds_train[0]
class SingleRNN(nn.Module):
def __init__(self, input_size, hidden_size, dropout=0, bidirectional=False):
super(SingleRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.num_direction = int(bidirectional) + 1
self.rnn = nn.LSTM(input_size, hidden_size, 1, dropout=dropout, batch_first=True, bidirectional=bidirectional)
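# note: with num_layers=1 the LSTM dropout argument has no effect
# (PyTorch only applies dropout between stacked recurrent layers)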
self.fc = nn.Linear(hidden_size, 1)
def forward(self, x):
# input shape: batch, seq, dim
rnn_output, _ = self.rnn(x)
output = self.fc(rnn_output)
return output[:,-1,:]
model = SingleRNN(input_size=1, hidden_size=5)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
model.eval()
dl_iter = iter(dl_train)
x_one, y_one = next(dl_iter)
x_one.shape
y_one.shape
model(x_one).shape
epochs = 500
model.train()
for epoch in range(epochs+1):
for x, y in dl_train:
y_hat = model(x)
optimizer.zero_grad()
loss = criterion(y_hat, y)
loss.backward()
optimizer.step()
if epoch%100==0:
print(f'epoch: {epoch:4} loss:{loss.item():10.8f}')
model.eval()
X, y = sliding_window(X_normalized, 10)
total_data = NewtonData(X, y)
dl = DataLoader(total_data, batch_size=len(total_data))
dl_iter = iter(dl)
X, y = next(dl_iter)
y.shape
t = t_sho[10:-1]
t.shape
plt.plot(t, y.cpu().numpy())
y_pred = model(X)
plt.plot(t, y_pred.detach().numpy())
plt.figure(figsize=(10, 6), dpi=150)
plt.plot(t, y.cpu().numpy())
plt.plot(t, y_pred.detach().numpy())
plt.show()
X.shape
t.shape
N_extrap = 190
X_new = X[0]
X_new.shape
X_new.view(1,-1,1)
y_ex = []
t_ex = []
dt = 1e-1
# Closed-loop extrapolation: feed each prediction back in as the newest input step
for i in range(N_extrap):
y_new = model(X_new.view(1, -1, 1))
X_new = torch.concat([X_new[1:], y_new]) # drop the oldest step, append the prediction
t_ex.append(t[0] + i * dt)
y_ex.append(y_new.view(-1).detach().numpy())
# t_total = np.concatenate([t[0], t_ex])
t_total = t_ex
# y_pred_total = np.concatenate([y_pred.detach().numpy()[0:100], y_ex])
y_pred_total = y_ex
shoo = netCDF4.Dataset('../data/sho_friction3.nc').variables
t_shoo = np.array(shoo['t'][:], dtype=np.float32)
s_shoo = np.array(shoo['s'][:], dtype=np.float32)
v_shoo = np.array(shoo['v'][:], dtype=np.float32)
sc = MinMaxScaler()
s_shoo_new = sc.fit_transform(s_shoo.reshape(-1, 1))
plt.figure(figsize=(10, 6), dpi=300)
plt.plot(t, y.cpu().numpy(), '--', alpha=0.8)
plt.plot(t_total, y_pred_total, '--', alpha=0.8)
plt.plot(t_shoo[10:], s_shoo_new[10:], '--', alpha=0.8)
# plt.axvline(t[100], linestyle='--', color='r')
plt.show()
###Output
_____no_output_____ |
notebooks/1k3f_pc-saft_params.ipynb | ###Markdown
Guessing the Parameters for a PC-SAFT Model of 1k3f (VORATEC SD 301) PolyolBegun July 24, 2021 to produce plots for ICTAM 2020+1.**NOTE: ALL CALCULATIONS SHOWN HERE USE N = 41, BUT TO BE CONSISTENT WITH N = 123 FOR 3K2F (2700 G/MOL), SHOULD USE N = 45 (~123/2.7)**
###Code
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import dataproc
import plot
from importlib import reload
# System parameters
# molecular weight of CO2
mw_co2 = 44.01
# conversion of m3 per mL
m3_per_mL = 1E-6
# Save plots?
save_plots = True
# file path to saved data
data_folder = '../g-adsa_results/'
# csv data files
csv_file_list = ['1k3f_30c', '1k3f_60c']
###Output
_____no_output_____
###Markdown
Loads data into dictionary.
###Code
d = dataproc.load_proc_data(csv_file_list, data_folder)
###Output
_____no_output_____
###Markdown
Compare results of prediction with guessed PC-SAFT parameters to actual data. Solubility
###Code
tk_fs = 18
ax_fs = 20
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_30c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(111)
# plots interfacial tension for 30 C
ax = plot.sensitivity_manual(d, d_dft, '1k3f_30c', 'solub', 'sigma', 3.17, data_folder, '', ['epsn_233-0~sigma_3-01'],
color='#1181B3', ms=10, m_ads='o', m_des='o', lw=4, ax=ax)
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_60c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
ax = plot.sensitivity_manual(d, d_dft, '1k3f_60c', 'solub', 'sigma', 3.17, data_folder, '', ['epsn_233-0~sigma_3-01'],
color='#B74A0D', ms=10, m_ads='o', m_des='o', lw=4, ax=ax)
ax.tick_params(labelsize=tk_fs)
ax.set_title('')
ax.set_xlabel(ax.xaxis.get_label().get_text(), fontsize=ax_fs)
ax.set_ylabel(ax.yaxis.get_label().get_text(), fontsize=ax_fs)
###Output
Analyzing dft_pred//1k3f_30c_sensitivity\1k3f_30c.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_229-3~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_233-0~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_263-0~sigma_3-17.csv
Analyzing dft_pred//1k3f_60c_sensitivity\1k3f_60c.csv
Analyzing dft_pred//1k3f_60c_sensitivity\epsn_229-3~sigma_3-01.csv
Analyzing dft_pred//1k3f_60c_sensitivity\epsn_233-0~sigma_3-01.csv
Analyzing dft_pred//1k3f_60c_sensitivity\epsn_263-0~sigma_3-17.csv
###Markdown
Interfacial Tension
###Code
reload(dataproc)
reload(plot)
ax_fs = 20
tk_fs = 18
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_30c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(111)
# plots interfacial tension for 30 C
ax = plot.sensitivity_manual(d, d_dft, '1k3f_30c', 'if_tension', 'sigma', 3.17, data_folder, '', [],
color='#1181B3', ms=10, m_ads='o', m_des='o', lw=4, ax=ax)
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_60c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
ax = plot.sensitivity_manual(d, d_dft, '1k3f_60c', 'if_tension', 'sigma', 3.17, data_folder, '', [],
color='#B74A0D', ms=10, m_ads='o', m_des='o', lw=4, ax=ax)
ax.tick_params(labelsize=tk_fs)
ax.set_title('')
ax.set_xlabel(ax.xaxis.get_label().get_text(), fontsize=ax_fs)
ax.set_ylabel(ax.yaxis.get_label().get_text(), fontsize=ax_fs)
###Output
_____no_output_____
###Markdown
Specific Volume
###Code
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_30c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(111)
# plots interfacial tension for 30 C
ax = plot.sensitivity_manual(d, d_dft, '1k3f_30c', 'spec_vol', 'sigma', 3.17, data_folder, '', ['epsn_233-0~sigma_3-01'],
color='#1181B3', ms=10, m_ads='o', m_des='o', lw=4, ax=ax)
# folder of csv data files showing sensitivity of DFT predictions to PC-SAFT parameters
dft_sensitivity_folder = 'dft_pred//1k3f_60c_sensitivity//'
# loads dft predictions into a similarly structured dictionary
d_dft = dataproc.load_dft(dft_sensitivity_folder)
ax = plot.sensitivity_manual(d, d_dft, '1k3f_60c', 'spec_vol', 'sigma', 3.17, data_folder, '', [],
color='#B74A0D', ms=10, m_ads='o', m_des='o', lw=4, ax=ax)
ax.tick_params(labelsize=tk_fs)
ax.set_title('')
ax.set_xlabel(ax.xaxis.get_label().get_text(), fontsize=ax_fs)
ax.set_ylabel(ax.yaxis.get_label().get_text(), fontsize=ax_fs)
###Output
Analyzing dft_pred//1k3f_30c_sensitivity\1k3f_30c.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_229-3~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_233-0~sigma_3-01.csv
Analyzing dft_pred//1k3f_30c_sensitivity\epsn_263-0~sigma_3-17.csv
Analyzing dft_pred//1k3f_60c_sensitivity\1k3f_60c.csv
Analyzing dft_pred//1k3f_60c_sensitivity\epsn_229-3~sigma_3-01.csv
Analyzing dft_pred//1k3f_60c_sensitivity\epsn_233-0~sigma_3-01.csv
Analyzing dft_pred//1k3f_60c_sensitivity\epsn_263-0~sigma_3-17.csv
|
pkgs/bokeh-0.11.1-py27_0/Examples/bokeh/plotting/notebook/color_scatterplot.ipynb | ###Markdown
This IPython Notebook contains simple examples of the line function. To clear all previously rendered cell outputs, select from the menu: Cell -> All Output -> Clear
###Code
import numpy as np
from six.moves import zip
from bokeh.plotting import figure, show, output_notebook
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = ["#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)]
output_notebook()
TOOLS="resize,crosshair,pan,wheel_zoom,box_zoom,reset,tap,previewsave,box_select,poly_select,lasso_select"
p = figure(tools=TOOLS)
p.scatter(x,y, radius=radii, fill_color=colors, fill_alpha=0.6, line_color=None)
show(p)
###Output
_____no_output_____ |
m03_v01_store_sales_predict.ipynb | ###Markdown
0.0. IMPORTS
###Code
import math
import inflection
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.core.display import HTML
from IPython.display import Image
from datetime import datetime, timedelta
###Output
_____no_output_____
###Markdown
0.1. Helper Functions 0.2. Loading data
###Code
df_sale_raw = pd.read_csv( 'base de dados/train.csv', low_memory=False)
df_store_raw = pd.read_csv( 'base de dados/store.csv', low_memory=False)
df_sale_raw.sample()
df_store_raw.sample()
df_raw = pd.merge( df_sale_raw, df_store_raw, how='left', on='Store')
df_raw.sample()
###Output
_____no_output_____
###Markdown
1.0. STEP 01 - DATA DESCRIPTION
###Code
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
1.1. Rename Columns
###Code
df1.columns
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth',
'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
'Promo2SinceYear', 'PromoInterval']
snakecase = lambda x: inflection.underscore( x )
cols_new = list( map( snakecase, cols_old ) )
#rename
df1.columns = cols_new
df1.columns
###Output
_____no_output_____
###Markdown
1.2. Data Dimensions
###Code
print( f'Number of Rows: {df1.shape[0]}')
print( f'Number of Columns: {df1.shape[1]}')
###Output
Number of Rows: 1017209
Number of Columns: 18
###Markdown
1.3. Data Types
###Code
df1['date'] = pd.to_datetime( df1['date'] )
df1.dtypes
###Output
_____no_output_____
###Markdown
1.4. Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.5. Fill Out NA
###Code
df1.sample()
# competition_distance: NA likely means no nearby competitor, so impute a distance far beyond the observed maximum
df1['competition_distance'] = df1['competition_distance'].apply( lambda x: 200000.0 if math.isnan( x ) else x )
# competition_open_since_month
df1['competition_open_since_month'] = df1.apply( lambda x: x['date'].month if math.isnan( x['competition_open_since_month'] ) else x['competition_open_since_month'], axis = 1)
# competition_open_since_year
df1['competition_open_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan( x['competition_open_since_year'] ) else x['competition_open_since_year'], axis = 1)
# promo2_since_week
df1['promo2_since_week'] = df1.apply( lambda x: x['date'].week if math.isnan( x['promo2_since_week'] ) else x['promo2_since_week'], axis = 1)
# promo2_since_year
df1['promo2_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan( x['promo2_since_year'] ) else x['promo2_since_year'], axis = 1)
#promo_interval
# abbreviations must match the strings stored in the dataset's promo_interval column
month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sept', 10: 'Oct', 11: 'Nov', 12: 'Dec'}
df1['promo_interval'].fillna(0, inplace=True )
df1['month_map'] = df1['date'].dt.month.map( month_map )
df1['is_promo'] = df1[['promo_interval', 'month_map']].apply( lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split( ',' ) else 0, axis= 1 )
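# e.g. a row with promo_interval='Jan,Apr,Jul,Oct' and month_map='Apr' gets is_promo=1;
# with month_map='Feb' it gets 0, since the sale month is not one of the renewal months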
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.6. Change Types
###Code
df1.dtypes
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype( int )
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype( int )
df1['promo2_since_week'] = df1['promo2_since_week'].astype( int )
df1['promo2_since_year'] = df1['promo2_since_year'].astype( int )
###Output
_____no_output_____
###Markdown
1.7. Descriptive Statistics
###Code
num_attributes = df1.select_dtypes( include= [ 'int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude= [ 'int64', 'float64', 'datetime64[ns]'] )
cat_attributes.sample()
###Output
_____no_output_____
###Markdown
1.7.1. Numerical Attributes
###Code
# Central Tendency - mean, median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# Dispersion - std, min, max, range, skew, kurtois
d1 = pd.DataFrame( num_attributes.apply( np.std) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataFrame( num_attributes.apply( max ) ).T
d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T
# Concatenate
m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m
sns.distplot( df1['competition_distance'] )
###Output
/opt/anaconda3/envs/store_sales_predict/lib/python3.8/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
1.7.2. Categorical Attributes
###Code
cat_attributes.apply( lambda x: x.unique().shape[0])
aux1 = df1[(df1['state_holiday'] != '0' ) & (df1['sales'] > 0 )]
plt.subplot(1, 3, 1)
sns.boxplot( x='state_holiday' ,y='sales' , data=aux1 )
plt.subplot(1, 3, 2)
sns.boxplot( x='store_type' ,y='sales' , data=aux1 )
plt.subplot(1, 3, 3)
sns.boxplot( x='assortment' ,y='sales' , data=aux1 )
###Output
_____no_output_____
###Markdown
2.0. STEP 02 - FEATURE ENGINEERING
###Code
df2 = df1.copy()
Image( 'images/MIndMapHypothesis.png')
###Output
_____no_output_____
###Markdown
2.1. Hypothesis Creation 2.1.1 Store Hypotheses
**1.** Stores with a larger staff should sell more
**2.** Stores with a larger stock capacity should sell more
**3.** Larger stores should sell more
**4.** Stores with a larger assortment should sell more
**5.** Stores with closer competitors should sell less
**6.** Stores with longer-established competitors should sell more
2.1.2 Product Hypotheses
**1.** Stores that invest more in marketing should sell more
**2.** Stores with greater product exposure should sell more
**3.** Stores with lower-priced products should sell more
**4.** Stores with more aggressive promotions (bigger discounts) should sell more
**5.** Stores with promotions active for longer should sell more
**6.** Stores with more promotion days should sell more
**7.** Stores with more consecutive promotions should sell more
2.1.3. Time Hypotheses
**1.** Stores open during the Christmas holiday should sell more
**2.** Stores should sell more over the years
**3.** Stores should sell more in the second half of the year
**4.** Stores should sell more after the 10th of each month
**5.** Stores should sell more on weekends
**6.** Stores should sell less during school holidays
2.2 Hypothesis List
**1.** Stores with a larger assortment should sell more
**2.** Stores with closer competitors should sell less
**3.** Stores with longer-established competitors should sell more
**4.** Stores with promotions active for longer should sell more
**5.** Stores with more promotion days should sell more
**6.** Stores with more consecutive promotions should sell more
**7.** Stores open during the Christmas holiday should sell more
**8.** Stores should sell more over the years
**9.** Stores should sell more in the second half of the year
**10.** Stores should sell more after the 10th of each month
**11.** Stores should sell more on weekends
**12.** Stores should sell less during school holidays
2.3. Feature Engineering
###Code
# year
df2['year'] = df2['date'].dt.year
# month
df2['month'] = df2['date'].dt.month
# day
df2['day'] = df2['date'].dt.day
# week of year
df2['week_of_year'] = df2['date'].dt.weekofyear
# year week
df2['year_week'] = df2['date'].dt.strftime( '%Y-%W' )
# competition since
df2['competition_since'] = df2.apply( lambda x: datetime( year=x['competition_open_since_year'], month=x['competition_open_since_month'], day=1), axis= 1 )
df2['competition_time_month'] = ( ( df2['date'] - df2['competition_since'] )/30 ).apply(lambda x: x.days).astype( int )
# promo since
df2['promo_since'] = df2['promo2_since_year'].astype( str ) + '-' + df2['promo2_since_week'].astype( str )
df2['promo_since'] = df2['promo_since'].apply( lambda x: datetime.strptime( x + '-1', '%Y-%W-%w' ) - timedelta( days=7 ) )
df2['promo_time_week'] = ( ( df2['date'] - df2['promo_since'] )/7 ).apply( lambda x: x.days ).astype( int )
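# strptime with '%Y-%W-%w' plus the trailing '-1' resolves each year-week pair to the
# Monday of that week; the extra timedelta(days=7) then shifts the start back one week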
# # assortment
#df2['assortment'] = df2['assortment'].apply( lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended' )
# state holiday
df2['state_holiday'] = df2['state_holiday'].apply(lambda x: 'public_holiday' if x == 'a' else 'easter_holiday' if x == 'b' else 'christmas' if x == 'c' else 'regular_day' )
df2.head().T
###Output
_____no_output_____
###Markdown
3.0. STEP 03 - VARIABLE FILTERING
###Code
df3 = df2.copy()
df3.head().T
###Output
_____no_output_____
###Markdown
3.1. Row Filtering
###Code
df3 = df3[(df3['open'] != 0) & (df3['sales'] > 0 )]
###Output
_____no_output_____
###Markdown
3.2. Column Selection
###Code
cols_drop = ['customers' , 'open', 'promo_interval', 'month_map']
df3 = df3.drop( cols_drop, axis= 1 )
df3.columns
###Output
_____no_output_____ |
Untitled23.ipynb | ###Markdown
The set_index() and reset_index() methods
###Code
import pandas as pd
bond=pd.read_csv("jamesbond.csv")
bond.set_index(["Film"],inplace=True)
bond.head()
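# reset_index() below returns a new DataFrame; 'bond' itself keeps 'Film' as its index
# unless inplace=True is passed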
bond.reset_index().head()
###Output
_____no_output_____
###Markdown
prime numbers
###Code
lower = 1
upper = 200
print("Prime numbers between", lower, "and", upper, "are:")
for num in range(lower, upper + 1):
if num > 1:
for i in range(2, num):
if (num % i) == 0:
break
else:
print(num)
###Output
Prime numbers between 1 and 200 are:
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
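As a side note (a sketch, not part of the original cell): trial division only needs to test divisors up to the square root of num, which is much faster for larger ranges.
```python
def is_prime(num):
    if num < 2:
        return False
    # It is enough to test divisors up to sqrt(num)
    for i in range(2, int(num ** 0.5) + 1):
        if num % i == 0:
            return False
    return True

print([n for n in range(1, 50) if is_prime(n)])
```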
###Markdown
safe landing of plane
###Code
alt = 4500
if alt <= 1000:
print("safe land")
elif alt > 5000:
print("turn around")
else:
print("bring back to 1000")
###Output
bring back to 1000
###Markdown
Electrostatic force between charges. Coulomb's force between two charges q1 and q2 separated by a distance r is given as $F = k q_1 q_2 / r^2$, where $k = 9 \times 10^9\ \mathrm{N\,m^2/C^2}$. Write a function to calculate Coulomb's force for a given distance. Use lists to calculate the variation of Coulomb's force over a range of distances between the two charges. Make a plot that shows the variation of Coulomb's force with distance using the lists. Let the two charges be q1 = 3 C and q2 = 5 C.
###Code
k=9*10**9
def force(r, q1, q2): # function to calculate Coulomb's force
return k*q1*q2/(r**2)
# Range of distances
F=[]
R=[]
q1=3
q2=5
for r in range(1,50):
F.append(force(r,q1,q2))
R.append(r)
#plot of the range of distances vs corresponding force
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(R,F)
###Output
_____no_output_____ |
Notebook_2/Notebook_2.ipynb | ###Markdown
Notebook 2: Requesting information After getting the access token as well as refreshing the token, we started requesting information for our analysis. Just to remind: our four goals are to find the top twenty friends who like our posts the most, demographics for the places we have been tagged, reactions for every Trump post, and lastly, the events on Facebook. My partner and I divided the tasks in half and each worked on one half in order to save time. I started working on getting the top friends who like our posts the most and the places where we have been tagged, and my partner worked on the other half. Before requesting the information, we played around with the Graph API Explorer to see what kind of information we could get from Facebook. Then we started with the first question by using Python to get the list of photos and see what format it came in. We used the GET method to get the list of photos by requesting the URL "https://graph.facebook.com/me?fields=photos.limit(200)". We would then decode the result and convert it into JSON. As soon as it was in JSON, we created a list containing all the photo ids so that we could get the number of reactions for them. This was due to the fact that by passing only the post id, we can get many kinds of information, such as likes, reactions and even comments, over the Facebook Graph API. After creating the list, we started to pass each item in the list (the photo ids) into a GET request to obtain the reaction types. We then struggled with counting the total number of likes per friend over all photos. Therefore, we had to stop for a moment and design an algorithm that could count the total number of likes from each friend. Eventually, we figured that out and created a dictionary in which, for every friend, it shows the total number of likes they gave across all of our photos. Lastly, we wrapped up the first question by generating a dictionary that contains the top friends who like our posts the most. Furthermore, we also imported the dictionary into a dataframe and then into a CSV to prepare for the third notebook. As explained in Notebook 1: Facebook does not provide a way to refresh its tokens once expired. To get a new token, the login flow must be followed again to obtain a short-lived token, which needs to be exchanged, once again, for a long-lived token (a sketch of this exchange follows below). This is expressed in the Facebook documentation as follows: "Even the long-lived access token will eventually expire. At any point, you can generate a new long-lived token by sending the person back to the login flow used by your web app - note that the person will not actually need to login again, they have already authorized your app, so they will immediately redirect back to your app from the login flow with a refreshed token" For this notebook, we are using the long-lived token, which lasts for over 2 months.
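As a rough illustration of the exchange step described above (a sketch only; APP_ID, APP_SECRET and SHORT_LIVED_TOKEN are placeholder values, not credentials from this project), the short-lived token can be traded for a long-lived one with a single GET request:
```python
import requests

# Hypothetical credentials; replace with your own app's values
APP_ID = "your-app-id"
APP_SECRET = "your-app-secret"
SHORT_LIVED_TOKEN = "token-from-the-login-flow"

exchange_url = "https://graph.facebook.com/oauth/access_token"
params = {"grant_type": "fb_exchange_token",
          "client_id": APP_ID,
          "client_secret": APP_SECRET,
          "fb_exchange_token": SHORT_LIVED_TOKEN}
response = requests.get(exchange_url, params=params)
print(response.text)  # the response carries the new long-lived access_token
```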
###Code
import requests
import importlib
import json
import pandas as pd
import keys_project
importlib.reload(keys_project)
keychain = keys_project.keychain
d={}
d['access_token']=keychain['facebook']['access_token'] # Getting the long-lived access token
###Output
_____no_output_____
###Markdown
Below are all of the helper functions that we have used. The return type of a response from the Graph API is not easy to parse, and hence we convert all responses to JSON. The other functions supplement our data requests and modifications as described in the program-level docs.
###Code
def response_to_json(response):
'''
This function converts the response into json format
Parameter:
response: the request response to convert to json
Return:
the response in json
'''
string_response = response.content.decode('utf-8') #decoding the response to string
return json.loads(string_response) # converting the string to json
def get_reaction_count(object_id,reaction_type):
'''
This function gets the total reactions for each post
Parameter:
object_id: the id of the object to get reaction data
reaction_type: the reaction_type to retrieve from NONE, LIKE, LOVE, WOW, HAHA, SAD, ANGRY, THANKFUL
Return:
the number of reactions on the request object of type reaction_type
'''
request_url="https://graph.facebook.com/"+str(object_id)+\
"/reactions?summary=true&type="+reaction_type # getting reaction summary data
response= requests.get(request_url,params=d)
response_json=response_to_json(response)
return response_json['summary']['total_count'] #getting the count for reaction reaction_type
def most_frequent(myDict,number_top):
'''
This function creates a dictionary which includes the friend's name and the number of likes
Parameter:
myDict: A dictionary with the key as facebook friend's name and value of the number of times they liked the upload type
number_top: The number of top friends who have made likes
Return:
A dictionary of the top 20 friends
'''
# Frequency for top 20 people who like your upload_type
value = []
for key in myDict:
value.append(myDict[key])
value = sorted(value,reverse=True)
values = value[0:number_top]
most_liked_Dict = {}
for key in myDict:
if myDict[key] in values:
most_liked_Dict[key] = myDict[key]
return most_liked_Dict
def feed_(feed_id):
'''
This function get the feed data from Facebook
Parameter:
feed_id:the id of the feed in string
Return:
a dictionary of feed data
'''
request_url="https://graph.facebook.com/"+feed_id+\
"?fields=type,name,created_time,status_type,shares" #creating the url based on the feed_id
response= requests.get(request_url,params=d)
response_json=response_to_json(response)
return response_json
def to_csv(filename,df):
'''
This function creates a CSV file. It exports data from a pandas dataframe to the file.
Parameters:
String of filename desired, pandas dataframe
Returns:
None
'''
df.to_csv(filename,encoding='utf-8') # exporting to a csv file
###Output
_____no_output_____
###Markdown
Last but not least, we imported the dictionary into a CSV file for later analysis in Notebook 3. This question took us quite a long time; however, the later questions were pretty straightforward and similar to this one. Question: Getting the number of Facebook reactions of each reaction type for a particular upload type. This function takes a user_id, which can be any Facebook user or page, a limit, which is the number of uploads we want to check, and an upload type, which is a Facebook upload object such as pictures or posts. By offering these parameters, we provide flexibility in the kind of data received. Initially, we used Facebook's Graph API Explorer to test our requests. The link to the explorer is: https://developers.facebook.com/tools/explorer/. In the Facebook graph, information is composed in the following format: 1. nodes: "things" such as a User, a Photo, a Page, a Comment 2. edges: the connections between those "things", such as a Page's Photos, or a Photo's Comments 3. fields: info about those "things", such as a person's birthday, or the name of a Page. Understanding how to query all three of these parts of the social graph was important in obtaining good data. For this question, we first had to get a 'User' or 'Page' node, from which we had to query the node's edges to find its uploads (posts or photos). Once we got the ID associated with each edge, we used the fields of those edges to get reaction counts (a small sketch of this node/edge/field anatomy follows below). For our analysis, we get the reaction counts for Donald Trump and Hillary Clinton to compare their social media presence and following. For each of our questions, we also had to modify our JSON response to clear it of noise and get it into the format accepted by a pandas dataframe.
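To make the node/edge/field anatomy concrete, here is a minimal sketch (the node, edge and field names are illustrative assumptions, not values taken from this project):
```python
node = "me"                  # node: a User or Page
edge = "posts"               # edge: that node's posts
fields = "id,created_time"   # fields: info about each post
request_url = ("https://graph.facebook.com/" + node + "/" + edge
               + "?fields=" + fields)
# response = requests.get(request_url, params=d)  # d holds the access token, as above
```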
###Code
def reaction_statistics(id_,limit,fb_upload_type):
'''
This function gets the total reactions of each feed
ParameterL
id_: a string id to a facebook object such as a page or person
limit: the limit to the numner of posts obtained from the request in string
fb_upload_type: a valid type of upload as specified in FB docs: photo, post, videos etc in string
Return:
a list of dictionary of the number of each different kind of reaction for each post
'''
request_url="https://graph.facebook.com/"+id_+"?fields="+fb_upload_type+".limit("+limit+"){id}" #creating request url
response= requests.get(request_url,params=d)
response_json=response_to_json(response) # converting response to json
user=[]
reaction_type=['LIKE','LOVE','WOW','HAHA','SAD','ANGRY','THANKFUL']
for object_ in response_json[fb_upload_type]['data']:
buffer={}
for type_ in reaction_type:
buffer[type_]=get_reaction_count(object_['id'],type_) #getting the count of each reaction
buffer['id']=object_['id']
user.append(buffer)
return user
donald_trump=pd.DataFrame(reaction_statistics('153080620724','5','posts'))
hillary_clinton=pd.DataFrame(reaction_statistics('889307941125736','5','posts'))
donald_trump.head(5)
hillary_clinton.head(5)
###Output
_____no_output_____
###Markdown
Hence, for each row we can see the upload ID identifying the post or photo and the number of reactions of each type for that upload. QUESTION: Obtaining feed data to analyze the kinds, times and popularity of a user or page's feed. In this question, we get feed information for the artist Bob Dylan (though our function is abstracted to get information for any user whose feed is publicly available, or a user who has authenticated us through OAuth 2.0). After obtaining the user ID, we used the Facebook Graph API Explorer to see the response contents of a request for the fields of the user's feed. There were various kinds of data available, which can also be found in FB's docs (https://developers.facebook.com/docs/graph-api/reference/v2.11/user/feed). From the different fields we picked ones which would be interesting to look at, such as the number of shares on each feed post, the times and dates of the posts (to see the frequency of the user's FB usage), the kind of post (status, story, video etc.) and other such information. Once again, we had to modify the JSON response so that it would be accepted by a pandas DF.
###Code
def feed_data(object_id,limit):
'''
This function generates a list of dictionaries for each feed of information
Parameters:
object_id: the id of the object posting events in string
limit: the number of most recent events in string
Return:
a list of dictionaries where each data is a single feed of information
'''
request_url="https://graph.facebook.com/"+object_id+"?fields=feed.limit("+str(limit)+"){id}"
response= requests.get(request_url,params=d)
response_json=response_to_json(response) # converting response to json
feed_list=[] #creaing an empty list to hold feed dictionaries
for feed_id in response_json['feed']['data']:
feed_info={}
feed_info= feed_(feed_id['id'])
feed_info['share_count']=feed_info['shares']['count']
del feed_info['shares']
feed_list.append(feed_info)
return feed_list #returning the feed list
Bob_Dylan=pd.DataFrame(feed_data('153080620724','10'))
Bob_Dylan.head(5)
###Output
_____no_output_____
###Markdown
Question: Get the top twenty friends by how often they like our posts. The cell below contains our code for the first question: the top friends who like our posts the most. First, we created a function to convert the response into JSON format, since we would be making a lot of requests and creating dictionaries from them. This was quite easy and did not take much of our time. Next, we wrote a function to get the total number of reactions. We did this by passing the ids of the objects (the posts) into a GET request so that it could get the information for all objects. Then, we wrote another function called most_frequent to get the number of likes from each friend. This function took most of our time, since we had to design an algorithm to sum up the total likes from every friend. Once this function worked, the rest was easier, since we only had to put the results in a dictionary and take the top 20 by frequency. Lastly, we imported the top-20 frequencies into a dataframe and then into a CSV. Another problem we struggled with in this question was getting the top 20 frequencies. First, after getting the total likes for everyone, we appended the like counts to a list. Then, we sorted the list from the most likes to the least and took the top 20. Then, we checked whether the names and like counts in the total-likes dictionary were also in the top-20 list. If they were, we put them into a new dictionary whose keys are names and whose values are numbers of likes. (A more compact alternative using collections.Counter is sketched below.) Besides the frequency and total-likes algorithm that we designed, the other functions were quite straightforward. The function takes a Facebook object id, which could be a user or page, the number of posts or photos we want to check, and the type of upload we want to check for. Hence, we offer a good amount of flexibility.
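As a side note, here is a minimal sketch (not from the original notebooks) of how the counting-and-ranking step could be compressed with the standard library; `names` is assumed to be a flat list holding one friend name per like:
```python
from collections import Counter

def top_likers(names, n=20):
    # Count likes per friend and keep the n most frequent
    return dict(Counter(names).most_common(n))

# usage sketch: top_likers(['Ann', 'Bob', 'Ann'], n=2) -> {'Ann': 2, 'Bob': 1}
```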
###Code
def friend_likes(id_,limit,fb_upload_type):
'''
This function gets a dictionary for each kind of reactions for each post
Parameter:
id_: a string id to a facebook object such as a page or person
limit: the limit to the numner of posts obtained from the request in string
fb_upload_type: a valid type of upload as specified in FB docs: photo, post, videos etc in string
Return:
a list of dictionary of the number of each different kind of reaction for each post
'''
request_url="https://graph.facebook.com/"+id_+"?fields="+fb_upload_type+".limit("+limit+"){id}"
response= requests.get(request_url,params=d)
photoID_list=response_to_json(response) # converting response to json
myDict={} # Dictionary that contains the frequency of likes for each friend
for object_ in photoID_list[fb_upload_type]['data']:
response=requests.get("https://graph.facebook.com/"+object_['id']+"/reactions",params=d) # Get the likes data
response_json=response_to_json(response)
        # For each upload, get the list of friends and count how many times each friend liked it
for name_dict in response_json['data']:
name=name_dict['name']
            if name not in myDict.keys(): # Check whether this friend has liked anything before
myDict[name] = 1
else:
myDict[name]= myDict[name]+1
return most_frequent(myDict,20)
friend_likes('me','200','posts')
# Getting the like frequency for top 20 friends for past 200 posts
df_likes_posts= pd.DataFrame([friend_likes('me','200','posts')])
# Getting the like frequency for top 20 friends for past 200 posts
df_likes_photo= pd.DataFrame([friend_likes('me','200','photos')])
to_csv('df_likes_posts.csv',df_likes_posts)
to_csv('df_likes_photos.csv',df_likes_photo)
df_likes_posts
df_likes_photo
###Output
_____no_output_____
###Markdown
Question: Demographic analysis of places where we have been tagged. In this question, we want to explore the places we have travelled to and been tagged at on Facebook. We want to create a demographic plot that shows where we have been based on the latitudes and longitudes (a minimal plotting sketch follows below). Since we already knew how to perform a GET request from the previous questions, this question did not take us a lot of time. We did this question by writing a function called tagged_data. First, this function takes an object_id, the id of the user (or page), as a parameter. The parameter is passed into the GET request to fetch the tagged places for that id. Once the request was successful, we converted the response into JSON format and iterated over it. We used a list comprehension to create a list that includes the data for places where we have been tagged. Then, for each location in the list, we created a dictionary with the latitude, longitude and location name as the keys and their values as the values. We then appended each tagged-location dictionary to a list.
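The full demographic plot is built in Notebook 3; as a quick preview, a minimal matplotlib sketch (assuming the tagged_data function defined in the next cell) could look like this:
```python
import matplotlib.pyplot as plt

def plot_tagged_places(tagged_list):
    # Scatter the tagged places by coordinate; crude, but enough for a preview
    lons = [p['longitude'] for p in tagged_list]
    lats = [p['latitude'] for p in tagged_list]
    plt.scatter(lons, lats)
    plt.xlabel('longitude')
    plt.ylabel('latitude')
    plt.title('Places we have been tagged')
    plt.show()
```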
###Code
def tagged_data(object_id):
'''
This function generates a dictionary which includes the longitudes, latitudes, and names for places.
Parameter:
id_: a string id to a facebook object such as a page or person
Return:
a list of dictionaries of latitude,longitude, country and name of tagged places
'''
request_url="https://graph.facebook.com/"+object_id+"?fields=tagged_places.limit(200)"
response= requests.get(request_url,params=d)
place_list=response_to_json(response) # converting response to json
    tagged_place_list = [element['place'] for element in place_list['tagged_places']['data']] # Create a list of tagged place records
tagged_list=[]
for place in tagged_place_list:
buffer_dict={} #creating a buffer dictionary
buffer_dict['latitude']= place['location']['latitude']
buffer_dict['longitude']= place['location']['longitude']
buffer_dict['name']=place['name']
tagged_list.append(buffer_dict) # appending each tagged location dictionary to a list
return tagged_list
###Output
_____no_output_____
###Markdown
We create a dataframe containing the latitude, longitude and name data, and then export it to a CSV file.
###Code
df_tagged_places= pd.DataFrame(tagged_data('me'))
to_csv('df_tagged_places.csv',df_tagged_places)
###Output
_____no_output_____
###Markdown
We then show the first ten rows of this dataframe.
###Code
df_tagged_places.head(10)
###Output
_____no_output_____ |
demo/predict-taxi-trip-duration-nb/develop_ml_application_tour.ipynb | ###Markdown
Launching an AI application rapidly with a machine-learning database. Most of us take taxis from time to time. The trip duration from the pickup point to the destination can depend on many factors, such as the weather or whether it is a Friday; producing an accurate travel-time estimate is a complex problem for a person, but often a simple one for a machine. Today's task is to develop a real-time intelligent application that predicts taxi trip duration with a machine learning model. The whole application is developed in a [notebook](http://ipython.org/notebook.html). Initializing the environment: the initialization process includes installing fedb and the related runtime environment; the initialization steps can be found at https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/init.sh
###Code
!cd demo && sh init.sh
###Output
_____no_output_____
###Markdown
Importing historical trip data into fedb: computing time-series features with fedb requires historical data, so we import the historical trip data into fedb so that real-time inference can use it for feature computation. The import code can be found at https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/import.py
###Code
!cd demo && python3 import.py
###Output
_____no_output_____
###Markdown
Training the model on the trip data: the model is trained on the labelled data. The code used for this task: * training script https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/train_sql.py * training data https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/data/taxi_tour_table_train_simple.snappy.parquet The task finally produces a model.txt
###Code
!cd demo && python3 train.py ./fe.sql /tmp/model.txt
###Output
_____no_output_____
###Markdown
Building a real-time inference HTTP service that connects to fedb with the trained model: based on the model generated in the previous step and the historical data in fedb, we build a real-time inference service. The service code can be found at https://github.com/4paradigm/DemoApps/blob/main/predict-taxi-trip-duration-nb/demo/predict_server.py
###Code
!cd demo && sh start_predict_server.sh ./fe.sql 8887 /tmp/model.txt
###Output
_____no_output_____
###Markdown
Sending an inference request over HTTP: the whole request is very simple; the code is as follows
```python
url = "http://127.0.0.1:8887/predict"
req = {"id": "id0376262",
       "vendor_id": 1,
       "pickup_datetime": 1467302350000,
       "dropoff_datetime": 1467304896000,
       "passenger_count": 2,
       "pickup_longitude": -73.873093,
       "pickup_latitude": 40.774097,
       "dropoff_longitude": -73.926704,
       "dropoff_latitude": 40.856739,
       "store_and_fwd_flag": "N",
       "trip_duration": 1}
r = requests.post(url, json=req)
print(r.text)
print("Congraduation! You have finished the task.")
tmp = os.urandom(44)
secret_key = base64.b64encode(tmp)
print("Your Key:" + str(secret_key))
```
###Code
!cd demo && python3 predict.py
###Output
_____no_output_____ |
notebooks/22_0_L_ExploratoryDataAnalysis.ipynb | ###Markdown
Imports
###Code
# Pandas, Numpy and Matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import All nltk
import nltk
#nltk.download_shell()
###Output
_____no_output_____
###Markdown
Get tagged words
###Code
# Set name of file
filename = '../data/interim/disease_tags.pkl'
# Read to DataFrame
df = pd.read_pickle(filename)
# Echo
df.head()
# Drop nulls, exclude start/end/disease_tag columns
tags = df['Id ont unique_id'.split()].dropna(axis=0)
# Rename fields, create combined field ont:unique_id
tags['summary_id'] = tags['Id']
tags['disease_id'] = tags['ont']+':'+tags['unique_id']
tags['year'] = 2017 #pd.Series(np.random.randint(2000,2019,tags.shape[0]))
# Leave only important fields
tags = tags['year summary_id disease_id'.split()]
# Drop duplicates
tags = tags.drop_duplicates(subset='summary_id disease_id'.split())
# Echo
tags.head(10)
# Drop nulls, exclude start/end/disease_tag columns
tags = df['Id ont unique_id'.split()].dropna(axis=0)
# Rename fields, create combined field ont:unique_id
tags['summary_id'] = tags['Id']
tags['disease_id'] = tags['ont']+':'+tags['unique_id']
tags['year'] = 2017 #pd.Series(np.random.randint(2000,2019,tags.shape[0]))
# Leave only important fields
tags = tags['year summary_id disease_id'.split()]
# Echo
tags.head(10)
# Set strength of duplicates
tags['combined_id'] = tags['summary_id'] +'_'+ tags['disease_id']
tags.head()
###Output
_____no_output_____
###Markdown
Create links between tags in same summary
###Code
links = set()
for index, record in df.iterrows():
for tag1 in record['Tags']:
for tag2 in record['Tags']:
links.add((tag1, tag2))
len(links)
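# A side note (not part of the original analysis): the nested loop above also
# emits self-pairs (tag, tag) and both orderings of every pair. If only
# unordered, distinct pairs were wanted, itertools.combinations avoids both:
#
# from itertools import combinations
# links = {pair for _, rec in df.iterrows()
#          for pair in combinations(rec['Tags'], 2)}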
import csv
with open('Links_250.csv', 'w') as outfile:
w = csv.writer(outfile, delimiter=',', quotechar='"')
w.writerow(['Source','Target'])
for element in links:
#print(list(element))
w.writerow(element)
###Output
_____no_output_____ |
Task 2/Task 2.ipynb | ###Markdown
Jieba word segmentation
###Code
import jieba
seg_list = jieba.cut("我来到北京清华大学")
print(' '.join(seg_list))
###Output
Building prefix dict from the default dictionary ...
Dumping model to file cache C:\Users\Jan\AppData\Local\Temp\jieba.cache
Loading model cost 0.935 seconds.
Prefix dict has been built succesfully.
###Markdown
Custom user dictionary
###Code
jieba.load_userdict("dict.txt")
import jieba.posseg as pseg
test_sent = (
"李小福是创新办主任也是云计算方面的专家; 什么是八一双鹿\n"
"例如我输入一个带“韩玉赏鉴”的标题，在自定义词库中也增加了此词为N类\n"
"「台中」正確應該不會被切開。mac上可分出「石墨烯」；此時又可以分出來凱特琳了。"
)
words = jieba.cut(test_sent)
' '.join(words)
###Output
_____no_output_____
###Markdown
Keyword extraction based on the TF-IDF algorithm
###Code
sentence = """
《复仇者联盟4》上映16天，连续16天获得单日票房冠军，《何以为家》以优质的口碑正在冲击3亿票房，但市场大盘却再次回落至4千万元一天的水平，随着影片热度逐渐退却，靠它们“续命”的影院也重回经营困境。
"""
import jieba.analyse
jieba.analyse.extract_tags(sentence, topK=20, withWeight=False, allowPOS=())
###Output
_____no_output_____
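As a small variation (a sketch, reusing the `sentence` defined above): passing withWeight=True makes extract_tags return (keyword, weight) pairs instead of bare keywords.
```python
for keyword, weight in jieba.analyse.extract_tags(sentence, topK=10, withWeight=True):
    print(keyword, round(weight, 4))
```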
###Markdown
Keyword extraction based on the TextRank algorithm
###Code
jieba.analyse.textrank(sentence, topK=20, withWeight=False, allowPOS=('ns', 'n', 'vn', 'v'))
###Output
_____no_output_____
###Markdown
Task2: Supervised Machine Learning By MITHIl Objective: In this regression task we will predict the percentage of marks that a student is expected to score based upon the number of hours they studied. This is a simple linear regression task as it involves just two variables. Importing the necessary libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDRegressor
from tqdm import tqdm_notebook
from sklearn import metrics
import warnings
warnings.filterwarnings('ignore')
from math import sqrt
###Output
_____no_output_____
###Markdown
Getting the data We load the data provided in the dataset into a pandas dataframe.
###Code
data=pd.read_csv("student_scores.csv")
data.head()
data.describe()
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 25 entries, 0 to 24
Data columns (total 2 columns):
Hours 25 non-null float64
Scores 25 non-null int64
dtypes: float64(1), int64(1)
memory usage: 480.0 bytes
###Markdown
**Observation:** We see that the dataset consists of two columns, **Hours** and **Scores**. Per our objective, we need to **predict scores** when the number of hours is given by the user. - From the description of the data we can see that the mean of Hours is **5.01** and the mean of Scores is around **51**. - The Hours column has **float** as its data type with **no null values**. - The Scores column has **integer** as its data type with **no null values**. Exploratory data analysis: In this section we implement univariate analysis and visualise the data for the given dataset. Hours column: We will analyze the Hours column to see its distribution and possibly eliminate any outliers.
###Code
plt.hist(data['Hours'])
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:** We can see that most of the students in the data spend **2 to 2.5** hours, while the fewest students spend **3.5 to 4.5** hours and **5.7 to 6.7** hours.
###Code
plt.boxplot(data['Hours'])
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:** Fortunately there are no outliers in this column as we can see from the above diagram. Scores We will analyze the scores column since this is our models labels.
###Code
plt.hist(data['Scores'])
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:** We can see that most of the students scored around 25 to 32 marks, which is consistent with the hours distribution seen above.
###Code
plt.boxplot(data['Scores'])
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:** They are no outliers from the above boxxplot as we can see. Visualization of dot plot of the dataset.
###Code
plt.plot(data['Hours'],data['Scores'], 'bo')
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:** We can see that the data points from hours and study are linear and we can apply linear models to the given data. Preparing the dataset
###Code
X=data.drop('Scores',axis=1)
X.head()
y=data['Scores']
y.head()
###Output
_____no_output_____
###Markdown
Splitting the data into train test and validation
###Code
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.20,random_state=0)
X_train,X_val,y_train,y_val=train_test_split(X_train,y_train,test_size=0.20,random_state=0)
###Output
_____no_output_____
###Markdown
Training our model: We will be using linear regression through SGDRegressor for hyperparameter tuning. Here we tune alpha (the regularization parameter) as the hyperparameter to get an optimal regression model. The evaluation metric we will be using is RMSE (Root Mean Squared Error). A short note on feature scaling follows below.
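SGD-based optimizers are sensitive to the scale of the inputs, so standardizing the feature first often improves convergence. A minimal sketch (not part of the original workflow; the alpha value is only an example):
```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor

# Scale the single feature, then fit the regressor on the scaled values
scaled_sgd = make_pipeline(StandardScaler(), SGDRegressor(alpha=0.0001))
scaled_sgd.fit(X_train, y_train)  # same fit/predict interface as a bare SGDRegressor
```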
###Code
def LinearRegression(X_train,y_train,X_val,y_val):
alphas=[0.00001,0.0001,0.001,0.01,0.1]
models={}
for i in tqdm_notebook(alphas):
model=SGDRegressor(alpha=i)
model.fit(X_train,y_train)
train_predictions=model.predict(X_train)
val_predictions=model.predict(X_val)
train_score=sqrt(metrics.mean_squared_error(y_train,train_predictions))
val_score=sqrt(metrics.mean_squared_error(y_val,val_predictions))
print("For alpha={} the train mean sqaured error and validation mean squared error is {} and {} respectively.".format(i,train_score,val_score))
models[val_score]=model
model_loss=min(models.keys())
#print(model_loss)
print("Optimal model:")
print(models[model_loss])
return models[model_loss]
best_model=LinearRegression(X_train,y_train,X_val,y_val)
test_predictions=best_model.predict(X_test)
test_score=sqrt(metrics.mean_squared_error(y_test,test_predictions))
print("The mean squared error for our test data is {}".format(test_score))
line = best_model.coef_*X+best_model.intercept_
plt.scatter(X, y)
plt.plot(X, line);
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:** We can see that the model's line fits the linear data well, which suggests visually that our model is accurate. Question: What will be the predicted score if a student studies for 9.25 hrs per day?
###Code
query_ans=best_model.predict([[9.25],])
print("According to the algorithm if a student studied for 9.25 hours he would score ",query_ans[0],"%")
###Output
According to the algorithm if a student studied for 9.25 hours he would score 93.82283270370657 %
###Markdown
Prediction Using Unsupervised ML (From the given "Iris" dataset, predict the optimum number of clusters and represent it visually.) Done By :- Kanika Joshi **K-Means**
###Code
K-means is a centroid-based (distance-based) algorithm, where we calculate distances to assign a point to a cluster.
In K-Means, each cluster is associated with a centroid.
###Output
_____no_output_____
###Markdown
**Importing Required Libraries...**
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets
###Output
_____no_output_____
###Markdown
**Loading the dataset**
###Code
iris = datasets.load_iris()
iris_df=pd.DataFrame(iris.data,columns=iris.feature_names)
iris.feature_names
#VIsualizing the first 5 rows
print(iris_df.head())
###Output
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
###Markdown
**Determining the Optimum Number Of Clusters Using the Elbow Method...**
###Code
# Finding the optimum number of clusters for k-means classification
x = iris_df.iloc[:, [0, 1, 2, 3]].values
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
#Plotting the results onto a line graph,
#allowing us to observe 'The elbow'
plt.plot(range(1, 11), wcss,color="red")
plt.grid(color='blue')
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS') #Within cluster sum of squares
plt.show()
###Output
_____no_output_____
###Markdown
**Comments and conclusion: THE OPTIMUM NUMBER OF CLUSTERS IS 3** Creating the K-means Classifier...
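As a complementary check (a sketch, assuming the feature matrix x defined above): the silhouette score also peaks near the right number of clusters; higher values are better.
```python
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans

for k in range(2, 7):
    labels = KMeans(n_clusters=k, init='k-means++', random_state=0).fit_predict(x)
    print(k, round(silhouette_score(x, labels), 3))
```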
###Code
# Applying kmeans to the dataset / Creating the kmeans classifier
kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)
###Output
_____no_output_____
###Markdown
Ploting the Clusters...
###Code
#Visualising the clusters
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
#Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1], s = 100, c = 'cyan', label = 'Centroids')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Comments and conclusion: This shows the clusters present in the given dataset among species setosa, versicolour, virginica.**
###Code
# 3d scatterplot using matplotlib
fig = plt.figure(figsize = (15,15))
ax = fig.add_subplot(111, projection='3d')
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 50, c = 'red', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 50, c = 'blue', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s = 50, c = 'green', label = 'Iris-virginica')
#Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1], s = 50, c = 'cyan', label = 'Centroids')
plt.show()
###Output
_____no_output_____
###Markdown
Labeling the Predictions...
###Code
#considering
#0 Corresponds to 'Iris-setosa'
#1 to 'Iris-versicolour'
#2 to 'Iris-virginica'
y_kmeans = np.where(y_kmeans==0, 'Iris-setosa', y_kmeans)
y_kmeans = np.where(y_kmeans=='1', 'Iris-versicolour', y_kmeans)
y_kmeans = np.where(y_kmeans=='2', 'Iris-virginica', y_kmeans)
###Output
_____no_output_____
###Markdown
Adding the Prediction to the Dataset...
###Code
data_with_clusters = iris_df.copy()
data_with_clusters["Cluster"] = y_kmeans
print(data_with_clusters.head(5))
###Output
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) \
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
Cluster
0 Iris-versicolour
1 Iris-versicolour
2 Iris-versicolour
3 Iris-versicolour
4 Iris-versicolour
###Markdown
DATA VISUALISATION Barplot- Cluster Distribution...
###Code
# Bar plot
sns.set_style('darkgrid')
sns.barplot(x = data_with_clusters["Cluster"] .unique(),
y = data_with_clusters["Cluster"] .value_counts(),
palette=sns.color_palette(["#00008B", "#800080", "#FF00FF"]));
plt.xlabel("Distribution" , fontsize = 15)
plt.ylabel("Cluster", fontsize = 15)
plt.show()
###Output
_____no_output_____
###Markdown
**Bar Plot Comments and conclusion**
###Code
There are around 62 Iris-versicolour, 50 Iris-virginica and roughly 38 Iris-setosa samples in the dataset as predicted.
###Output
_____no_output_____
###Markdown
Violin plot...
###Code
sns.violinplot(x="Cluster",y="petal width (cm)",data=data_with_clusters)
plt.show()
sns.violinplot(x="Cluster",y="sepal width (cm)",data=data_with_clusters)
plt.show()
sns.violinplot(x="Cluster",y="petal length (cm)",data=data_with_clusters)
plt.show()
sns.violinplot(x="Cluster",y="sepal length (cm)",data=data_with_clusters)
plt.show()
###Output
_____no_output_____
###Markdown
Pairplot...
###Code
### hue = species colours plot as per species
### It will give 3 colours in the plot
sns.set_style('whitegrid') ### Sets grid style
sns.pairplot(data_with_clusters,hue = 'Cluster',palette="brg");
###Output
_____no_output_____
###Markdown
Conclusion
###Code
1. Petal length and petal width seem to be positively correlated (they appear to have a linear relationship).
2. Iris-Setosa seems to have smaller petal length and petal width compared to the others.
3. Looking at the overall picture, it seems that Iris-Setosa has smaller dimensions than the other flowers.
###Output
_____no_output_____ |
quests/serverlessml/07_caip/solution/export_data.ipynb | ###Markdown
Exporting data from BigQuery to Google Cloud StorageIn this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT + '-ml' # DEFAULT BUCKET WILL BE PROJECT ID -ml
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
Create BigQuery dataset and GCS Bucket If you haven't already, create the BigQuery dataset and GCS Bucket we will need.
###Code
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo "\nHere are your current datasets:"
bq ls
fi
## Create new ML GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}-ml/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}-ml
echo -e "\nHere are your current buckets:"
gsutil ls
fi
###Output
BigQuery dataset already exists, let's not recreate it.
Bucket exists, let's not recreate it.
###Markdown
Create BigQuery tables Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files; a short sketch of the hash-based sampling idea follows below.
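The queries below hash pickup_datetime with FARM_FINGERPRINT and keep one bucket, so the train/validation split is deterministic and repeatable with no overlap. A rough Python analogy (a sketch only; hashlib.md5 is not FarmHash, so the buckets will not match BigQuery's):
```python
import hashlib

def hash_bucket(key, buckets=1000):
    # Deterministic bucket for a string key; the same key always lands in the same bucket
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % buckets

print(hash_bucket("2015-02-07 23:10:27 UTC"))  # stable across runs
```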
###Code
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 1000) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
###Output
_____no_output_____
###Markdown
Make the validation dataset be 1/10 the size of the training dataset.
###Code
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
###Output
_____no_output_____
###Markdown
Export the tables as CSV files Change the BUCKET variable below to match a bucket that you own.
###Code
%%bash
OUTDIR=gs://$BUCKET/quests/serverlessml/data
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
serverlessml.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
serverlessml.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/quests/serverlessml/data/taxi-train-000000000000.csv | head -2
###Output
52,2015-02-07 23:10:27 UTC,-73.781852722167969,40.644840240478516,-73.967453002929688,40.771881103515625,2,unused
57.33,2015-02-15 12:22:12 UTC,-73.98321533203125,40.738700866699219,-73.78955078125,40.642852783203125,2,unused
###Markdown
Exporting data from BigQuery to Google Cloud StorageIn this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT + '-ml' # DEFAULT BUCKET WILL BE PROJECT ID -ml
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
Create BigQuery dataset and GCS Bucket If you haven't already, create the BigQuery dataset and GCS Bucket we will need.
###Code
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo "\nHere are your current datasets:"
bq ls
fi
## Create new ML GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}-ml/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}-ml
echo -e "\nHere are your current buckets:"
gsutil ls
fi
###Output
BigQuery dataset already exists, let's not recreate it.
Bucket exists, let's not recreate it.
###Markdown
Create BigQuery tables Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files.
###Code
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
###Output
_____no_output_____
###Markdown
Make the validation dataset be 1/10 the size of the training dataset.
###Code
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
###Output
_____no_output_____
###Markdown
Export the tables as CSV files Change the BUCKET variable below to match a bucket that you own.
###Code
%%bash
OUTDIR=gs://$BUCKET/quests/serverlessml/data
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
serverlessml.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
serverlessml.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/quests/serverlessml/data/taxi-train-000000000000.csv | head -2
###Output
52,2015-02-07 23:10:27 UTC,-73.781852722167969,40.644840240478516,-73.967453002929688,40.771881103515625,2,unused
57.33,2015-02-15 12:22:12 UTC,-73.98321533203125,40.738700866699219,-73.78955078125,40.642852783203125,2,unused
###Markdown
Exporting data from BigQuery to Google Cloud StorageIn this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install tensorflow==2.1 --user
###Output
_____no_output_____
###Markdown
Please ignore any compatibility warnings and errors. Make sure to restart your kernel to ensure this change has taken place.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT + '-ml' # DEFAULT BUCKET WILL BE PROJECT ID -ml
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
Create BigQuery dataset and GCS Bucket If you haven't already, create the BigQuery dataset and GCS Bucket we will need.
###Code
%%bash
## Create a BigQuery dataset for serverlessml if it doesn't exist
datasetexists=$(bq ls -d | grep -w serverlessml)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: serverlessml"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:serverlessml
echo "\nHere are your current datasets:"
bq ls
fi
## Create new ML GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}-ml/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}-ml
echo -e "\nHere are your current buckets:"
gsutil ls
fi
###Output
BigQuery dataset already exists, let's not recreate it.
Bucket exists, let's not recreate it.
###Markdown
Create BigQuery tables Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files.
###Code
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
###Output
_____no_output_____
###Markdown
Make the validation dataset be 1/10 the size of the training dataset.
###Code
%%bigquery
CREATE OR REPLACE TABLE serverlessml.feateng_valid_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
###Output
_____no_output_____
###Markdown
Export the tables as CSV files Change the BUCKET variable below to match a bucket that you own.
###Code
%%bash
OUTDIR=gs://$BUCKET/quests/serverlessml/data
echo "Deleting current contents of $OUTDIR"
gsutil -m -q rm -rf $OUTDIR
echo "Extracting training data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
serverlessml.feateng_training_data \
$OUTDIR/taxi-train-*.csv
echo "Extracting validation data to $OUTDIR"
bq --location=US extract \
--destination_format CSV \
--field_delimiter "," --noprint_header \
serverlessml.feateng_valid_data \
$OUTDIR/taxi-valid-*.csv
gsutil ls -l $OUTDIR
!gsutil cat gs://$BUCKET/quests/serverlessml/data/taxi-train-000000000000.csv | head -2
###Output
52,2015-02-07 23:10:27 UTC,-73.781852722167969,40.644840240478516,-73.967453002929688,40.771881103515625,2,unused
57.33,2015-02-15 12:22:12 UTC,-73.98321533203125,40.738700866699219,-73.78955078125,40.642852783203125,2,unused
|
talktorials/1_ChEMBL/T1_ChEMBL.ipynb | ###Markdown
Talktorial 1 Compound data acquisition (ChEMBL) Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin Paula Junge and Svetlana Leng Aim of this talktorial We learn how to extract data from ChEMBL: * Find ligands which were tested on a certain target * Filter by available bioactivity data * Calculate pIC50 values * Merge dataframes and draw extracted molecules Learning goals Theory * ChEMBL database * ChEMBL web services * ChEMBL webresource client * Compound activity measures * IC50 * pIC50 Practical Goal: Get list of compounds with bioactivity data for a given target * Connect to ChEMBL database * Get target data (EGFR kinase) * Bioactivity data * Download and filter bioactivities * Clean and convert * Compound data * Get list of compounds * Prepare output data * Output * Draw molecules with highest pIC50 * Write output file References * ChEMBL bioactivity database (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/) * ChEMBL web services: Nucleic Acids Res. (2015), 43, 612-620 (https://academic.oup.com/nar/article/43/W1/W612/2467881) * ChEMBL webresource client GitHub (https://github.com/chembl/chembl_webresource_client) * myChEMBL webservices version 2.x (https://github.com/chembl/mychembl/blob/master/ipython_notebooks/09_myChEMBL_web_services.ipynb) * ChEMBL web-interface (https://www.ebi.ac.uk/chembl/) * EBI-RDF platform (https://www.ncbi.nlm.nih.gov/pubmed/24413672) * IC50 and pIC50 (https://en.wikipedia.org/wiki/IC50) * UniProt website (https://www.uniprot.org/) _____________________________________________________________________________________________________________________ Theory ChEMBL database * Open large-scale bioactivity database * **Current data content (as of 10.2018):** * \>1.8 million distinct compound structures * \>15 million activity values from 1 million assays * Assays are mapped to ~12 000 targets * **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ... * ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL web services](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/#B5) ChEMBL web services * RESTful web service * ChEMBL web service version 2.x resource schema: [](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/) *Figure 1:* "ChEMBL web service schema diagram. The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`." Figure and description taken from: [Nucleic Acids Res. (2015), 43, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881). ChEMBL webresource client * Python client library for accessing ChEMBL data * Handles interaction with the HTTPS protocol * Lazy evaluation of results -> reduced number of network requests Compound activity measures IC50 * [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50) * Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half [](https://commons.wikimedia.org/wiki/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png) *Figure 2:* Visual demonstration of how to derive an IC50 value: Arrange data with inhibition on vertical axis and log(concentration) on horizontal axis; then identify max and min inhibition; then the IC50 is the concentration at which the curve passes through the 50% inhibition level. pIC50 * To facilitate the comparison of IC50 values, we define pIC50 values on a logarithmic scale, such that $ pIC_{50} = -\log_{10}(IC_{50}) $ where $ IC_{50} $ is specified in units of M. * Higher pIC50 values indicate exponentially greater potency of the drug * pIC50 is given in terms of molar concentration (mol/L or M) * IC50 should be specified in M to convert to pIC50 * For nM: $ pIC_{50} = -\log_{10}(IC_{50} \cdot 10^{-9}) = 9 - \log_{10}(IC_{50}) $ (a short Python conversion sketch follows after this section). Besides IC50 and pIC50, other bioactivity measures are used, such as the equilibrium constant [Ki](https://en.wikipedia.org/wiki/Equilibrium_constant) and the half maximal effective concentration [EC50](https://en.wikipedia.org/wiki/EC50). Practical In the following, we want to download all molecules that have been tested against our target of interest, the EGFR kinase. Connect to ChEMBL database First, the ChEMBL webresource client as well as other python libraries are imported.
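As a quick worked example of the nM conversion above (a minimal sketch, not part of the original talktorial):
```python
import math

def pic50_from_ic50_nM(ic50_nM):
    # pIC50 = -log10(IC50 in M); with IC50 given in nM this reduces to 9 - log10(IC50)
    return 9 - math.log10(ic50_nM)

print(pic50_from_ic50_nM(100))  # an IC50 of 100 nM corresponds to a pIC50 of 7.0
```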
###Code
from chembl_webresource_client.new_client import new_client
import pandas as pd
import math
from rdkit.Chem import PandasTools
###Output
_____no_output_____
###Markdown
Create resource objects for API access.
###Code
targets = new_client.target
compounds = new_client.molecule
bioactivities = new_client.activity
###Output
_____no_output_____
###Markdown
Target data* Get the UniProt-ID (http://www.uniprot.org/uniprot/P00533) of the target of interest (EGFR kinase) from the UniProt website (https://www.uniprot.org/)* Use the UniProt-ID to get target information* Select a different UniProt-ID if you are interested in another target
###Code
uniprot_id = 'P00533'
# Get target information from ChEMBL but restrict to specified values only
target_P00533 = targets.get(target_components__accession=uniprot_id) \
.only('target_chembl_id', 'organism', 'pref_name', 'target_type')
print(type(target_P00533))
pd.DataFrame.from_records(target_P00533)
###Output
<class 'chembl_webresource_client.query_set.QuerySet'>
###Markdown
After checking the entries, we select the first entry as our target of interest: `CHEMBL203`. It is a single protein and represents the human epidermal growth factor receptor (EGFR, also named erbB1).
###Code
target = target_P00533[0]
target
###Output
_____no_output_____
###Markdown
Save selected ChEMBL-ID.
###Code
chembl_id = target['target_chembl_id']
chembl_id
###Output
_____no_output_____
###Markdown
Bioactivity data Now, we want to query bioactivity data for the target of interest. Download and filter bioactivities for the target In this step, we download and filter the bioactivity data and only consider* human proteins* bioactivity type IC50* exact measurements (relation '=') * binding data (assay type 'B')
###Code
bioact = bioactivities.filter(target_chembl_id = chembl_id) \
.filter(type = 'IC50') \
.filter(relation = '=') \
.filter(assay_type = 'B') \
.only('activity_id','assay_chembl_id', 'assay_description', 'assay_type', \
'molecule_chembl_id', 'type', 'units', 'relation', 'value', \
'target_chembl_id', 'target_organism')
len(bioact), len(bioact[0]), type(bioact), type(bioact[0])
###Output
_____no_output_____
###Markdown
If you experience difficulties querying the ChEMBL database, we provide here a file containing the results for the query in the previous cell (11 April 2019). We do this using the Python package pickle, which serializes Python objects so they can be saved to a file and loaded into a program again later on. (Learn more about object serialization on [DataCamp](https://www.datacamp.com/community/tutorials/pickle-python-tutorial).) You can load the "pickled" compounds by uncommenting and running the next cell.
###Code
#import pickle
#bioact = pickle.load(open("../data/T1/EGFR_compounds_from_chembl_query_20190411.p", "rb"))
###Output
_____no_output_____
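###Markdown
As a minimal sketch of the general pickle workflow (the file name and object here are made up for illustration, not part of the course data):
###Code
import pickle

example = {'target': 'EGFR', 'chembl_id': 'CHEMBL203'}
# Serialize the object to disk ('wb' = write binary)
with open('example.p', 'wb') as f:
    pickle.dump(example, f)
# Deserialize it again later ('rb' = read binary)
with open('example.p', 'rb') as f:
    restored = pickle.load(f)
restored
###Output
_____no_output_____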
###Markdown
Clean and convert bioactivity data The data is stored as a list of dictionaries.
###Code
bioact[0]
###Output
_____no_output_____
###Markdown
Convert to pandas dataframe (this might take some minutes).
###Code
bioact_df = pd.DataFrame.from_records(bioact)
bioact_df.head(10)
bioact_df.shape
###Output
_____no_output_____
###Markdown
Delete entries with missing values.
###Code
bioact_df = bioact_df.dropna(axis=0, how = 'any')
bioact_df.shape
###Output
_____no_output_____
###Markdown
Delete duplicates: Sometimes the same molecule (`molecule_chembl_id`) has been tested more than once; in this case, we only keep the first entry.
###Code
bioact_df = bioact_df.drop_duplicates('molecule_chembl_id', keep = 'first')
bioact_df.shape
###Output
_____no_output_____
###Markdown
We would like to only keep bioactivity data measured in molar units. The following print statements will help us to see what units are contained and to control what is kept after dropping some rows.
###Code
print(bioact_df.units.unique())
bioact_df = bioact_df.drop(bioact_df.index[~bioact_df.units.str.contains('M')])
print(bioact_df.units.unique())
bioact_df.shape
###Output
['uM' 'nM' 'M' "10'1 ug/ml" 'ug ml-1' "10'-1microM" "10'1 uM"
"10'-1 ug/ml" "10'-2 ug/ml" "10'2 uM" '/uM' "10'-6g/ml" 'mM' 'umol/L'
'nmol/L']
['uM' 'nM' 'M' "10'-1microM" "10'1 uM" "10'2 uM" '/uM' 'mM']
###Markdown
Since we deleted some rows but want to iterate over the index later, we reset the index to be continuous.
###Code
bioact_df = bioact_df.reset_index(drop=True)
bioact_df.head()
###Output
_____no_output_____
###Markdown
To allow further comparison of the IC50 values, we convert all units to nM. First, we write a helper function, which can be applied to the whole dataframe in the next step.
###Code
def convert_to_NM(unit, bioactivity):
    """Convert a bioactivity value from the given unit to nM."""
    if unit == "nM":
        return bioactivity
    if unit == "pM":
        value = float(bioactivity)/1000
    elif unit == "10'-11M":
        value = float(bioactivity)/100
    elif unit == "10'-10M":
        value = float(bioactivity)/10
    elif unit == "10'-8M":
        value = float(bioactivity)*10
    elif unit == "10'-1microM" or unit == "10'-7M":
        value = float(bioactivity)*100
    elif unit == "uM" or unit == "/uM" or unit == "10'-6M":
        value = float(bioactivity)*1000
    elif unit == "10'1 uM":
        value = float(bioactivity)*10000
    elif unit == "10'2 uM":
        value = float(bioactivity)*100000
    elif unit == "mM":
        value = float(bioactivity)*1000000
    elif unit == "M":
        value = float(bioactivity)*1000000000
    else:
        # The original version fell through to an undefined `value` here;
        # report the unrecognized unit and return the input unchanged instead.
        print('unit not recognized...', unit)
        return bioactivity
    return value
bioactivity_nM = []
for i, row in bioact_df.iterrows():
bioact_nM = convert_to_NM(row['units'], row['value'])
bioactivity_nM.append(bioact_nM)
bioact_df['value'] = bioactivity_nM
bioact_df['units'] = 'nM'
bioact_df.head()
###Output
_____no_output_____
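###Markdown
A quick spot-check of the helper with illustrative values (the numbers are made up, not taken from the dataset):
###Code
print(convert_to_NM('uM', 1.5))   # 1.5 uM  -> 1500.0 nM
print(convert_to_NM('pM', 2000))  # 2000 pM -> 2.0 nM
###Output
_____no_output_____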
###Markdown
Compound data We have a dataframe containing all molecules tested (with the respective measure) against EGFR. Now, we want to retrieve the molecules that are stored behind the respective ChEMBL IDs. Get list of compounds Let's have a look at the compounds from ChEMBL for which we have bioactivity data. First, we retrieve the ChEMBL IDs and structures of the compounds with the desired bioactivity data.
###Code
cmpd_id_list = list(bioact_df['molecule_chembl_id'])
compound_list = compounds.filter(molecule_chembl_id__in = cmpd_id_list) \
.only('molecule_chembl_id','molecule_structures')
###Output
_____no_output_____
###Markdown
Then, we convert the list to a pandas dataframe and delete duplicates (again, the pandas from_records function might take some time).
###Code
compound_df = pd.DataFrame.from_records(compound_list)
compound_df = compound_df.drop_duplicates('molecule_chembl_id', keep = 'first')
print(compound_df.shape)
print(bioact_df.shape)
compound_df.head()
###Output
(4780, 2)
(4780, 11)
###Markdown
So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
###Code
for i, cmpd in compound_df.iterrows():
    if cmpd['molecule_structures'] is not None:
        # Use .loc[row, column] so the assignment is not a chained-indexing no-op
        compound_df.loc[i, 'molecule_structures'] = cmpd['molecule_structures']['canonical_smiles']
print(compound_df.shape)
###Output
(4780, 2)
###Markdown
Prepare output data Merge values of interest in one dataframe on ChEMBL-IDs:* ChEMBL-IDs* SMILES* units* IC50
###Code
output_df = pd.merge(bioact_df[['molecule_chembl_id','units','value']], compound_df, on='molecule_chembl_id')
print(output_df.shape)
output_df.head()
###Output
(4780, 4)
###Markdown
For distinct column names, we rename IC50 and SMILES columns.
###Code
output_df = output_df.rename(columns= {'molecule_structures':'smiles', 'value':'IC50'})
output_df.shape
###Output
_____no_output_____
###Markdown
If we do not have a SMILES representation of a compound, we cannot use it further in the following talktorials. Therefore, we delete compounds without a SMILES entry.
###Code
output_df = output_df[~output_df['smiles'].isnull()]
print(output_df.shape)
output_df.head()
###Output
(4771, 4)
###Markdown
In the next cell, you can see that the raw IC50 values span several orders of magnitude and are therefore difficult to compare at a glance. Hence, we prefer to convert the IC50 values to pIC50.
###Code
output_df = output_df.reset_index(drop=True)
ic50 = output_df.IC50.astype(float)
print(len(ic50))
print(ic50.head(10))
# Convert IC50 to pIC50 and add pIC50 column:
pIC50 = pd.Series(dtype=float)
i = 0
while i < len(output_df.IC50):
value = 9 - math.log10(ic50[i]) # pIC50=-log10(IC50 mol/l) --> for nM: -log10(IC50*10**-9)= 9-log10(IC50)
if value < 0:
print("Negative pIC50 value at index"+str(i))
pIC50.at[i] = value
i += 1
output_df['pIC50'] = pIC50
output_df.head()
###Output
_____no_output_____
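###Markdown
As a cross-check of the conversion formula, the same pIC50 calculation can also be done in a vectorized way (a sketch, not part of the original workflow; the numpy import is an addition here):
###Code
import numpy as np

# 9 - log10(IC50 in nM), applied to the whole column at once;
# e.g. an IC50 of 100 nM corresponds to a pIC50 of 9 - log10(100) = 7.0
pIC50_vectorized = 9 - np.log10(output_df.IC50.astype(float))
pIC50_vectorized.head()
###Output
_____no_output_____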
###Markdown
Collected bioactivity data for EGFR Let's have a look at our collected data set. Draw molecules In the next steps, we add a molecule column to our dataframe and look at the structures of the molecules with the highest pIC50 values.
###Code
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol='smiles')
###Output
_____no_output_____
###Markdown
Sort molecules by pIC50.
###Code
output_df.sort_values(by="pIC50", ascending=False, inplace=True)
output_df.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Show the most active molecules = molecules with the highest pIC50 values.
###Code
output_df.drop("smiles", axis=1).head()
###Output
_____no_output_____
###Markdown
Write output file To use the data in the following talktorials, we save it as a csv file. Note that it is advisable to drop the molecule column (it only contains an image representation of the molecules) when saving the data.
###Code
output_df.drop("ROMol", axis=1).to_csv("../data/T1/EGFR_compounds.csv")
###Output
_____no_output_____
iPython Notebooks/Introduction to Pandas Part 1.ipynb | ###Markdown
Data Science Boot Camp Introduction to Pandas Part 1 * __Pandas__ is a Python package providing fast, flexible, and expressive data structures designed to work with both *relational* and *labeled* data.* It is a fundamental high-level building block for doing practical, real-world data analysis in Python.* Python has always been great for prepping and munging data, but it's never been great for analysis - you'd usually end up using R or loading it into a database and using SQL. Pandas makes Python great for analysis. * Pandas is well suited for: * Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet * Ordered and unordered (not necessarily fixed-frequency) time series data. * Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels * Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure * Key features of Pandas: * Easy handling of __missing data__ * __Size mutability__: columns can be inserted and deleted from DataFrame and higher dimensional objects. * Automatic and explicit __data alignment__: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically. * __Fast__ and __efficient__ DataFrame object with default and customized indexing. * __Reshaping__ and __pivoting__ of data sets. * Key features of Pandas (Continued): * Label-based __slicing__, __indexing__, __fancy indexing__ and __subsetting__ of large data sets. * __Group by__ data for aggregation and transformations. * High performance __merging__ and __joining__ of data. * __IO Tools__ for loading data into in-memory data objects from different file formats. * __Time Series__ functionality. * First, we import the pandas and numpy libraries under the aliases pd and np.* Then we check our pandas version.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
print(pd.__version__)
###Output
0.22.0
###Markdown
* Let's set some options for `Pandas`
###Code
pd.set_option('display.notebook_repr_html', False)
pd.set_option('max_columns', 10)
pd.set_option('max_rows', 10)
###Output
_____no_output_____
###Markdown
Pandas Objects * At the very basic level, Pandas objects can be thought of as enhanced versions of NumPy structured arrays in which the rows and columns are identified with labels rather than simple integer indices.* There are three fundamental Pandas data structures: the Series, DataFrame, and Index. Series * A __Series__ is a single vector of data (like a NumPy array) with an *index* that labels each element in the vector.* It can be created from a list or array as follows:
###Code
counts = pd.Series([15029231, 7529491, 7499740, 5445026, 2702492, 2742534, 4279677, 2133548, 2146129])
counts
###Output
_____no_output_____
###Markdown
* If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the `Series`, while the index is a pandas `Index` object.
###Code
counts.values
counts.index
###Output
_____no_output_____
###Markdown
* We can assign meaningful labels to the index, if they are available:
###Code
population = pd.Series([15029231, 7529491, 7499740, 5445026, 2702492, 2742534, 4279677, 2133548, 2146129],
index=['Istanbul Total', 'Istanbul Males', 'Istanbul Females', 'Ankara Total', 'Ankara Males', 'Ankara Females', 'Izmir Total', 'Izmir Males', 'Izmir Females'])
population
###Output
_____no_output_____
###Markdown
* These labels can be used to refer to the values in the `Series`.
###Code
population['Istanbul Total']
mask = [city.endswith('Females') for city in population.index]
mask
population[mask]
###Output
_____no_output_____
###Markdown
* As you noticed, we can use masking with a `Series`.* Also, we can still use positional indexing even if we assign meaningful labels to the index, if we wish.
###Code
population[0]
###Output
_____no_output_____
###Markdown
* We can give both the array of values and the index meaningful labels themselves:
###Code
population.name = 'population'
population.index.name = 'city'
population
###Output
_____no_output_____
###Markdown
* Also, NumPy's math functions and other operations can be applied to Series without losing the data structure.
###Code
np.ceil(population / 1000000) * 1000000
###Output
_____no_output_____
###Markdown
* We can also filter according to the values in the `Series`, as in NumPy:
###Code
population[population>3000000]
###Output
_____no_output_____
###Markdown
* A `Series` can be thought of as an ordered key-value store. In fact, we can create one from a `dict`:
###Code
populationDict = {'Istanbul Total': 15029231, 'Ankara Total': 5445026, 'Izmir Total': 4279677}
pd.Series(populationDict)
###Output
_____no_output_____
###Markdown
* Notice that the `Series` is created in key-sorted order.* If we pass a custom index to `Series`, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the `NaN` (not a number) type for missing values.
###Code
population2 = pd.Series(populationDict, index=['Istanbul Total','Ankara Total','Izmir Total','Bursa Total', 'Antalya Total'])
population2
population2.isnull()
###Output
_____no_output_____
###Markdown
* Critically, the labels are used to **align data** when used in operations with other Series objects:
###Code
population + population2
###Output
_____no_output_____
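###Markdown
If we would rather treat the missing labels as zero instead of propagating `NaN`, the `Series` arithmetic methods accept a `fill_value` argument (a short sketch of the same addition as above):
###Code
# Missing labels contribute 0 instead of producing NaN
population.add(population2, fill_value=0)
###Output
_____no_output_____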
###Markdown
* Contrast this with NumPy arrays, where arrays of the same length will combine values element-wise; adding Series combines values with the same label in the resulting series. Notice also that the missing values were propagated by addition. DataFrame * A `DataFrame` represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type (numeric, string, boolean, etc.).* A `DataFrame` has both a row and a column index; it can be thought of as a dict of Series (all sharing the same index).
###Code
areaDict = {'Istanbul': 5461, 'Ankara': 25632, 'Izmir': 11891,
'Bursa': 10813, 'Antalya': 20177}
area = pd.Series(areaDict)
area
populationDict = {'Istanbul': 15029231, 'Ankara': 5445026, 'Izmir': 4279677, 'Bursa': 2936803, 'Antalya': 2364396}
population3 = pd.Series(populationDict)
population3
###Output
_____no_output_____
###Markdown
* Now that we have two Series, population by city and area by city, we can use a dictionary to construct a single two-dimensional object containing this information:
###Code
cities = pd.DataFrame({'population': population3, 'area': area})
cities
###Output
_____no_output_____
###Markdown
* Or we can create our cities `DataFrame` with lists and indexes.
###Code
cities = pd.DataFrame({
'population':[15029231, 5445026, 4279677, 2936803, 2364396],
'area':[5461, 25632, 11891, 10813, 20177],
'city':['Istanbul', 'Ankara', 'Izmir', 'Bursa', 'Antalya']
})
cities
###Output
_____no_output_____
###Markdown
Notice the `DataFrame` is sorted by column name. We can change the order by indexing them in the order we desire:
###Code
cities[['city','area', 'population']]
###Output
_____no_output_____
###Markdown
* A `DataFrame` has a second index, representing the columns:
###Code
cities.columns
###Output
_____no_output_____
###Markdown
* If we wish to access columns, we can do so either by dictionary-like indexing or by attribute:
###Code
cities['area']
cities.area
type(cities.area)
type(cities[['area']])
###Output
_____no_output_____
###Markdown
* Notice this is different than with `Series`, where dictionary-like indexing retrieves a particular element (row). If we want access to a row in a `DataFrame`, we index into its `iloc` attribute.
###Code
cities.iloc[2]
cities.iloc[0:2]
###Output
_____no_output_____
###Markdown
Alternatively, we can create a `DataFrame` with a dict of dicts:
###Code
cities = pd.DataFrame({
0: {'city': 'Istanbul', 'area': 5461, 'population': 15029231},
1: {'city': 'Ankara', 'area': 25632, 'population': 5445026},
2: {'city': 'Izmir', 'area': 11891, 'population': 4279677},
3: {'city': 'Bursa', 'area': 10813, 'population': 2936803},
4: {'city': 'Antalya', 'area': 20177, 'population': 2364396},
})
cities
###Output
_____no_output_____
###Markdown
* We probably want this transposed:
###Code
cities = cities.T
cities
###Output
_____no_output_____
###Markdown
* It's important to note that the Series returned when a DataFrame is indexed is merely a **view** on the DataFrame, and not a copy of the data itself. * So you must be cautious when manipulating this data, just as in NumPy.
###Code
areas = cities.area
areas
areas[3] = 0
areas
cities
###Output
_____no_output_____
###Markdown
* It's useful behavior for large data sets, but to prevent it you can use the `copy` method.
###Code
areas = cities.area.copy()
areas[3] = 10813
areas
cities
###Output
_____no_output_____
###Markdown
* We can create or modify columns by assignment:
###Code
cities.area[3] = 10813
cities
cities['year'] = 2017
cities
###Output
_____no_output_____
###Markdown
* But note that we cannot use the attribute indexing method to add a new column:
###Code
cities.projection2020 = 20000000
cities
###Output
_____no_output_____
###Markdown
* It creates another variable.
###Code
cities.projection2020
###Output
_____no_output_____
###Markdown
* Specifying a `Series` as a new column causes its values to be added according to the `DataFrame`'s index:
###Code
populationIn2000 = pd.Series([11076840, 3889199, 3431204, 2150571, 1430539])
populationIn2000
cities['population_2000'] = populationIn2000
cities
###Output
_____no_output_____
###Markdown
* Other Python data structures (ones without an index) need to be the same length as the `DataFrame`:
###Code
populationIn2007 = [12573836, 4466756, 3739353, 2439876]
cities['population_2007'] = populationIn2007
###Output
_____no_output_____
###Markdown
* We can use `del` to remove columns, in the same way `dict` entries can be removed:
###Code
cities
del cities['population_2000']
cities
###Output
_____no_output_____
###Markdown
* We can extract the underlying data as a simple `ndarray` by accessing the `values` attribute:
###Code
cities.values
###Output
_____no_output_____
###Markdown
* Notice that because of the mix of string and integer (and possibly `NaN`) values, the dtype of the array is `object`. * The dtype will automatically be chosen to be as general as needed to accommodate all the columns.
###Code
df = pd.DataFrame({'integers': [1,2,3], 'floatNumbers':[0.5, -1.25, 2.5]})
df
print(df.values.dtype)
df.values
###Output
float64
###Markdown
* Pandas uses a custom data structure to represent the indices of Series and DataFrames.
###Code
cities.index
###Output
_____no_output_____
###Markdown
* Index objects are immutable:
###Code
cities.index[0] = 15
###Output
_____no_output_____
###Markdown
* This is so that Index objects can be shared between data structures without fear that they will be changed.* That means you can move or copy your meaningful labels to other `DataFrames`:
###Code
cities
cities.index = population2.index
cities
###Output
_____no_output_____
###Markdown
Importing data * A key, but often underappreciated, step in data analysis is importing the data that we wish to analyze.* Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure.* Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a `DataFrame` object. * Let's start with some more population data, stored in csv format.
###Code
!cat data/population.csv
###Output
Provinces;2000;2001;2002;2003;2004;2005;2006;2007;2008;2009;2010;2011;2012;2013;2014;2015;2016;2017
Total;64729501;65603160;66401851;67187251;68010215;68860539;69729967;70586256;71517100;72561312;73722988;74724269;75627384;76667864;77695904;78741053;79814871;80810525
Adana;1879695;1899324;1916637;1933428;1951142;1969512;1988277;2006650;2026319;2062226;2085225;2108805;2125635;2149260;2165595;2183167;2201670;2216475
Adıyaman;568432;571180;573149;574886;576808;578852;580926;582762;585067;588475;590935;593931;595261;597184;597835;602774;610484;615076
Afyonkarahisar;696292;698029;698773;699193;699794;700502;701204;701572;697365;701326;697559;698626;703948;707123;706371;709015;714523;715693
Ağrı;519190;521514;523123;524514;526070;527732;529417;530879;532180;537665;542022;555479;552404;551177;549435;547210;542255;536285
Amasya;333927;333768;333110;332271;331491;330739;329956;328674;323675;324268;334786;323079;322283;321977;321913;322167;326351;329888
Ankara;3889199;3971642;4050309;4128889;4210596;4294678;4380736;4466756;4548939;4650802;4771716;4890893;4965542;5045083;5150072;5270575;5346518;5445026
Antalya;1430539;1480282;1529110;1578367;1629338;1681656;1735239;1789295;1859275;1919729;1978333;2043482;2092537;2158265;2222562;2288456;2328555;2364396
Artvin;167909;168184;168215;168164;168153;168164;168170;168092;166584;165580;164759;166394;167082;169334;169674;168370;168068;166143
Aydın;870460;881911;892345;902594;913340;924446;935800;946971;965500;979155;989862;999163;1006541;1020957;1041979;1053506;1068260;1080839
Balıkesir;1069260;1077362;1084072;1090411;1097187;1104261;1111475;1118313;1130276;1140085;1152323;1154314;1160731;1162761;1189057;1186688;1196176;1204824
Bilecik;197625;198736;199580;200346;201182;202063;202960;203777;193169;202061;225381;203849;204116;208888;209925;212361;218297;221693
Bingöl;240337;242183;243717;245168;246718;248336;249986;251552;256091;255745;255170;262263;262507;265514;266019;267184;269560;273354
Bitlis;318886;320555;321791;322898;324114;325401;326709;327886;326897;328489;328767;336624;337253;337156;338023;340449;341225;341474
Bolu;255576;257926;259953;261902;263967;266114;268305;270417;268882;271545;271208;276506;281080;283496;284789;291095;299896;303184
Burdur;246060;247106;247811;248412;249090;249816;250552;251181;247437;251550;258868;250527;254341;257267;256898;258339;261401;264779
Bursa;2150571;2192169;2231582;2270852;2311735;2353834;2396916;2439876;2507963;2550645;2605495;2652126;2688171;2740970;2787539;2842547;2901396;2936803
Çanakkale;449418;453632;457280;460792;464511;468375;472320;476128;474791;477735;490397;486445;493691;502328;511790;513341;519793;530417
Çankırı;169044;169955;170637;171252;171924;172635;173358;174012;176093;185019;179067;177211;184406;190909;183550;180945;183880;186074
Çorum;567609;566094;563698;560968;558300;555649;552911;549828;545444;540704;535405;534578;529975;532080;527220;525180;527863;528422
Denizli;845493;854958;863396;871614;880267;889229;898387;907325;917836;926362;931823;942278;950557;963464;978700;993442;1005687;1018735
Diyarbakır;1317750;1338378;1357550;1376518;1396333;1416775;1437684;1460714;1492828;1515011;1528958;1570943;1592167;1607437;1635048;1654196;1673119;1699901
Edirne;392134;393292;393896;394320;394852;395449;396047;396462;394644;395463;390428;399316;399708;398582;400280;402537;401701;406855
Elazığ;517551;521467;524710;527774;531048;534467;537954;541258;547562;550667;552646;558556;562703;568239;568753;574304;578789;583671
Erzincan;206815;208015;208937;209779;210694;211658;212639;213538;210645;213288;224949;215277;217886;219996;223633;222918;226032;231511
Erzurum;801287;800311;798119;795482;792968;790505;787952;784941;774967;774207;769085;780847;778195;766729;763320;762321;762021;760476
Eskişehir;651672;662354;672328;682212;692529;703168;714051;724849;741739;755427;764584;781247;789750;799724;812320;826716;844842;860620
Gaziantep;1292817;1330205;1366581;1403165;1441079;1480026;1519905;1560023;1612223;1653670;1700763;1753596;1799558;1844438;1889466;1931836;1974244;2005515
Giresun;410946;412428;413335;414062;414909;415830;416760;417505;421766;421860;419256;419498;419555;425007;429984;426686;444467;437393
Gümüşhane;116008;118147;120166;122175;124267;126423;128628;130825;131367;130976;129618;132374;135216;141412;146353;151449;172034;170173
Hakkari;223264;226676;229839;232966;236234;239606;243055;246469;258590;256761;251302;272165;279982;273041;276287;278775;267813;275761
Hatay;1280457;1296401;1310828;1324961;1339798;1355144;1370831;1386224;1413287;1448418;1480571;1474223;1483674;1503066;1519836;1533507;1555165;1575226
Isparta;418507;419307;419505;419502;419601;419758;419905;419845;407463;420796;448298;411245;416663;417774;418780;421766;427324;433830
Mersin;1488755;1505196;1519824;1534060;1549054;1564588;1580460;1595938;1602908;1640888;1647899;1667939;1682848;1705774;1727255;1745221;1773852;1793931
İstanbul;11076840;11292009;11495948;11699172;11910733;12128577;12351506;12573836;12697164;12915158;13255685;13624240;13854740;14160467;14377018;14657434;14804116;15029231
İzmir;3431204;3477209;3519233;3560544;3603838;3648575;3694316;3739353;3795978;3868308;3948848;3965232;4005459;4061074;4113072;4168415;4223545;4279677
Kars;326292;324908;323005;320898;318812;316723;314570;312205;312128;306536;301766;305755;304821;300874;296466;292660;289786;287654
Kastamonu;351582;353271;354479;355541;356719;357972;359243;360366;360424;359823;361222;359759;359808;368093;368907;372633;376945;372373
Kayseri;1038671;1056995;1074221;1091336;1109179;1127566;1146378;1165088;1184386;1205872;1234651;1255349;1274968;1295355;1322376;1341056;1358980;1376722
Kırklareli;323427;325213;326561;327782;329116;330523;331955;333256;336942;333179;332791;340199;341218;340559;343723;346973;351684;356050
Kırşehir;221473;222028;222267;222403;222596;222824;223050;223170;222735;223102;221876;221015;221209;223498;222707;225562;229975;234529
Kocaeli;1192053;1226460;1259932;1293594;1328481;1364317;1401013;1437926;1490358;1522408;1560138;1601720;1634691;1676202;1722795;1780055;1830772;1883270
Konya;1835987;1855057;1871862;1888154;1905345;1923174;1941386;1959082;1969868;1992675;2013845;2038555;2052281;2079225;2108808;2130544;2161303;2180149
Kütahya;592921;592607;591405;589883;588464;587092;585666;583910;565884;571804;590496;564264;573421;572059;571554;571463;573642;572256
Malatya;685533;691399;696387;701155;706222;711496;716879;722065;733789;736884;740643;757930;762366;762538;769544;772904;781305;786676
Manisa;1276590;1284241;1290180;1295630;1301542;1307760;1314090;1319920;1316750;1331957;1379484;1340074;1346162;1359463;1367905;1380366;1396945;1413041
Kahramanmaraş;937074;947317;956417;965268;974592;984254;994126;1004414;1029298;1037491;1044816;1054210;1063174;1075706;1089038;1096610;1112634;1127623
Mardin;709316;715211;720195;724946;730002;735267;740641;745778;750697;737852;744606;764033;773026;779738;788996;796591;796237;809719
Muğla;663606;678204;692171;706136;720650;735582;750865;766156;791424;802381;817503;838324;851145;866665;894509;908877;923773;938751
Muş;403236;404138;404462;404596;404832;405127;405416;405509;404309;404484;406886;414706;413260;412553;411216;408728;406501;404544
Nevşehir;275262;276309;276971;277514;278138;278814;279498;280058;281699;284025;282337;283247;285190;285460;286250;286767;290895;292365
Niğde;321330;323181;324600;325894;327302;328786;330295;331677;338447;339921;337931;337553;340270;343658;343898;346114;351468;352727
Ordu;705746;708079;709420;710444;711670;713018;714375;715409;719278;723507;719183;714390;741371;731452;724268;728949;750588;742341
Rize;307133;308800;310052;311181;312417;313722;315049;316252;319410;319569;319637;323012;324152;328205;329779;328979;331048;331041
Sakarya;750485;762848;774397;785845;797793;810112;822715;835222;851292;861570;872872;888556;902267;917373;932706;953181;976948;990214
Samsun;1191926;1198574;1203611;1208179;1213165;1218424;1223774;1228959;1233677;1250076;1252693;1251729;1251722;1261810;1269989;1279884;1295927;1312990
Siirt;270832;273982;276806;279562;282461;285462;288529;291528;299819;303622;300695;310468;310879;314153;318366;320351;322664;324394
Sinop;194318;195151;195715;196196;196739;197319;197908;198412;200791;201134;202740;203027;201311;204568;204526;204133;205478;207427
Sivas;651825;650946;649078;646845;644709;642614;640442;638464;631112;633347;642224;627056;623535;623824;623116;618617;621224;621301
Tekirdağ;577812;598658;619152;639837;661237;683199;705692;728396;770772;783310;798109;829873;852321;874475;906732;937910;972875;1005463
Tokat;641033;639371;636715;633682;630722;627781;624744;620722;617158;624439;617802;608299;613990;598708;597920;593990;602662;602086
Trabzon;720620;724340;727080;729529;732221;735072;737969;740569;748982;765127;763714;757353;757898;758237;766782;768417;779379;786326
Tunceli;82554;82871;83074;83241;83433;83640;83849;84022;86449;83061;76699;85062;86276;85428;86527;86076;82193;82498
Şanlıurfa;1257753;1294842;1330964;1367305;1404961;1443639;1483244;1523099;1574224;1613737;1663371;1716254;1762075;1801980;1845667;1892320;1940627;1985753
Uşak;320535;322814;324673;326417;328287;330243;332237;334115;334111;335860;338019;339731;342269;346508;349459;353048;358736;364971
Van;895836;908296;919727;930984;942771;954945;967394;979671;1004369;1022310;1035418;1022532;1051975;1070113;1085542;1096397;1100190;1106891
Yozgat;544446;538313;531220;523696;516096;508398;500487;492127;484206;487365;476096;465696;453211;444211;432560;419440;421041;418650
Zonguldak;630323;629346;627407;625114;622912;620744;618500;615890;619151;619812;619703;612406;606527;601567;598796;595907;597524;596892
Aksaray;351474;353939;355942;357819;359834;361941;364089;366109;370598;376907;377505;378823;379915;382806;384252;386514;396673;402404
Bayburt;75221;75517;75709;75868;76050;76246;76444;76609;75675;74710;74412;76724;75797;75620;80607;78550;90154;80417
Karaman;214461;216318;217902;219417;221026;222700;224409;226049;230145;231872;232633;234005;235424;237939;240362;242196;245610;246672
Kırıkkale;287427;286900;285933;284803;283711;282633;281518;280234;279325;280834;276647;274992;274727;274658;271092;270271;277984;278749
Batman;408820;418186;427172;436165;445508;455118;464954;472487;485616;497998;510200;524499;534205;547581;557593;566633;576899;585252
Şırnak;362700;370314;377574;384824;392364;400123;408065;416001;429287;430424;430109;457997;466982;475255;488966;490184;483788;503236
Bartın;175982;177060;177903;178678;179519;180401;181300;182131;185368;188449;187758;187291;188436;189139;189405;190708;192389;193577
Ardahan;122409;121305;119993;118590;117178;115750;114283;112721;112242;108169;105454;107455;106643;102782;100809;99265;98335;97096
Iğdır;174285;175550;176588;177563;178609;179701;180815;181866;184025;183486;184418;188857;190409;190424;192056;192435;192785;194775
Yalova;144923;150027;155041;160099;165333;170705;176207;181758;197412;202531;203741;206535;211799;220122;226514;233009;241665;251203
Karabük;205172;207241;209056;210812;212667;214591;216557;218463;216248;218564;227610;219728;225145;230251;231333;236978;242347;244453
Kilis;109698;111024;112219;113387;114615;115886;117185;118457;120991;122104;123135;124452;124320;128586;128781;130655;130825;136319
Osmaniye;411163;417418;423214;428943;434930;441108;447428;452880;464704;471804;479221;485357;492135;498981;506807;512873;522175;527724
Düzce;296712;300686;304316;307884;311623;315487;319438;323328;328611;335156;338188;342146;346493;351509;355549;360388;370371;377610
###Markdown
* This table can be read into a DataFrame using `read_csv`:
###Code
populationDF = pd.read_csv("data/population.csv")
populationDF
###Output
_____no_output_____
###Markdown
* Notice that `read_csv` automatically considered the first row in the file to be a header row.* We can override default behavior by customizing some of the arguments, like `header`, `names` or `index_col` (a short sketch follows below). * `read_csv` is just a convenience function for `read_table`, since csv is such a common format:
###Code
pd.set_option('max_columns', 5)
populationDF = pd.read_table("data/population_missing.csv", sep=';')
populationDF
###Output
_____no_output_____
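###Markdown
For instance, a sketch of overriding those defaults (the column labels here are invented for illustration):
###Code
# Skip the original header row and supply our own column names
colnames = ['province'] + [str(year) for year in range(2000, 2018)]
pd.read_csv("data/population.csv", sep=';', header=None, skiprows=1,
            names=colnames, index_col='province').head()
###Output
_____no_output_____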
###Markdown
* The `sep` argument can be customized as needed to accommodate arbitrary separators. * If we have sections of data that we do not wish to import (for example, empty rows), we can populate the `skiprows` argument:
###Code
populationDF = pd.read_csv("data/population_missing.csv", sep=';', skiprows=[1,2])
populationDF
###Output
_____no_output_____
###Markdown
* For a more useful index, we can specify the first column, which provides a unique index to the data.
###Code
populationDF = pd.read_csv("data/population.csv", sep=';', index_col='Provinces')
populationDF.index
###Output
_____no_output_____
###Markdown
Conversely, if we only want to import a small number of rows from, say, a very large data file, we can use `nrows`:
###Code
pd.read_csv("data/population.csv", sep=';', nrows=4)
###Output
_____no_output_____
###Markdown
* Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including `NA`, `NaN`, `NULL`.
###Code
pd.read_csv("data/population_missing.csv", sep=';').head(10)
###Output
_____no_output_____
###Markdown
Above, Pandas recognized `NaN` and an empty field as missing data.
###Code
pd.isnull(pd.read_csv("data/population_missing.csv", sep=';')).head(10)
###Output
_____no_output_____
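###Markdown
If a file marks missing data with a custom sentinel, it can be declared explicitly via `na_values` (the sentinel below is an assumed example, not one actually used in this file):
###Code
# Any cell containing '-99' would additionally be parsed as NaN
pd.read_csv("data/population_missing.csv", sep=';', na_values=['-99']).head()
###Output
_____no_output_____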
###Markdown
Microsoft Excel * Since so much financial and scientific data ends up in Excel spreadsheets, Pandas' ability to directly import Excel spreadsheets is valuable. * This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: `xlrd` and `openpyxl`.* Importing Excel data to Pandas is a two-step process. First, we create an `ExcelFile` object using the path of the file:
###Code
excel_file = pd.ExcelFile('data/population.xlsx')
excel_file
###Output
_____no_output_____
###Markdown
* Then, since modern spreadsheets consist of one or more "sheets", we parse the sheet with the data of interest:
###Code
excelDf = excel_file.parse("Sheet 1 ")
excelDf
###Output
_____no_output_____
###Markdown
* Also, there is a `read_excel` convenience function in Pandas that combines these steps into a single call:
###Code
excelDf2 = pd.read_excel('data/population.xlsx', sheet_name='Sheet 1 ')
excelDf2.head(10)
###Output
_____no_output_____
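###Markdown
Beyond CSV and Excel, Pandas can also ingest JSON and SQL sources directly, as mentioned next; a minimal sketch (the file, database, and table names are hypothetical, not part of the course data):
###Code
import sqlite3

# JSON: parsed straight into a DataFrame
#df_json = pd.read_json("data/example.json")
# SQL: any DB-API connection works; sqlite3 ships with Python
#conn = sqlite3.connect("data/example.db")
#df_sql = pd.read_sql("SELECT * FROM some_table", conn)
###Output
_____no_output_____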
###Markdown
* On the first day, we learned how to read and write `JSON` files; in the same way (sketched above), you can also import JSON files into `DataFrames`. * Also, you can connect to databases and import your data into `DataFrames` with the help of third-party libraries. Pandas Fundamentals * This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.* For some variety, we will leave our population data behind and employ some `Superhero` data. * The data comes from Marvel Wikia.* The file has the following variables:

| Variable | Definition |
| --- | --- |
| page_id | The unique identifier for that character's page within the wikia |
| name | The name of the character |
| urlslug | The unique url within the wikia that takes you to the character |
| ID | The identity status of the character (Secret Identity, Public Identity, No Dual Identity) |
| ALIGN | If the character is Good, Bad or Neutral |
| EYE | Eye color of the character |
| HAIR | Hair color of the character |
| SEX | Sex of the character (e.g. Male, Female, etc.) |
| GSM | If the character is a gender or sexual minority (e.g. homosexual characters, bisexual characters) |
| ALIVE | If the character is alive or deceased |
| APPEARANCES | The number of appearances of the character in comic books (as of Sep. 2, 2014; the number will become increasingly out of date as time goes on) |
| FIRST APPEARANCE | The month and year of the character's first appearance in a comic book, if available |
| YEAR | The year of the character's first appearance in a comic book, if available |
###Code
pd.set_option('max_columns', 12)
pd.set_option('display.notebook_repr_html', True)
marvelDF = pd.read_csv("data/marvel-wikia-data.csv", index_col='page_id')
marvelDF.head(5)
###Output
_____no_output_____
###Markdown
* Notice that we specified the `page_id` column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by trimming `name`: * First, import the regex module of Python.* Then, trim the `name` column with the regex.
###Code
import re
pattern = re.compile(r"([a-zA-Z]|-|\s|\.|')*([a-zA-Z])")
heroName = []
for name in marvelDF.name:
match = re.search(pattern, name)
if match:
heroName.append(match.group())
else:
heroName.append(name)
heroName
###Output
_____no_output_____
###Markdown
* This looks okay, let's copy '__marvelDF__' to '__marvelDF_newID__' and assign new indexes.
###Code
marvelDF_newID = marvelDF.copy()
marvelDF_newID.index = heroName
marvelDF_newID.head(5)
###Output
_____no_output_____
###Markdown
* Let's check the uniqueness of ID's:
###Code
marvelDF_newID.index.is_unique
###Output
_____no_output_____
###Markdown
* So, indices need not be unique. Our choice is not unique because some superheroes have different name variations.
###Code
pd.Series(marvelDF_newID.index).value_counts()
###Output
_____no_output_____
###Markdown
* The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:
###Code
marvelDF_newID.loc['Peter Parker']
###Output
_____no_output_____
###Markdown
* Let's give a truly unique index by not trimming the `name` column:
###Code
hero_id = marvelDF.name
marvelDF_newID = marvelDF.copy()
marvelDF_newID.index = hero_id
marvelDF_newID.head()
marvelDF_newID.index.is_unique
###Output
_____no_output_____
###Markdown
* We can create meaningful indices more easily using a hierarchical index.* For now, we will stick with the numeric IDs as our index for '__NewID__' DataFrame.
###Code
marvelDF_newID.index = range(16376)
marvelDF.index = marvelDF['name']
marvelDF_newID.head(5)
###Output
_____no_output_____
###Markdown
Manipulating indices * __Reindexing__ allows users to manipulate the data labels in a DataFrame. * It forces a DataFrame to conform to the new index and, optionally, to fill in missing data if requested.* A simple use of `reindex` is to reverse the order of the rows:
###Code
marvelDF_newID.reindex(marvelDF_newID.index[::-1]).head()
###Output
_____no_output_____
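###Markdown
`reindex` can also introduce labels that were not present before; the `fill_value` argument (or a `method` such as `'ffill'`) controls how the resulting gaps are filled. A short sketch with a few assumed extra labels:
###Code
# Indices 16376-16378 do not exist yet, so their rows are filled with 0
marvelDF_newID.reindex(range(16379), fill_value=0).tail(3)
###Output
_____no_output_____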
###Markdown
* Keep in mind that `reindex` does not work if we pass a non-unique index series. * We can remove rows or columns via the `drop` method:
###Code
marvelDF_newID.shape
marvelDF_dropped = marvelDF_newID.drop([16375, 16374])
print(marvelDF_newID.shape)
print(marvelDF_dropped.shape)
marvelDF_dropped = marvelDF_newID.drop(['EYE','HAIR'], axis=1)
print(marvelDF_newID.shape)
print(marvelDF_dropped.shape)
###Output
(16376, 12)
(16376, 10)
###Markdown
Indexing and Selection * Indexing works like indexing in NumPy arrays, except we can use the labels in the `Index` object to extract values in addition to arrays of integers.
###Code
heroAppearances = marvelDF.APPEARANCES
heroAppearances
###Output
_____no_output_____
###Markdown
* Let's start with Numpy style indexing:
###Code
heroAppearances[:3]
###Output
_____no_output_____
###Markdown
* Indexing by Label:
###Code
heroAppearances[['Spider-Man (Peter Parker)','Hulk (Robert Bruce Banner)']]
###Output
_____no_output_____
###Markdown
* We can also slice with data labels, since they have an intrinsic order within the Index:
###Code
heroAppearances['Spider-Man (Peter Parker)':'Matthew Murdock (Earth-616)']
###Output
_____no_output_____
###Markdown
* You can modify the sliced array; it is OK if you get a `SettingWithCopyWarning` here.
###Code
heroAppearances['Minister of Castile D\'or (Earth-616)':'Yologarch (Earth-616)'] = 0
heroAppearances
###Output
/Users/alpyuzbasioglu/anaconda2/lib/python2.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
###Markdown
* In a `DataFrame` we can slice along either or both axes:
###Code
marvelDF[['SEX','ALIGN']]
mask = marvelDF.APPEARANCES>50
marvelDF[mask]
###Output
_____no_output_____
###Markdown
* The indexing field `loc` allows us to select subsets of rows and columns in an intuitive way:
###Code
marvelDF.loc['Spider-Man (Peter Parker)', ['ID', 'EYE', 'HAIR']]
marvelDF.loc[['Spider-Man (Peter Parker)','Thor (Thor Odinson)'],['ID', 'EYE', 'HAIR']]
###Output
_____no_output_____
###Markdown
Operations * `DataFrame` and `Series` objects allow for several operations to take place either on a single object, or between two or more objects.* For example, we can perform arithmetic on the elements of two objects, such as change in population across years:
###Code
populationDF
pop2000 = populationDF['2000']
pop2017 = populationDF['2017']
pop2000DF = pd.Series(pop2000.values, index=populationDF.index)
pop2017DF = pd.Series(pop2017.values, index=populationDF.index)
popDiff = pop2017DF - pop2000DF
popDiff
###Output
_____no_output_____
###Markdown
* Let's assume our '__pop2000DF__' Series has no data for the index "Yalova":
###Code
pop2000DF["Yalova"] = np.nan
pop2000DF
popDiff = pop2017DF - pop2000DF
popDiff
###Output
_____no_output_____
###Markdown
* To access the non-null elements, we can use Pandas' `notnull` function.
###Code
popDiff[popDiff.notnull()]
###Output
_____no_output_____
###Markdown
* We can pass the `fill_value` argument to substitute a zero for the missing (`NaN`) values.
###Code
pop2017DF.subtract(pop2000DF, fill_value=0)
###Output
_____no_output_____
###Markdown
* We can also apply functions to each column or row of a `DataFrame`:
###Code
minPop = pop2017DF.values.min()
indexOfMinPop = pop2017DF.index[pop2017DF.values.argmin()]
print(indexOfMinPop + " -> " + str(minPop))
populationDF['2000'] = np.ceil(populationDF['2000'] / 10000) * 10000
populationDF
###Output
_____no_output_____
###Markdown
Sorting and Ranking * Pandas objects include methods for re-ordering data.
###Code
populationDF.sort_index(ascending=True).head()
populationDF.sort_index().head()
populationDF.sort_index(axis=1, ascending=False).head()
###Output
_____no_output_____
###Markdown
* We can also use `sort_values` to sort a `Series` by value, rather than by label. * For a `DataFrame`, we can sort according to the values of one or more columns using the `by` argument of `sort_values`:
###Code
populationDF[['2017','2001']].sort_values(by=['2017', '2001'],ascending=[False,True]).head(10)
###Output
_____no_output_____
###Markdown
* __Ranking__ does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series.
###Code
populationDF['2010'].rank(ascending=False)
populationDF[['2017','2001']].sort_values(by=['2017', '2001'],ascending=[False,True]).rank(ascending=False)
###Output
_____no_output_____
###Markdown
* Ties are assigned the mean value of the tied ranks, which may result in decimal values.
###Code
pd.Series([50,60,50]).rank()
###Output
_____no_output_____
###Markdown
* Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:
###Code
pd.Series([100,50,100]).rank(method='first')
###Output
_____no_output_____
###Markdown
* Calling the `DataFrame`'s `rank` method results in the ranks of all columns:
###Code
populationDF.rank(ascending=False)
###Output
_____no_output_____
###Markdown
Hierarchical indexing * Hierarchical indexing is an important feature of pandas enabling you to have multiple (two or more) index levels on an axis.* Somewhat abstractly, it provides a way for you to work with higher dimensional data in a lower dimensional form. * Let's create a Series with a list of lists or arrays as the index:
###Code
data = pd.Series(np.random.randn(10),
index=[['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'd', 'd'],
[1, 2, 3, 1, 2, 3, 1, 2, 2, 3]])
data
data.index
###Output
_____no_output_____
###Markdown
* With a hierarchically-indexed object, so-called partial indexing is possible, enabling you to concisely select subsets of the data:
###Code
data['b']
data['a':'c']
###Output
_____no_output_____
###Markdown
* Selection is even possible in some cases from an 'inner' level:
###Code
data[:, 1]
###Output
_____no_output_____
###Markdown
* Hierarchical indexing plays a critical role in reshaping data and group-based operations like forming a pivot table. For example, this data could be rearranged into a DataFrame using its unstack method:
###Code
dataDF = data.unstack()
dataDF
###Output
_____no_output_____
###Markdown
* The inverse operation of unstack is stack:
###Code
dataDF.stack()
###Output
_____no_output_____
###Markdown
Missing data * The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.* Missing data are represented in `Series` and `DataFrame` objects by the `NaN` floating point value. However, `None` is also treated as missing, since it is commonly used as such in other contexts (NumPy).
###Code
weirdSeries = pd.Series([np.nan, None, 'string', 1])
weirdSeries
weirdSeries.isnull()
###Output
_____no_output_____
###Markdown
* Missing values may be dropped or indexed out:
###Code
population2
population2.dropna()
population2[population2.notnull()]
dataDF
###Output
_____no_output_____
###Markdown
* By default, `dropna` drops entire rows in which one or more values are missing.
###Code
dataDF.dropna()
###Output
_____no_output_____
###Markdown
* This can be overridden by passing the `how='all'` argument, which only drops a row when every field is a missing value.
###Code
dataDF.dropna(how='all')
###Output
_____no_output_____
###Markdown
* This can be customized further by specifying how many values need to be present before a row is dropped via the `thresh` argument.
###Code
dataDF[2]['c'] = np.nan
dataDF
dataDF.dropna(thresh=2)
###Output
_____no_output_____
###Markdown
* If we want to drop missing values column-wise instead of row-wise, we use `axis=1`.
###Code
dataDF[1]['d'] = np.random.randn(1)
dataDF
dataDF.dropna(axis=1)
###Output
_____no_output_____
###Markdown
* Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. * We can do this programmatically in Pandas with the `fillna` method.
###Code
dataDF
dataDF.fillna(0)
dataDF.fillna({2: 1.5, 3:0.50})
###Output
_____no_output_____
###Markdown
* Notice that `fillna` by default returns a new object with the desired filling behavior, rather than changing the `Series` or `DataFrame` in place.
###Code
dataDF
###Output
_____no_output_____
###Markdown
* If you don't like this behaviour you can alter values in-place using `inplace=True`.
###Code
dataDF.fillna({2: 1.5, 3:0.50}, inplace=True)
dataDF
###Output
_____no_output_____
###Markdown
* Missing values can also be interpolated, using any one of a variety of methods:
###Code
dataDF[2]['c'] = np.nan
dataDF[3]['d'] = np.nan
dataDF
###Output
_____no_output_____
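###Markdown
* For instance, the `interpolate` method (linear by default) fills each gap from the neighboring values; a quick sketch:
###Code
dataDF.interpolate()
###Output
_____no_output_____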
###Markdown
* We can also propagate non-null values forward or backward.
###Code
dataDF.fillna(method='ffill')
dataDF.fillna(dataDF.mean())
###Output
_____no_output_____
###Markdown
Data summarization * We often wish to summarize data in `Series` or `DataFrame` objects, so that they can more easily be understood or compared with similar data.* The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.
###Code
marvelDF.sum()
###Output
_____no_output_____
###Markdown
* Clearly, `sum` is more meaningful for some columns than others (here, total appearances). * For methods like `mean`, for which application to string variables is not just meaningless but impossible, these columns are automatically excluded:
###Code
marvelDF.mean()
###Output
_____no_output_____
###Markdown
* An important difference between NumPy's functions and Pandas' methods is that NumPy provides separate functions for handling missing data (such as `nansum`), whereas Pandas uses the same methods throughout.
###Code
dataDF
dataDF.mean()
###Output
_____no_output_____
###Markdown
* Sometimes we may not want to ignore missing values, and allow the `nan` to propagate.
###Code
dataDF.mean(skipna=False)
###Output
_____no_output_____
###Markdown
* A useful summarization that gives a quick snapshot of multiple statistics for a `Series` or `DataFrame` is `describe`:
###Code
dataDF.describe()
###Output
_____no_output_____
###Markdown
* `describe` can detect non-numeric data and sometimes yield useful information about it.
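* For example, on a string column `describe` reports the count, the number of unique values, the most frequent value, and its frequency; a quick sketch on the `ALIGN` column:
###Code
marvelDF['ALIGN'].describe()
###Output
_____no_output_____
###Markdown
Writing Data to Files * Pandas can also export data to a variety of storage formats.* We will bring your attention to just a couple of these.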
###Code
myDF = populationDF['2000']
myDF.to_csv("data/roundedPopulation2000.csv")
###Output
_____no_output_____ |
BGflow_examples/Alanine_dipeptide/ala2_use_saved_model.ipynb | ###Markdown
Training a Boltzmann Generator for Alanine DipeptideThis notebook introduces basic concepts behind `bgflow`. It shows how to build and train a Boltzmann generator for a small peptide. The most important aspects it will cover are- retrieval of molecular training data- defining an internal coordinate transform- defining normalizing flow classes- combining different normalizing flows- training a Boltzmann generator via NLL and KLLThe main purpose of this tutorial is to introduce the implementation. The network design is optimized for educational purposes rather than good performance. In the conclusions, we will discuss some aspects of the generator that are not ideal and outline improvements. Some PreliminariesWe instruct jupyter to reload any imports automatically and define the device and datatype on which we want to perform the computations.
###Code
%load_ext autoreload
%autoreload 2
import torch
device = "cuda:3" if torch.cuda.is_available() else "cpu"
dtype = torch.float32
# a context tensor to send data to the right device and dtype via '.to(ctx)'
ctx = torch.zeros([], device=device, dtype=dtype)
###Output
_____no_output_____
###Markdown
Load the Data and the Molecular SystemMolecular trajectories and their corresponding potential energy functions are available from the `bgmol` repository.
###Code
# import os
# from bgmol.datasets import Ala2TSF300
# target_energy = Ala2TSF300().get_energy_model(n_workers=1)
import os
import mdtraj
#dataset = mdtraj.load('output.dcd', top='ala2_fromURL.pdb')
dataset = mdtraj.load('TSFtraj.dcd', top='ala2_fromURL.pdb')
#fname = "obc_xmlsystem_savedmodel"
#coordinates = dataset.xyz
#target_energy = Ala2TSF300().get_energy_model(n_workers=1)
print(dataset)
import numpy as np
rigid_block = np.array([6, 8, 9, 10, 14])
z_matrix = np.array([
[0, 1, 4, 6],
[1, 4, 6, 8],
[2, 1, 4, 0],
[3, 1, 4, 0],
[4, 6, 8, 14],
[5, 4, 6, 8],
[7, 6, 8, 4],
[11, 10, 8, 6],
[12, 10, 8, 11],
[13, 10, 8, 11],
[15, 14, 8, 16],
[16, 14, 8, 6],
[17, 16, 14, 15],
[18, 16, 14, 8],
[19, 18, 16, 14],
[20, 18, 16, 19],
[21, 18, 16, 19]
])
def dimensions(dataset):
return np.prod(dataset.xyz[0].shape)
dim = dimensions(dataset)
print(dim)
from simtk import openmm
with open('ala2_xml_system.txt') as f:
xml = f.read()
system = openmm.XmlSerializer.deserialize(xml)
from bgflow.distribution.energy.openmm import OpenMMBridge, OpenMMEnergy
from openmmtools import integrators
from simtk import unit
temperature = 300.0 * unit.kelvin
collision_rate = 1.0 / unit.picosecond
timestep = 4.0 * unit.femtosecond
integrator = integrators.LangevinIntegrator(temperature=temperature,collision_rate=collision_rate,timestep=timestep)
energy_bridge = OpenMMBridge(system, integrator, n_workers=1)
target_energy = OpenMMEnergy(int(dim), energy_bridge)
###Output
_____no_output_____
###Markdown
The energy model is a `bgflow.Energy` that wraps around OpenMM. The `n_workers` argument determines the number of openmm contexts that are used for energy evaluations. In notebooks, we set `n_workers=1` to avoid hiccups. In production, we can omit this argument so that `n_workers` is automatically set to the number of CPU cores.
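As a quick sanity check, we can evaluate the wrapped energy on a few frames of the raw trajectory (a minimal sketch; the returned energies are reduced, i.e. in units of $k_B T$):
###Code
# evaluate the OpenMM energy for the first three frames of the dataset
frames = torch.tensor(dataset.xyz[:3].reshape(3, -1)).to(ctx)
print(target_energy.energy(frames))
###Output
_____no_output_____
###Markdown
Visualize Data: Ramachandran Plot for the Backbone Angles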
###Code
# def compute_phi_psi(trajectory):
# phi_atoms = [4, 6, 8, 14]
# phi = md.compute_dihedrals(trajectory, indices=[phi_atoms])[:, 0]
# psi_atoms = [6, 8, 14, 16]
# psi = md.compute_dihedrals(trajectory, indices=[psi_atoms])[:, 0]
# return phi, psi
import numpy as np
import mdtraj as md
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
# def plot_phi_psi(ax, trajectory):
# if not isinstance(trajectory, md.Trajectory):
# trajectory = md.Trajectory(
# xyz=trajectory.cpu().detach().numpy().reshape(-1, 22, 3),
# topology=md.load('ala2_fromURL.pdb').topology
# )
# phi, psi = compute_phi_psi(trajectory)
# ax.hist2d(phi, psi, 50, norm=LogNorm())
# ax.set_xlim(-np.pi, np.pi)
# ax.set_ylim(-np.pi, np.pi)
# ax.set_xlabel("$\phi$")
# _ = ax.set_ylabel("$\psi$")
# return trajectory
import numpy as np
n_train = len(dataset)//2
n_test = len(dataset) - n_train
permutation = np.random.permutation(n_train)
all_data = dataset.xyz.reshape(-1, dimensions(dataset))
training_data = torch.tensor(all_data[permutation]).to(ctx)
test_data = torch.tensor(all_data[permutation + n_train]).to(ctx)
#print(training_data.shape)
###Output
torch.Size([143147, 66])
###Markdown
Define the Internal Coordinate TransformRather than generating all-Cartesian coordinates, we use a mixed internal coordinate transform.The five central alanine atoms will serve as a Cartesian "anchor", from which all other atoms are placed with respect to internal coordinates (IC) defined through a z-matrix. We have deposited a valid `z_matrix` and the corresponding `rigid_block` in the `dataset.system` from `bgmol`.
###Code
import bgflow as bg
# throw away 6 degrees of freedom (rotation and translation)
dim_cartesian = len(rigid_block) * 3 - 6
print(dim_cartesian)
#dim_cartesian = len(system.rigid_block) * 3
dim_bonds = len(z_matrix)
print(dim_bonds)
dim_angles = dim_bonds
dim_torsions = dim_bonds
coordinate_transform = bg.MixedCoordinateTransformation(
data=training_data,
z_matrix=z_matrix,
fixed_atoms=rigid_block,
#keepdims=None,
keepdims=dim_cartesian,
normalize_angles=True,
).to(ctx)
###Output
_____no_output_____
###Markdown
For demonstration, we transform the first 3 samples from the training data set into internal coordinates as follows:
###Code
# bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(training_data[:3])
# bonds.shape, angles.shape, torsions.shape, cartesian.shape, dlogp.shape
# #print(bonds)
###Output
_____no_output_____
###Markdown
Prior DistributionThe next step is to define a prior distribution that we can easily sample from. The normalizing flow will be trained to transform such latent samples into molecular coordinates. Here, we just take a normal distribution, which is a rather naive choice for reasons that will be discussed in other notebooks.
###Code
dim_ics = dim_bonds + dim_angles + dim_torsions + dim_cartesian
mean = torch.zeros(dim_ics).to(ctx)
# passing the mean explicitly to create samples on the correct device
prior = bg.NormalDistribution(dim_ics, mean=mean)
###Output
_____no_output_____
###Markdown
Normalizing FlowNext, we set up the normalizing flow by stacking together different neural networks. For now, we will do this in a rather naive way, not distinguishing between bonds, angles, and torsions. Therefore, we will first define a flow that splits the output from the prior into the different IC terms. Split Layer
###Code
split_into_ics_flow = bg.SplitFlow(dim_bonds, dim_angles, dim_torsions, dim_cartesian)
# test
#print(prior.sample(3))
# ics = split_into_ics_flow(prior.sample(1))
# #print(_ics)
# coordinate_transform.forward(*ics, inverse=True)[0].shape
###Output
_____no_output_____
###Markdown
Coupling LayersNext, we will set up so-called RealNVP coupling layers, which split the input into two channels and then learn affine transformations of channel 1 conditioned on channel 2. Here we will do the split naively between the first and second half of the degrees of freedom.
###Code
class RealNVP(bg.SequentialFlow):
def __init__(self, dim, hidden):
self.dim = dim
self.hidden = hidden
super().__init__(self._create_layers())
def _create_layers(self):
dim_channel1 = self.dim//2
dim_channel2 = self.dim - dim_channel1
split_into_2 = bg.SplitFlow(dim_channel1, dim_channel2)
layers = [
# -- split
split_into_2,
# --transform
self._coupling_block(dim_channel1, dim_channel2),
bg.SwapFlow(),
self._coupling_block(dim_channel2, dim_channel1),
# -- merge
bg.InverseFlow(split_into_2)
]
return layers
def _dense_net(self, dim1, dim2):
return bg.DenseNet(
[dim1, *self.hidden, dim2],
activation=torch.nn.ReLU()
)
def _coupling_block(self, dim1, dim2):
return bg.CouplingFlow(bg.AffineTransformer(
shift_transformation=self._dense_net(dim1, dim2),
scale_transformation=self._dense_net(dim1, dim2)
))
#RealNVP(dim_ics, hidden=[128]).to(ctx).forward(prior.sample(3))[0].shape
###Output
_____no_output_____
###Markdown
Boltzmann GeneratorFinally, we define the Boltzmann generator.It will sample molecular conformations by 1. sampling in latent space from the normal prior distribution,2. transforming the samples into a more complicated distribution through a number of RealNVP blocks (the parameters of these blocks will be subject to optimization),3. splitting the output of the network into blocks that define the internal coordinates, and4. transforming the internal coordinates into Cartesian coordinates through the inverse IC transform.
###Code
n_realnvp_blocks = 5
layers = []
for i in range(n_realnvp_blocks):
layers.append(RealNVP(dim_ics, hidden=[128, 128, 128]))
layers.append(split_into_ics_flow)
layers.append(bg.InverseFlow(coordinate_transform))
flow = bg.SequentialFlow(layers).to(ctx)
# test
#flow.forward(prior.sample(3))[0].shape
flow.load_state_dict(torch.load('modelTSFtraj_xmlsystem_20000KLL.pt'))
# print number of trainable parameters
"#Parameters:", np.sum([np.prod(p.size()) for p in flow.parameters()])
generator = bg.BoltzmannGenerator(
flow=flow,
prior=prior,
target=target_energy
)
def plot_energies(ax, samples, target_energy, test_data):
sample_energies = target_energy.energy(samples).cpu().detach().numpy()
md_energies = target_energy.energy(test_data[:len(samples)]).cpu().detach().numpy()
cut = max(np.percentile(sample_energies, 80), 20)
ax.set_xlabel("Energy [$k_B T$]")
# y-axis on the right
ax2 = plt.twinx(ax)
ax.get_yaxis().set_visible(False)
ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label="BG")
ax2.hist(md_energies, range=(-50, cut), bins=40, density=False, label="MD")
ax2.set_ylabel(f"Count [#Samples / {len(samples)}]")
ax2.legend()
def plot_energy_onlyMD(ax, target_energy, test_data):
md_energies = target_energy.energy(test_data[:1000]).cpu().detach().numpy()
ax.set_xlabel("Energy [$k_B T$]")
# y-axis on the right
ax2 = plt.twinx(ax)
ax.get_yaxis().set_visible(False)
#ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label="BG")
ax2.hist(md_energies, bins=40, density=False, label="MD")
ax2.set_ylabel(f"Count [#Samples / 1000]")
ax2.legend()
n_samples = 10000
samples = generator.sample(n_samples)
print(samples.shape)
fig, axes = plt.subplots(1, 2, figsize=(6,3))
fig.tight_layout()
samplestrajectory = plot_phi_psi(axes[0], samples)
plot_energies(axes[1], samples, target_energy, test_data)
#plt.savefig(f"varysnapshots/{fname}.png", bbox_inches = 'tight')
#samplestrajectory.save("mytraj_full_samples.dcd")
#del samples
###Output
torch.Size([10000, 66])
###Markdown
bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(samples)
print(bonds.shape)
print('1:', bonds[0])
CHbond_indices = [0, 2, 3, 7, 8, 9, 14, 15, 16]
bonds_new = bonds.clone().detach()
bonds_new[:, CHbond_indices] = 0.109
print('2:', bonds_new[0:3])
samples_corrected = coordinate_transform.forward(bonds_new, angles, torsions, cartesian, inverse=True)
print(samples_corrected[0].shape)
###Code
samplestrajectory = mdtraj.Trajectory(
xyz=samples[0].cpu().detach().numpy().reshape(-1, 22, 3),
topology=mdtraj.load('ala2_fromURL.pdb').topology
)
#samplestrajectory.save('mysamples_traj_correctedonce.dcd')
import nglview as nv
#samplestrajectory.save("Samplestraj.pdb")
#md.save(samplestrajectory, "obcstride10Samplestraj.dcd")
widget = nv.show_mdtraj(samplestrajectory)
widget
###Output
_____no_output_____ |
Practicas/.ipynb_checkpoints/Practica 5 - Modelado de Robots-checkpoint.ipynb | ###Markdown
Robot Modeling Recalling the previous practice session, the differential equation that characterizes a mass-spring-damper system is:$$m \ddot{x} + c \dot{x} + k x = F$$and we reviewed 3 ways of obtaining the behavior of that system; however, we are interested in the behavior of a more complex system, a robot. We will start with a simple pendulum, which has the following equation of motion:$$m l^2 \ddot{q} + m g l \cos{q} = \tau$$As we can see, they are similar in the sense that they involve a single variable; however, in the second equation our variable appears inside a nonlinear function ($\cos{q}$), so our differential equation is nonlinear, and therefore we can _not_ use the transfer-function formalism to solve it; we have to use the ```odeint``` function.Since it is of second order, we have to split our differential equation into two simpler ones, so we will use the following trick:$$\frac{d}{dt} q = \dot{q}$$then we have two differential equations, so we can solve for two unknowns, $q$ and $\dot{q}$.Using our knowledge of linear algebra, we can arrange our system of equations in a matrix, so that where before we had:$$\begin{align}\frac{d}{dt} q &= \dot{q} \\\frac{d}{dt} \dot{q} &= \ddot{q} = \frac{\tau - m g l \cos{q}}{ml^2}\end{align}$$we can see that our system of equations now has a larger state than before; the differential equation that was nonlinear and of second order can be written as nonlinear and of first order, as long as the state is larger.Let us define what we mean by state:$$x =\begin{pmatrix}q \\\dot{q}\end{pmatrix}$$with this definition of state, we can write the system of equations above as:$$\frac{d}{dt} x = \dot{x} = \frac{d}{dt}\begin{pmatrix}q \\\dot{q}\end{pmatrix} =\begin{pmatrix}\dot{q} \\\frac{\tau - m g l \cos{q}}{ml^2}\end{pmatrix}$$or $\dot{x} = f(x)$, where $f(x)$ is a vector function, or rather a vector of functions:$$f(x) =\begin{pmatrix}\dot{q} \\\frac{\tau - m g l \cos{q}}{ml^2}\end{pmatrix}$$So we are now ready to simulate this mechanical system with the help of ```odeint()```; let's start by importing the necessary libraries:
###Code
from scipy.integrate import odeint
from numpy import linspace
###Output
_____no_output_____
###Markdown
and defining a function that returns an array with the values of $f(x)$:
###Code
def f(x, t):
from numpy import cos
    q, q̇ = x
    τ = 0
    m = 1
    g = 9.81
    l = 1
    return [q̇, (τ - m*g*l*cos(q))/(m*l**2)]
###Output
_____no_output_____
###Markdown
We will simulate from time $0$ to $10$; the initial conditions of the pendulum are $q=0$ and $\dot{q} = 0$.
###Code
ts = linspace(0, 10, 100)
x0 = [0, 0]
###Output
_____no_output_____
###Markdown
We use the ```odeint``` function to simulate the behavior of the pendulum, giving it the function we programmed with the dynamics $f(x)$, and we extract the values of $q$ and $\dot{q}$ that ```odeint``` returned wrapped in the state $x$:
###Code
xs = odeint(func = f, y0 = x0, t = ts)
qs, q̇s = list(zip(*xs.tolist()))
###Output
_____no_output_____
###Markdown
At this point we already have our simulation data; all that remains is to plot it in order to interpret the results:
###Code
%matplotlib inline
from matplotlib.pyplot import style, plot, figure
style.use("ggplot")
fig1 = figure(figsize = (8, 8))
ax1 = fig1.gca()
ax1.plot(xs);
fig2 = figure(figsize = (8, 8))
ax2 = fig2.gca()
ax2.plot(qs)
ax2.plot(q̇s);
###Output
_____no_output_____
###Markdown
But trajectory plots are boring; remember that we can make an animation with matplotlib:
###Code
from matplotlib import animation
from numpy import sin, cos, arange
# Define the figure size
fig = figure(figsize=(8, 8))
# Define a single plot in the figure and set the limits of the x and y axes
axi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.5, 1.5), ylim=(-2, 1))
# Use line plots for the pendulum link
linea, = axi.plot([], [], "-o", lw=2, color='gray')
def init():
    # This function runs only once and initializes the system
linea.set_data([], [])
return linea
def animate(i):
    # This function runs for every frame of the GIF
    # Get the x and y coordinates of the pendulum link
xs, ys = [[0, cos(qs[i])], [0, sin(qs[i])]]
linea.set_data(xs, ys)
return linea
# Create the animation, passing the figure defined at the beginning, the function to
# run for each frame, the number of frames to produce, the period of each frame,
# and the initialization function
ani = animation.FuncAnimation(fig, animate, arange(1, len(qs)), interval=25,
blit=True, init_func=init)
# Save the GIF to the specified file
ani.save('./imagenes/pendulo-simple.gif', writer='imagemagick');
###Output
_____no_output_____ |
nlp-labs/Day_09/QnA_Model/QnA_Handson.ipynb | ###Markdown
> **Copyright (c) 2020 Skymind Holdings Berhad**> **Copyright (c) 2021 Skymind Education Group Sdn. Bhd.**Licensed under the Apache License, Version 2.0 (the \"License\");you may not use this file except in compliance with the License.You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0/Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an \"AS IS\" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.**SPDX-License-Identifier: Apache-2.0** INSTRUCTION: Follow the steps in the commented line for each section and run the code.
###Code
"""
install torch(PyTorch) and transformers
to install them type in your terminal:
pip install torch
pip install transformers
"""
# import the necessary library
from transformers import pipeline
# write your context (where the model looks for the answer to the question)
context = """
You can add your own context here. Try to write something or copy from other source.
"""
# write your own question
question = ""
# initialize your model
"""
This is a pretrained model that we can get from Hugging Face.
There are more models that can be found there: https://huggingface.co/
Go to that web page, pick a model and a tokenizer, and pass them into the parameters below.
"""
# uncomment this code below
# question_answering = pipeline('question-answering', model= , tokenizer=)
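# for example (one commonly used QA checkpoint; any compatible model name
# from https://huggingface.co/ should work here):
# question_answering = pipeline('question-answering',
#                               model='distilbert-base-cased-distilled-squad',
#                               tokenizer='distilbert-base-cased-distilled-squad')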
# test the model (uncomment the code below)
# result = question_answering(question=question, context=context)
# print("Answer:", result['answer'])
# print("Score:", result['score'])
###Output
_____no_output_____ |
1. Extending the dataset with data from other sources/X - Deprecated - Connecting the regions through BAMS information.ipynb | ###Markdown
Deprecated - Connecting brain regions through BAMS informationThis script connects brain regions through BAMS connectivity information.However, at this level the connectivity information has no reference to the original source, and that is not OK. Therefore, do **not** use this.
###Code
### DEPRECATED
import pandas as pd
import re
import itertools
from difflib import SequenceMatcher
root = "Data/csvs/basal_ganglia/regions"
sim_csv_loc = "/region_similarity.csv"
def similar(a, b):
return SequenceMatcher(None, a, b).ratio()
## Prepare regions and regions_other csvs
df_all_regions = pd.read_csv(root + "/all_regions.csv", dtype="object")
df = pd.DataFrame(columns = ["ID1", "Region_name_1", "ID2", "Region_name_2", "Sim"])
# Put region names and ID into tuple list
subset = df_all_regions[["ID", "Region_name"]]
region_name_tuples = [tuple(x) for x in subset.to_numpy()]
# Find all combinations of region_names and look at similarity in name
for a, b in itertools.combinations(region_name_tuples, 2):
id1, reg1 = a
id2, reg2 = b
sim_score = similar(reg1, reg2)
if(sim_score > 0.7):
a_row = pd.Series([id1, reg1, id2, reg2, sim_score], index = ["ID1", "Region_name_1", "ID2", "Region_name_2", "Sim"])
df = df.append(a_row, ignore_index=True)
# Store similarities
df_sorted = df.sort_values('Sim')
df_sorted.to_csv(root + sim_csv_loc, encoding='utf-8')
print("Similarities stored in", sim_csv_loc)
def get_count_of_type(label, session):
q = "MATCH (n:%s) RETURN count(n)" % label
res = session.run(q)
print("Added", res.value()[0], "nodes of type", label)
def get_count_of_relationship(label, session):
q = "MATCH ()-[r:%s]-() RETURN count(*)" %label
res = session.run(q)
print("Added", res.value()[0], "relationships of type", label)
def get_csv_path(csv_file):
path_all_csv = os.path.realpath("Data/csvs/basal_ganglia/regions")
return os.path.join(path_all_csv, csv_file).replace("\\","/")
## Then find the regions that correspond to each other and store them in a new CSV file
# Add relation to all areas that define positions
positioning = ["caudal", "rostral", "ventral", "dorsal"]
area_describing = ["internal", "compact", "core", "shell"]
df_sims = pd.read_csv(root + sim_csv_loc, converters = {"Sims": float})
# All pairs with a similarity score above 0.95 are considered the same
# Also the same: Substantia innominata, basal",103,"Substantia innominata, basal part" 0.91
df_equals = df_sims.loc[df_sims['Sim'] > 0.95]
df_sorted.to_csv(root + "/regions_equal.csv", encoding='utf-8')
from neo4j import GraphDatabase, basic_auth
from dotenv import load_dotenv
import os
load_dotenv()
neo4jUser = os.getenv("NEO4J_USER")
neo4jPwd = os.getenv("NEO4J_PASSWORD")
driver = GraphDatabase.driver("bolt://localhost:7687",auth=basic_auth(neo4jUser, neo4jPwd))
# Relationship EQUALS between equal BrainRegion nodes
csv_file_path = "file:///%s" % get_csv_path("regions_equal.csv")
query="""
LOAD CSV WITH HEADERS FROM "%s" AS row
MATCH (a:BrainRegion { id: row.ID1})
MATCH (c:BrainRegion { id: row.ID2 })
MERGE (a)-[:EQUALS]->(c)
""" % csv_file_path
with driver.session() as session:
session.run(query)
get_count_of_relationship("EQUALS", session)
## TODO add rel for belongs-to/part of
###Output
Added 6124 relationships of type EQUALS
|
notebooks/ML.ipynb | ###Markdown
Machine Learning OverviewMachine learning is the ability of computers to take a dataset of objects and learn patterns about them. This dataset is structured as a table, where each row is a vector representing some object by encoding their properties as the values of the vector. The columns represent **features** - properties that all the objects share.There are, broadly speaking, two kinds of machine learning. **Supervised learning** has an extra column at the end of the dataset, and the program learns to predict the value of this based on the input features for some new object. If the output value is continuous, it is **regression**, otherwise it is **classification**. **Unsupervised learning** seeks to find patterns within the data by, for example, clustering. Supervised LearningOne of the most critical concepts in supervised learning is the dataset. This represents the knowledge about the set of objects in question that you wish the machine to learn. It is essentially a table where the rows represent objects, and the columns represent the properties. 'Training' is essentially the creation of an object called a model, which can take a row missing the last column, and predict what its value will be by examining the data in the dataset. For example...
###Code
import pandas as pd
iris_dataset = pd.read_csv("../data/iris.csv")
iris_dataset.head()
###Output
_____no_output_____
###Markdown
Here a dataset has been loaded from CSV into a pandas dataframe. Each row represents a flower, on which four measurements have been taken, and each flower belongs to one of three classes. A supervised learning model would take this dataset of 150 flowers and train such that any other flower for which the relevant measurements were known could have its class predicted. This would obviously be a classification problem, not regression. A very simple model would take just two features and map them to one of two classes. The dataset can be reduced to this form as follows:
###Code
simple_iris = iris_dataset.iloc[0:100, [0, 2, 4]]
simple_iris.head()
simple_iris.tail()
###Output
_____no_output_____
###Markdown
Because this is just two dimensions, it can be easily visualised as a scatter plot.
###Code
import sys
sys.path.append("..")
import numerus.learning as ml
ml.plot_dataset(simple_iris)
###Output
_____no_output_____
###Markdown
The data can be seen to be **linearly separable** - there is a line that can be drawn between them that would separate them perfectly. One of the simplest classifiers for supervised learning is the perceptron. Perceptrons have a weights vector which they dot with an input vector to get some level of activation. If the activation is above some threshold, one class is predicted - otherwise the other is predicted. Training a perceptron means giving the model training inputs until it has values for the weights and threshold that effectively separate the classes.
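A rough sketch of the perceptron update rule (an illustration only; `ml.Perceptron` below is the actual implementation): for each misclassified example, the weights are nudged by the learning rate times the error times the input.
###Code
import numpy as np

def perceptron_fit(X, y, lr=0.1, epochs=10):
    """Toy perceptron trainer; y holds 0/1 class labels."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            y_hat = 1.0 if xi @ w + b > 0 else 0.0
            # no update happens when the prediction is already correct
            w += lr * (yi - y_hat) * xi
            b += lr * (yi - y_hat)
    return w, b
###Output
_____no_output_____
###Markdown
The data must be split into training and test data, and then a perceptron created from the training data.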
###Code
train_simple_iris, test_simple_iris = ml.split_data(simple_iris)
ml.plot_dataset(train_simple_iris, title="Training Data")
perceptron = ml.Perceptron(train_simple_iris)
print(perceptron)
###Output
_____no_output_____
###Markdown
This explores different ways to analyze the quality of PSM quantifications
###Code
from itertools import chain
bad_data = [
('ELcSAAITMSDNTAANLLLTTIGGPk', 8846),
('FVESVDVAVNLGIDAR',7466 ),
('ELcSAAITMSDNTAANLLLTTIGGPK', 9209),
('FVESVDVAVNLGIDAR', 9213),
('FVESVDVAVNLGIDAR', 9426),
('AVTLYLGAVAATVR', 6660),
('AVTLYLGAVAATVR', 8958),
('IVVIYTTGSQATMDER', 4505),
('VGYIELDLNSGk', 5624),
('LLTGELLTLASR', 6942),
('FVESVDVAVNLGIDAr', 9184),
('ELcSAAITMSDNTAANLLLTTIGGPk', 9458),
('VGYIELDLNSGk', 5238),
('IVVIYTTGSQATMDERNR', 4024),
('AVTLYLGAVAATVR', 9652),
('ELcSAAITMSDNTAANLLLTTIGGPk', 8883),
('IVVIYTTGSQATMDERNR', 4005),
('FVESVDVAVNLGIDAR', 9950),
('AQHSALDDIPR', 2510),
('FVESVDVAVNLGIDAR', 9980),
('VGYIELDLNSGk', 9546),
('IVVIYTTGSQATMDER', 9933),
('HFESTPDTPEIIATIHGEGYR', 4488),
('YYLGNADEIAAK', 3703),
('FVESVDVAVNLGIDAR', 6879),
('RDDSILLAQHTR', 1849),
('EQGYALDSEENEQGVR', 2536),
('VLLcGAVLSR', 4541),
('LGYPITDDLDIYTr', 5790),
('VGYIELDLNSGk', 8965),
('FVESVDVAVNLGIDAR', 7796),
]
good_data = [
('VHIINLEK', 2373),
('HITDRDVR', 863),
('GATVLPHGTGr', 1244),
('GATVLPHGTGR', 1238),
('EQGLHFYAAGHHATER', 1570),
('VPLHTLr', 1371),
('IHVAVAQEVPGTGVDTPEDLER', 4157),
('cIFDNISLTVPR', 6174),
('HLTDGmTVR', 974),
('AGVHFGHQTR', 1002),
('AHHYPSELSGGQQQR', 1142),
('HYGALQGLNk', 1738),
('HITGLHYNPITNTFk', 3590),
('IGLLEHANR', 2008),
('ALEINSQSLDNNAAFIR', 5217),
('RIYGVLER', 2188),
('FQDVGSFDYGR', 3734),
('AVQNAMR', 995),
('IGVGGTITYPR', 3358),
('GmGESNPVTGNTcDNVk', 1558),
('MVEEDPAHPr', 1177),
('AIENQAYVAGcNr', 1914),
('FIAQQLGVSR', 3332),
('MPEDLLTr', 3424),
('mVEEDPAHPr', 1016),
('GFSVNFER', 3790),
('TPVGNTAAIcIYPR', 4031),
('IDAILVDR', 3375),
('LVAVGNTFVYPIAGYSk', 5966),
]
peptides = ' '.join(i[0] for i in chain(bad_data, good_data))
scans = ' '.join(str(i[1]) for i in chain(bad_data, good_data))
out = 'ml_train'
# %%bash -s "$peptides" "$scans" "$out"
# pyQuant --search-file "/home/chris/gdrive/Dropbox/Manuscripts/SILAC Fix/EColi/PD/Chris_Ecoli_1-2-4-(01).msf" \
# --scan-file "/home/chris/gdrive/Dropbox/Manuscripts/SILAC Fix/EColi/Chris_Ecoli_1-2-4.mzML" \
# --peptide $1 --scan $2 \
# -o $3 \
# -p 9
# %%bash -s "$peptides" "$scans" "$out"
# pyQuant --search-file "/home/chris/gdrive/Dropbox/Manuscripts/SILAC Fix/EColi/PD/Chris_Ecoli_1-2-4-(01).msf" \
# --scan-file "/home/chris/gdrive/Dropbox/Manuscripts/SILAC Fix/EColi/Chris_Ecoli_1-2-4.mzML" \
# -o $3 \
# -p 9
%matplotlib inline
from tpot import TPOT
from sklearn.cross_validation import train_test_split
import numpy as np
from scipy.special import logit
import pandas as pd
pd.options.display.max_columns = None
from patsy import dmatrix
dat = pd.read_table(out)
dat = dat[dat['Peptide'].str.count('R')+dat['Peptide'].str.count('K')+dat['Peptide'].str.count('k')+dat['Peptide'].str.count('r') == 1]
dat['Class'] = None
dat.loc[dat['Peptide'].str.count('R')+dat['Peptide'].str.count('r') == 1, 'Class'] = 'R'
dat.loc[dat['Peptide'].str.count('K')+dat['Peptide'].str.count('k') == 1, 'Class'] = 'K'
dat.set_index(['Peptide', 'MS2 Spectrum ID'], inplace=True)
dat.drop(['Modifications', 'Raw File', 'Accession', 'MS1 Spectrum ID', 'Charge', 'Medium Calibrated Precursor', 'Medium Precursor', 'Heavy/Medium', 'Heavy Calibrated Precursor', 'Heavy Precursor', 'Light Calibrated Precursor', 'Light Precursor', 'Retention Time', 'Heavy/Light Confidence', 'Medium/Heavy', 'Medium/Heavy Confidence', 'Medium/Light Confidence', 'Light/Medium Confidence', 'Heavy/Medium Confidence', 'Light/Heavy Confidence'], inplace=True, axis=1)
# Arg H/L -> -1.86
# Arg M/L = -1
# Lys H/L -> 1.89
# Lys M/L = 0.72
nds = []
for numerator, denominator in zip(['Heavy', 'Medium'], ['Light', 'Light']):
ratio = '{}/{}'.format(numerator, denominator)
cols=['Isotopes Found', 'Intensity', 'RT Width', 'Mean Offset', 'Residual', 'R^2', 'SNR']
nd = pd.DataFrame([], columns=[
'Label1 Isotopes Found',
'Label1 Intensity',
'Label1 RT Width',
'Label1 Mean Offset',
'Label1 Residual',
'Label1 R^2',
'Label1 SNR',
'Label2 Isotopes Found',
'Label2 Intensity',
'Label2 RT Width',
'Label2 Mean Offset',
'Label2 Residual',
'Label2 R^2',
'Label2 SNR',
'Deviation',
'Class',
])
median, std = np.log2(dat[dat['Class']=='R'][ratio]).median(), np.log2(dat[dat['Class']=='R'][ratio]).std()
expected = median
nd['Deviation'] = np.log2(dat[dat['Class']=='R'][ratio])-expected
nd['Class'] = np.abs(np.log2(dat[dat['Class']=='R'][ratio])-median).apply(lambda x: 1 if x < std else 0)
for label, new_label in zip([numerator, denominator], ['Label1', 'Label2']):
for col in cols:
nd['{} {}'.format(new_label, col)] = dat['{} {}'.format(label, col)]
nd['Label1 Intensity'] = np.log2(nd['Label1 Intensity'])
nd['Label2 Intensity'] = np.log2(nd['Label2 Intensity'])
nd['Label1 R^2'] = logit(nd['Label1 R^2'])
nd['Label2 R^2'] = logit(nd['Label2 R^2'])
nds.append(nd)
for numerator, denominator in zip(['Heavy', 'Medium'], ['Light', 'Light']):
ratio = '{}/{}'.format(numerator, denominator)
cols=['Isotopes Found', 'Intensity', 'RT Width', 'Mean Offset', 'Residual', 'R^2', 'SNR']
nd = pd.DataFrame([], columns=[
'Label1 Isotopes Found',
'Label1 Intensity',
'Label1 RT Width',
'Label1 Mean Offset',
'Label1 Residual',
'Label1 R^2',
'Label1 SNR',
'Label2 Isotopes Found',
'Label2 Intensity',
'Label2 RT Width',
'Label2 Mean Offset',
'Label2 Residual',
'Label2 R^2',
'Label2 SNR',
'Deviation',
'Class'
])
median, std = np.log2(dat[dat['Class']=='K'][ratio]).median(), np.log2(dat[dat['Class']=='K'][ratio]).std()
expected = median
nd['Deviation'] = np.log2(dat[dat['Class']=='K'][ratio])-expected
nd['Class'] = np.abs(np.log2(dat[dat['Class']=='K'][ratio])-median).apply(lambda x: 1 if x < std else 0)
for label, new_label in zip([numerator, denominator], ['Label1', 'Label2']):
for col in cols:
nd['{} {}'.format(new_label, col)] = dat['{} {}'.format(label, col)]
nd['Label1 Intensity'] = np.log2(nd['Label1 Intensity'])
nd['Label2 Intensity'] = np.log2(nd['Label2 Intensity'])
nd['Label1 R^2'] = logit(nd['Label1 R^2'])
nd['Label2 R^2'] = logit(nd['Label2 R^2'])
nds.append(nd)
pd.concat(nds)
df = pd.concat(nds)
df = df.replace([np.inf,-np.inf], np.nan).dropna()
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
X = preprocessing.scale(df.drop('Deviation', axis=1).drop('Class', axis=1).values)
y = df.loc[:, ['Deviation', 'Class']].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
y_test_reg = y_test[:, 0]
y_test_class = y_test[:, 1]
y_train_reg = y_train[:, 0]
y_train_class = y_train[:, 1]
from sklearn.svm import SVC as Classifier
clf = Classifier()
clf = clf.fit(X_train, y_train_class)
from sklearn.metrics import accuracy_score
print accuracy_score(y_test_class, clf.predict(X_test))
from sklearn.qda import QDA as Classifier
clf = Classifier()
clf = clf.fit(X_train, y_train_class)
from sklearn.metrics import accuracy_score
print accuracy_score(y_test_class, clf.predict(X_test))
from sklearn.gaussian_process import GaussianProcessClassifier as Classifier
clf = Classifier()
clf = clf.fit(X_train, y_train_class)
from sklearn.metrics import accuracy_score
print accuracy_score(y_test_class, clf.predict(X_test))
from sklearn.neural_network import MLPClassifier as Classifier
clf = Classifier()
clf = clf.fit(X_train, y_train_class)
from sklearn.metrics import accuracy_score
print accuracy_score(y_test_class, clf.predict(X_test))
import pickle
pickle.dump(clf, open('/home/chris/Devel/pyquant/pyquant/static/new_classifier2.pickle', 'wb'))
from sklearn.ensemble import AdaBoostRegressor as Regressor
clf = Regressor()
clf = clf.fit(X_train, y_train_reg)
from sklearn import metrics
print metrics.median_absolute_error(y_test_reg, clf.predict(X_test))
from matplotlib import pyplot as plt
plt.scatter(y_test_reg, clf.predict(X_test))
plt.plot([-6, 6], [-6, 6], 'r-')
from sklearn.neural_network import MLPRegressor
reg = MLPRegressor()
clf = GridSearchCV(reg, {})
clf.fit(X_train, y_train_reg)
print metrics.median_absolute_error(y_test_reg, clf.predict(X_test))
plt.scatter(y_test_reg, clf.predict(X_test))
plt.plot([-6, 6], [-6, 6], 'r-')
from sklearn.ensemble import GradientBoostingRegressor
reg = GradientBoostingRegressor()
parameters = {
'loss': ['ls', 'lad'],
'learning_rate': [0.01, 0.1, 0.5],
'n_estimators': [50, 100, 200],
}
clf = GridSearchCV(reg, parameters)
clf.fit(X_train, y_train_reg)
from sklearn.metrics import r2_score
r2_score(y_test_reg, clf.predict(X_test))
plt.scatter(y_test_reg, clf.predict(X_test))
plt.plot([-6, 6], [-6, 6], 'r-')
from sklearn.tree import DecisionTreeRegressor as Regressor
clf = Regressor()
clf.fit(X_train, y_train_reg)
print r2_score(y_test_reg, clf.predict(X_test))
plt.scatter(y_test_reg, clf.predict(X_test))
plt.plot([-6, 6], [-6, 6], 'r-')
np.log2(dat[dat['Class']=='R']['Heavy/Light']).plot(kind='hist')
dat.columns.tolist()
from tpot import TPOT
from sklearn.cross_validation import train_test_split
import numpy as np
import pandas as pd
from patsy import dmatrix
dat = pd.read_table(out)
dat.set_index(['Peptide', 'MS2 Spectrum ID'], inplace=True)
dat.drop(['Modifications', 'Raw File', 'Accession', 'MS1 Spectrum ID', 'Charge', 'Retention Time', 'Heavy/Light', 'Heavy/Light Confidence', 'Medium/Light', 'Medium/Heavy', 'Medium/Heavy Confidence', 'Medium/Light', 'Medium/Light Confidence', 'Light/Medium', 'Light/Medium Confidence', 'Heavy/Medium', 'Heavy/Medium Confidence', 'Light/Heavy Confidence', 'Light/Heavy'], inplace=True, axis=1)
for i in ['Heavy', 'Medium', 'Light']:
for j in ['Precursor', 'Calibrated Precursor']:
dat.drop(i + ' ' +j, inplace=True, axis=1)
to_drop = []
for j in dat.columns:
if j.startswith('Heavy'):
to_drop.append(j)
dat.drop(to_drop, inplace=True, axis=1)
dat['Class'] = None
for i in bad_data:
dat.loc[i, 'Class'] = 0
for i in good_data:
dat.loc[i, 'Class'] = 1
dat.dropna(inplace=True)
labels = dat['Class']
# # preprocess
dat['Medium Intensity'] = np.log2(dat['Medium Intensity'])
dat['Light Intensity'] = np.log2(dat['Light Intensity'])
# extra info
for i in ['RT Width', 'Isotopes Found']:
dat['Medium/Light {}'.format(i)] = dat['Medium {}'.format(i)]/dat['Light {}'.format(i)]
# dat = dat.loc[:, ['Medium R^2', 'Light R^2', 'Class']]
dat.reset_index(drop=True, inplace=True)
training_indices, testing_indices = train_test_split(dat.index, stratify = labels.values, train_size=0.5, test_size=0.5)
tpot = TPOT(verbosity=2, generations=10)
tpot.fit(dat.drop('Class',axis=1).loc[training_indices].values, dat.loc[training_indices,'Class'].values.astype(int))
tpot.score(dat.drop('Class',axis=1).loc[testing_indices].values, dat.loc[testing_indices, 'Class'].values.astype(int))
# %matplotlib inline
# from sklearn.svm import SVC
# predictor = SVC()
# predictor.fit(dat.drop('Class',axis=1).loc[training_indices].values, dat.loc[training_indices,'Class'].values.astype(int))
# predictor.score(dat.drop('Class',axis=1).loc[training_indices].values, dat.loc[training_indices,'Class'].values.astype(int))
# # plt.scatter(dat.iloc[:, 0], dat.iloc[:, 1], c=dat.iloc[:, 2])
tpot.export('pipe.py')
dat = pd.read_table('/home/chris/Devel/pyquant/ml_test_cl2_stats')
dat = dat[dat['Peptide'].str.count('R')+dat['Peptide'].str.count('K')+dat['Peptide'].str.count('k')+dat['Peptide'].str.count('r') == 1]
dat['Class'] = None
dat.loc[dat['Peptide'].str.count('R')+dat['Peptide'].str.count('r') == 1, 'Class'] = 'R'
dat.loc[dat['Peptide'].str.count('K')+dat['Peptide'].str.count('k') == 1, 'Class'] = 'K'
np.log2(dat.loc[dat['Class']=='R','Heavy/Light']).plot(kind='density', c='r')
np.log2(dat.loc[(dat['Class']=='R') & (dat['Heavy/Light Confidence']>5),'Heavy/Light']).plot(kind='density', c='g')
np.log2(dat.loc[(dat['Class']=='R') & (dat['Heavy/Light Confidence']>8),'Heavy/Light']).plot(kind='density', c='k')
isotope = 'K'
ratio = 'Heavy/Light'
df_1 = np.log2(dat.loc[dat['Class']==isotope,ratio])
df_2 = np.log2(dat.loc[(dat['Class']==isotope) & (dat['{} Confidence'.format(ratio)]>5),ratio])
df_3 = np.log2(dat.loc[(dat['Class']==isotope) & (dat['{} Confidence'.format(ratio)]>9),ratio])
df = pd.concat([df_1, df_2, df_3], axis=1)
df.columns=['All', '5', '8']
df.plot(kind='box')
dat.loc[dat['Class']=='K', '{} Confidence'.format('Heavy/Light')].plot(kind='density')
###Output
_____no_output_____
###Markdown
Feature extraction
###Code
df.columns
###Output
_____no_output_____
###Markdown
MODEL
###Code
df.dropna(inplace=True)
X_train = df.iloc[:, :-2]
y_train = df['y'].values
%%time
from imblearn.combine import SMOTETomek
smt = SMOTETomek()
X_train, y_train = smt.fit_resample(X_train, y_train)
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=30, n_jobs=-1)
model.fit(X_train, y_train)
test = pd.read_csv('player-2.csv')
del test['Unnamed: 0']
test.dropna(inplace=True)
X_test = test.iloc[:, :-2]
y_test = test['y']
from sklearn.metrics import classification_report
y_pred_class = model.predict(X_test.values)
print(classification_report(y_test, y_pred_class))
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred_class)
from joblib import dump, load
dump(model, 'ml_model.joblib')
###Output
_____no_output_____
###Markdown
Why Machine Learning? Main reason of using machine learning as compared to normal computer programs is - User specific output for large number of cases(scale). Without machine for every user a seperate program has to be written.- Writing programs specific to each user needs lot of understanding of that user. What kind of problems ML can solve? - Generalisation from known examples- Inference building from unknown data Typical examples of ML applications- Identifying the zip code from handwritten digits on an envelope- Determining whether a tumor is benign based on a medical image- Detecting fraudulent activity in credit card transactions- Song suggestion on app like spotify- Spam detection Some slightly complex problems- Identifying topics in a set of blog posts- Segmenting customers into groups with similar preferences- Detecting abnormal access patterns to a website Things required - numpy - arrays - shape of array - reshaping- scipy- matplotlib - sample plot- pandas - Series/Dataframe - selection and filtering - grouping - merging etc.- mglearn numpy
###Code
import numpy as np
zeros = np.zeros(100)
ones = np.ones(100)
np.ones_like(zeros)
zeros.shape
np.linspace(0,100,50)
np.random.randint(0, 100, 10)
###Output
_____no_output_____
###Markdown
matplotlib
###Code
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-10,10,100)
y = np.sin(x)
plt.plot(x, y, marker='x')
###Output
_____no_output_____
###Markdown
pandas
###Code
import pandas as pd
# create a simple dataset of people
data = {'Name': ["John", "Anna", "Peter", "Linda"],
'Location' : ["New York", "Paris", "Berlin", "London"],
'Age' : [24, 13, 53, 33]
}
data_pandas = pd.DataFrame(data)
# IPython.display allows "pretty printing" of dataframes
# in the Jupyter notebook
display(data_pandas)
###Output
_____no_output_____
###Markdown
Simple example of classificationA hobby botanist has collected some data on iris flowers. She has measured some parameters of the sepal and petal of each flower (**features**). For this observed data the botanist has also recorded the known species name (**target**). The question is: using this data, can we build a program that predicts the species of a flower if its petal and sepal measurements are given?
###Code
from IPython.display import Image
sepal_peatal_url="https://external-content.duckduckgo.com/iu/?u=http%3A%2F%2Fwww.marysrosaries.com%2Fcollaboration%2Fimages%2Fe%2Fe0%2FSepal_001.png&f=1&nofb=1"
Image(url=sepal_peatal_url, width=400, height=400)
from sklearn.datasets import load_iris
iris_dataset = load_iris()
iris_dataset.keys()
iris_dataset.feature_names
iris_dataset.target
iris_dataset.target_names
iris_dataset.data.shape
iris_dataset['data'][:5]
###Output
_____no_output_____
###Markdown
Measuring the success of any prediction- We take a few inputs and outputs to train the model- Next, predict for some inputs whose output is already known- Now compare the known output and the predicted output to measure the success of the algorithm
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'],
iris_dataset['target'],
random_state=0)
X_train.shape # 75% of data
X_test.shape
y_train.shape # remaining 25% of data
y_test.shape
y_test[:5]
###Output
_____no_output_____
###Markdown
In scikit-learn, data is usually denoted with a capital X, while labels are denoted by a lowercase y. This is inspired by the standard formulation `f(x)=y` in mathematics, where x is the input to a function and y is the output. Following more conventions from mathematics, we use a capital X because the data is a two-dimensional array (a matrix) and a lowercase y because the target is a one-dimensional array (a vector).
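A quick check of this convention on the split made above (X is 2-D, y is 1-D):
###Code
print("X_train: {}-D, shape {}".format(X_train.ndim, X_train.shape))
print("y_train: {}-D, shape {}".format(y_train.ndim, y_train.shape))
###Output
_____no_output_____
###Markdown
Looking at data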
###Code
import mglearn
# create dataframe from data in X_train
# label the columns using the strings in iris_dataset.feature_names
iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)
# create a scatter matrix from the dataframe, color by y_train
grr = pd.plotting.scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15), marker='o',
hist_kwds={'bins': 20}, s=60, alpha=.8, cmap=mglearn.cm3)
###Output
_____no_output_____
###Markdown
Building first model
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
X_new = np.array([[5, 2.9, 1, 0.2]])
X_new.shape
prediction = knn.predict(X_new)
prediction
iris_dataset['target_names'][prediction]
###Output
_____no_output_____
###Markdown
Evaluating Model
###Code
y_pred = knn.predict(X_test)
print(y_pred)
score = np.mean(y_pred == y_test) # how much fraction of prediction is matching with original output?
score
knn.score(X_test, y_test)
X_train, X_test, y_train, y_test = train_test_split(
iris_dataset['data'], iris_dataset['target'], random_state=0)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))
###Output
Test set score: 0.97
###Markdown
Supervised learningIt is an algorithm which learns from known inputs and outputs and predicts the output for new inputs.There are two major types of learning algorithm, based on the kind of output:1. **classification:** the goal is to predict class labels, which are chosen from a predefined list of possibilities.2. **regression:** the goal is to predict a continuous number sample datasets
###Code
# generate dataset
X, y = mglearn.datasets.make_forge()
# plot dataset
mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
plt.legend(["Class 0", "Class 1"], loc=4)
plt.xlabel("First feature")
plt.ylabel("Second feature")
print("X.shape: {}".format(X.shape))
X, y = mglearn.datasets.make_wave(n_samples=40)
plt.plot(X, y, 'o')
plt.ylim(-3, 3)
plt.xlabel("Feature")
plt.ylabel("Target")
###Output
_____no_output_____
###Markdown
k-Neighbors classificationHere are predictions made by the one-nearest-neighbor model on the forge dataset. Let's try to understand it from this diagram:
###Code
mglearn.plots.plot_knn_classification(n_neighbors=1)
mglearn.plots.plot_knn_classification(n_neighbors=3)
from sklearn.model_selection import train_test_split
X, y = mglearn.datasets.make_forge()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print("Test set predictions: {}".format(clf.predict(X_test)))
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Analyzing KNeighborsClassifier
###Code
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
# the fit method returns the object self, so we can instantiate
# and fit in one line
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
ax.set_title("{} neighbor(s)".format(n_neighbors))
ax.set_xlabel("feature 0")
ax.set_ylabel("feature 1")
axes[0].legend(loc=3)
###Output
_____no_output_____
###Markdown
- A smoother boundary corresponds to a simpler model. - In other words, using few neighbors corresponds to high model complexity (as shown on the right side of Figure 2-1), and using many neighbors corresponds to low model complexity (as shown on the left side of Figure 2-1). breast cancer dataset
###Code
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
cancer.data, cancer.target, stratify=cancer.target, random_state=66)
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for n_neighbors in neighbors_settings:
# build the model
clf = KNeighborsClassifier(n_neighbors=n_neighbors)
clf.fit(X_train, y_train)
# record training set accuracy
training_accuracy.append(clf.score(X_train, y_train))
# record generalization accuracy
test_accuracy.append(clf.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
###Output
_____no_output_____
###Markdown
Importing packages
###Code
import pickle
import nltk
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import log_loss, hamming_loss, accuracy_score, f1_score, roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from gensim.models.doc2vec import Doc2Vec
from nltk.tokenize import word_tokenize
from PIL import Image
from tqdm import tqdm
import gc
nltk.download('punkt')
plt.rcParams['figure.figsize'] = (10, 8)
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Copying the files to the local colab machine from google drive to speed up performance
###Code
!cp -r "/content/drive/Shareddrives/CIS 522 Final Project/shopee-product-matching.zip" .
!unzip "/content/shopee-product-matching.zip"
pd.read_csv('/content/drive/Shareddrives/CIS 522 Final Project/Data/triplet_train.csv').head()
###Output
_____no_output_____
###Markdown
Importing the dataset, dividing into train and test and loading the nlp model
###Code
train_dataset = pd.read_csv('/content/drive/Shareddrives/CIS 522 Final Project/Data/triplet_train.csv')
nlp_model = Doc2Vec.load('/content/drive/Shareddrives/CIS 522 Final Project/Models/d2v.model')
train_dataset = train_dataset.drop_duplicates(subset=['posting_id_anchor'])
train_dataset, valid_dataset = train_test_split(train_dataset, test_size=0.10, random_state=1)
train_dataset = train_dataset.drop_duplicates(subset=['posting_id_anchor'])
valid_dataset = valid_dataset.drop_duplicates(subset=['posting_id_anchor'])
train_labels = train_dataset['label_group_positive']
valid_labels = valid_dataset['label_group_positive']
###Output
_____no_output_____
###Markdown
Loading the images as numpy arrays and saving the results
###Code
train_image_inputs = np.array([np.asarray(Image.open('train_images/{}'.format(image)).resize((224, 224))).flatten() for image in train_dataset['image_anchor']])
valid_image_inputs = np.array([np.asarray(Image.open('train_images/{}'.format(image)).resize((224, 224))).flatten() for image in valid_dataset['image_anchor']])
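# Each image is resized to 224x224 and flattened; assuming 3-channel RGB
# inputs, that is 224*224*3 = 150,528 raw pixel features per posting.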
with open('/content/drive/Shareddrives/CIS 522 Final Project/ml_train_image_inputs.npy', 'wb') as f:
np.save(f, train_image_inputs)
with open('/content/drive/Shareddrives/CIS 522 Final Project/ml_valid_image_inputs.npy', 'wb') as f:
np.save(f, valid_image_inputs)
train_image_inputs = np.load('/content/drive/Shareddrives/CIS 522 Final Project/ml_train_image_inputs.npy')
valid_image_inputs = np.load('/content/drive/Shareddrives/CIS 522 Final Project/ml_valid_image_inputs.npy')
###Output
_____no_output_____
###Markdown
Freeing up unused memory
###Code
gc.collect()
###Output
_____no_output_____
###Markdown
Tokenizing the titles and generating embeddings for them
###Code
train_text_inputs = np.array([nlp_model.infer_vector(word_tokenize(text.lower())) for text in train_dataset['title_anchor']])
valid_text_inputs = np.array([nlp_model.infer_vector(word_tokenize(text.lower())) for text in valid_dataset['title_anchor']])
###Output
_____no_output_____
###Markdown
Combining the title embeddings with the image array representations
###Code
train_inputs = np.concatenate((train_image_inputs, train_text_inputs), axis=1)
valid_inputs = np.concatenate((valid_image_inputs, valid_text_inputs), axis=1)
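# Each combined row is the flattened image pixels followed by the Doc2Vec
# title embedding (the embedding width depends on the saved d2v.model).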
###Output
_____no_output_____
###Markdown
Defining the machine learning model
###Code
# k-nearest neighbors classifier; note it is fit on the image features only,
# not on the combined image+text inputs built above.
knn = KNeighborsClassifier(n_jobs=-1)
knn.fit(train_image_inputs, train_labels)
results = knn.predict(valid_image_inputs)
###Output
_____no_output_____
###Markdown
Reporting the results for all the metrics (accuracy, F1-micro, F1-macro)
###Code
accuracy_score(valid_labels, results)
f1_score(valid_labels, results, average='micro')
f1_score(valid_labels, results, average='macro')
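# For single-label multiclass problems, micro-averaged F1 equals accuracy,
# while macro-averaged F1 weights every label group equally regardless of size.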
###Output
_____no_output_____ |
modeling/basic_model_framework.ipynb | ###Markdown
Params:
###Code
# Assumed imports for this notebook (a sketch; `load_data` and `fit_and_predict`
# are local modules of the project this notebook belongs to, and the
# `fit_and_predict` module appears to expose a function of the same name).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.metrics

import load_data
from fit_and_predict import fit_and_predict

aggregate_by_state = False
outcome_type = 'cases'
###Output
_____no_output_____
###Markdown
Basic Data Visualization
###Code
# Just something to quickly summarize the number of cases and distributions each day
# 'deaths' and 'cases' contain the time-series of the outbreak
df = load_data.load_county_level(data_dir = '../data/')
df = df.sort_values('#Deaths_3/30/2020', ascending=False)
# outcome_cases = load_data.outcome_cases # most recent day
# outcome_deaths = load_data.outcome_deaths
important_vars = load_data.important_keys(df)
very_important_vars = ['PopulationDensityperSqMile2010',
# 'MedicareEnrollment,AgedTot2017',
'PopulationEstimate2018',
'#ICU_beds',
'MedianAge2010',
'Smokers_Percentage',
'DiabetesPercentage',
'HeartDiseaseMortality',
'#Hospitals'
# 'PopMale60-642010',
# 'PopFmle60-642010',
# 'PopMale65-742010',
# 'PopFmle65-742010',
# 'PopMale75-842010',
# 'PopFmle75-842010',
# 'PopMale>842010',
# 'PopFmle>842010'
]
def sum_lists(list_of_lists):
arr = np.array(list(list_of_lists))
sum_arr = np.sum(arr,0)
return list(sum_arr)
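# sum_lists adds the per-county time series element-wise, producing a single
# aggregated series per state when aggregate_by_state is True.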
if aggregate_by_state:
# Aggregate by State
state_deaths_df = df.groupby('StateNameAbbreviation').deaths.agg(sum_lists).to_frame()
state_cases_df = df.groupby('StateNameAbbreviation').cases.agg(sum_lists).to_frame()
df = pd.concat([state_cases_df, state_deaths_df], axis=1)
# Distribution of the maximum number of cases
_cases = list(df['cases'])
max_cases = []
for i in range(len(df)):
max_cases.append(max(_cases[i]))
print('Number of counties with non-zero cases')
print(sum([v >0 for v in max_cases]))
# cases truncated below 20 and above 1000 for plot readability
plt.hist([v for v in max_cases if v > 20 and v < 1000],bins = 100)
sum(max_cases)
print(sum([v > 50 for v in max_cases]))
np.quantile(max_cases,.5)
# Distribution of the maximum number of deaths
_deaths = list(df['deaths'])
max_deaths = []
for i in range(len(df)):
max_deaths.append(max(_deaths[i]))
print('Number of counties with non-zero deaths')
print(sum([v > 0 for v in max_deaths]))
# plt.hist(max_cases)
# print(sum([v >0 for v in max_cases]))
plt.hist([v for v in max_deaths if v > 5],bins=30)
sum(max_deaths)
max(max_deaths)
np.quantile(max_deaths,.7)
###Output
_____no_output_____
###Markdown
Clean data
###Code
# Remove counties with zero cases
max_cases = [max(v) for v in df['cases']]
df['max_cases'] = max_cases
max_deaths = [max(v) for v in df['deaths']]
df['max_deaths'] = max_deaths
df = df[df['max_cases'] > 0]
###Output
_____no_output_____
###Markdown
Predict data from model:
###Code
method_keys = []
# clear predictions
for m in method_keys:
del df[m]
# target_day = np.array([1])
# # Trains model on train_df and produces predictions for the final day for test_df and writes prediction
# # to a new column for test_df
# # fit_and_predict(df, method='exponential', outcome=outcome_type, mode='eval_mode',target_day=target_day)
# # fit_and_predict(df,method='shared_exponential', outcome=outcome_type, mode='eval_mode',target_day=target_day)
# # fit_and_predict(train_df, test_df,'shared_exponential', mode='eval_mode',demographic_vars=important_vars)
# # fit_and_predict(df,method='shared_exponential', outcome=outcome_type, mode='eval_mode',demographic_vars=very_important_vars,target_day=target_day)
# fit_and_predict(df, outcome=outcome_type, mode='eval_mode',demographic_vars=[],
# method='ensemble',target_day=target_day)
# fit_and_predict(df, outcome=outcome_type, mode='eval_mode',demographic_vars=[],
# method='ensemble',target_day=np.array([1,2,3]))
# # fit_and_predict(train_df, test_d f,method='exponential',mode='eval_mode',target_day = np.array([1,2]))
# # Finds the names of all the methods
# method_keys = [c for c in df if 'predicted' in c]
# method_keys
# for days_ahead in [1, 2, 3]:
# for method in ['exponential', 'shared_exponential', 'ensemble']:
# fit_and_predict(df, method=method, outcome=outcome_type, mode='eval_mode',target_day=np.array([days_ahead]))
# if method == 'shared_exponential':
# fit_and_predict(df,method='shared_exponential',
# outcome=outcome_type,
# mode='eval_mode',
# demographic_vars=very_important_vars,
# target_day=np.array([days_ahead]))
# method_keys = [c for c in df if 'predicted' in c]
# geo = ['countyFIPS', 'CountyNamew/StateAbbrev']
# method_keys = [c for c in df if 'predicted' in c]
# df_preds = df[method_keys + geo + ['deaths']]
# df_preds.to_pickle("multi_day_6.pkl")
###Output
_____no_output_____
###Markdown
Ensemble predictions
###Code
exponential = {'model_type':'exponential'}
shared_exponential = {'model_type':'shared_exponential'}
demographics = {'model_type':'shared_exponential', 'demographic_vars':very_important_vars}
linear = {'model_type':'linear'}
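# Each dict above is a method specification consumed by
# fit_and_predict.fit_and_predict_ensemble below; the 'demographic_vars'
# entry augments the shared exponential model with county-level covariates.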
# import fit_and_predict
# for d in [1, 2, 3]:
# df = fit_and_predict.fit_and_predict_ensemble(df,
# target_day=np.array([d]),
# mode='eval_mode',
# outcome=outcome_type,
# output_key=f'predicted_{outcome_type}_ensemble_{d}'
# )
import fit_and_predict
for d in [1, 3, 5, 7]:
df = fit_and_predict.fit_and_predict_ensemble(df,
target_day=np.array(range(1, d+1)),
mode='eval_mode',
outcome=outcome_type,
methods=[exponential,
shared_exponential,
demographics,
linear
],
output_key=f'predicted_{outcome_type}_ensemble_{d}_with_exponential'
)
method_keys = [c for c in df if 'predicted' in c]
# df = fit_and_predict.fit_and_predict_ensemble(df)
method_keys
###Output
_____no_output_____
###Markdown
Evaluate and visualize models

Compute MSE and log MSE on relevant cases
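In symbols, with $y_i$ the last observed count for county $i$ and $\hat y_i$ a model's prediction, the two metrics computed below are

$$\mathrm{MSE}_{\log} = \frac{1}{n}\sum_{i=1}^{n}\big(\log(y_i + 1) - \log(\hat y_i + 1)\big)^2, \qquad \ell_1 = \frac{1}{n}\sum_{i=1}^{n}\frac{|\hat y_i - y_i|}{y_i},$$

where, following the code, the normalized $\ell_1$ divides each term by the observed value passed as the first argument.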
###Code
# TODO: add average rank as metric
# Computes the mse in log space and non-log space for all columns
def l1(arr1, arr2, norm=True):
"""
Mean absolute difference between arr1 and arr2.
If norm is True, each term is divided by arr1[i]; in the calls below,
arr1 holds the observed outcome, so this is a relative error.
"""
if norm:
sum_percent_dif = 0
for i in range(len(arr1)):
sum_percent_dif += np.abs(arr2[i]-arr1[i])/arr1[i]
return sum_percent_dif/len(arr1)
return sum([np.abs(a1-a2) for (a1,a2) in zip(arr1,arr2)])/len(arr1)
mse = sklearn.metrics.mean_squared_error
# Only evaluate points that exceed this number of deaths
# lower_threshold, upper_threshold = 10, 100000
lower_threshold, upper_threshold = 10, np.inf
# Log scaled
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1] + 1) for p in df[key][(outcome > lower_threshold)]] # * (outcome < upper_threshold)]]
print('Log scale MSE for '+key)
print(mse(np.log(outcome[(outcome > lower_threshold) * (outcome < upper_threshold)] + 1),preds))
# Log scaled
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1] + 1) for p in df[key][outcome > lower_threshold]]
print('Log scale l1 for '+key)
print(l1(np.log(outcome[outcome > lower_threshold] + 1),preds))
# No log scale
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > lower_threshold]]
print('Raw MSE for '+key)
print(mse(outcome[outcome > lower_threshold],preds))
# No log scale, normalized l1 (default norm=True)
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > lower_threshold]]
print('Raw l1 for '+key)
print(l1(outcome[outcome > lower_threshold],preds))
# No log scale, without normalization (norm=False)
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > lower_threshold]]
print('Raw l1 for '+key)
print(l1(outcome[outcome > lower_threshold],preds,norm=False))
###Output
Raw l1 for predicted_cases_ensemble_1
15.702192279696032
Raw l1 for predicted_cases_ensemble_3
56.27341453693248
###Markdown
Plot residuals
###Code
# TODO: Create bounds automatically, create a plot function and call it instead of copying code, figure out way
# to plot more than two things at once cleanly
# Creates residual plots log scaled and raw
# We only look at cases with number of deaths greater than 5
def method_name_to_pretty_name(key):
# TODO: hacky, fix
words = key.split('_')
words2 = []
for w in words:
if not w.isnumeric():
words2.append(w)
else:
num = w
model_name = ' '.join(words2[2:])
# model_name = 'model'
if num == '1':
model_name += ' predicting 1 day ahead'
else:
model_name += ' predicting ' +w+' days ahead'
return model_name
# Make log plots:
bounds = [1.5, 7]
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1]) for p in df[key][outcome > 5]]
plt.scatter(np.log(outcome[outcome > 5]),preds,label=method_name_to_pretty_name(key))
plt.xlabel('actual '+outcome_type)
plt.ylabel('predicted '+outcome_type)
plt.xlim(bounds)
plt.ylim(bounds)
plt.legend()
plt.plot(bounds, bounds, ls="--", c=".3")
plt.show()
# Make log plots zoomed in for the counties that have fewer deaths
bounds = [1.5, 4]
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1]) for p in df[key][outcome > 5]]
plt.scatter(np.log(outcome[outcome > 5]),preds,label=method_name_to_pretty_name(key))
plt.xlabel('actual '+outcome_type)
plt.ylabel('predicted '+outcome_type)
plt.xlim(bounds)
plt.ylim(bounds)
plt.legend()
plt.plot(bounds, bounds, ls="--", c=".3")
plt.show()
# Make non-log plots zoomed in for the counties that have fewer deaths.
# We set the bounds manually.
bounds = [10,400]
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > 5]]
plt.scatter(outcome[outcome > 5],preds,label=method_name_to_pretty_name(key))
plt.xlabel('actual '+outcome_type)
plt.ylabel('predicted '+outcome_type)
plt.xlim(bounds)
plt.ylim(bounds)
plt.legend()
plt.plot(bounds, bounds, ls="--", c=".3")
plt.show()
###Output
_____no_output_____
###Markdown
Graph Visualizations
###Code
# Here we visualize predictions on a per county level.
# The blue lines are the true number of deaths, and the dots are our predictions for each model for those days.
def plot_prediction(row):
"""
Plots model predictions vs actual values.
row: dataframe row
"""
gold_key = outcome_type
for i,val in enumerate(row[gold_key]):
if val > 0:
start_point = i
break
# plt.plot(row[gold_key][start_point:], label=gold_key)
if len(row[gold_key][start_point:]) < 3:
return
sns.lineplot(list(range(len(row[gold_key][start_point:]))),row[gold_key][start_point:], label=gold_key)
for key in method_keys:
preds = row[key]
sns.scatterplot(list(range(len(row[gold_key][start_point:])))[-len(preds):],preds,label=method_name_to_pretty_name(key))
# plt.scatter(list(range(len(row[gold_key][start_point:])))[-len(preds):],preds,label=key)
# plt.legend()
# plt.show()
# sns.legend()
plt.title(row['CountyName']+' in '+row['StateNameAbbreviation'])
plt.ylabel(outcome_type)
plt.xlabel('Days since first death')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.figure(dpi=500)
plt.show()
# feature_vals = {
# 'PopulationDensityperSqMile2010' : 1.1525491065255939e-05,
# "MedicareEnrollment,AgedTot2017" : -2.119520577282583e-06,
# 'PopulationEstimate2018' : 2.8898343032154275e-07,
# '#ICU_beds' : -0.000647030727828718,
# 'MedianAge2010' : 0.05032666600339253,
# 'Smokers_Percentage' : -0.013410742818946319,
# 'DiabetesPercentage' : 0.04395318355581005,
# 'HeartDiseaseMortality' : 0.0015473771787186525,
# '#Hospitals': 0.019248102357644396,
# 'log(deaths)' : 0.8805209010821442,
# 'bias' : -1.871552103871495
# }
df = df.sort_values(by='max_deaths',ascending=False)
for i in range(len(df)):
row = df.iloc[i]
# If number of deaths greater than 10
if max(row['deaths']) > 10:
print(row['CountyName']+' in '+row['StateNameAbbreviation'])
plot_prediction(row)
for v in very_important_vars:
print(v+ ': '+str(row[v])) #+';\t contrib: '+ str(feature_vals[v]*float(row[v])))
print('\n')
###Output
_____no_output_____
###Markdown
Params:
###Code
aggregate_by_state = False
outcome_type = 'deaths'
###Output
_____no_output_____
###Markdown
Basic Data Visualization
###Code
# Just something to quickly summarize the number of cases and distributions each day
# 'deaths' and 'cases' contain the time-series of the outbreak
df = load_data.load_county_level(data_dir = '../data/')
df = df.sort_values('#Deaths_3/30/2020', ascending=False)
# outcome_cases = load_data.outcome_cases # most recent day
# outcome_deaths = load_data.outcome_deaths
important_vars = load_data.important_keys(df)
very_important_vars = ['PopulationDensityperSqMile2010',
# 'MedicareEnrollment,AgedTot2017',
'PopulationEstimate2018',
'#ICU_beds',
'MedianAge2010',
'Smokers_Percentage',
'DiabetesPercentage',
'HeartDiseaseMortality',
'#Hospitals'
# 'PopMale60-642010',
# 'PopFmle60-642010',
# 'PopMale65-742010',
# 'PopFmle65-742010',
# 'PopMale75-842010',
# 'PopFmle75-842010',
# 'PopMale>842010',
# 'PopFmle>842010'
]
def sum_lists(list_of_lists):
arr = np.array(list(list_of_lists))
sum_arr = np.sum(arr,0)
return list(sum_arr)
if aggregate_by_state:
# Aggregate by State
state_deaths_df = df.groupby('StateNameAbbreviation').deaths.agg(sum_lists).to_frame()
state_cases_df = df.groupby('StateNameAbbreviation').cases.agg(sum_lists).to_frame()
df = pd.concat([state_cases_df, state_deaths_df], axis=1)
# Distribution of the maximum number of cases
_cases = list(df['cases'])
max_cases = []
for i in range(len(df)):
max_cases.append(max(_cases[i]))
print('Number of counties with non-zero cases')
print(sum([v >0 for v in max_cases]))
# cases truncated below 20 and above 1000 for plot readability
plt.hist([v for v in max_cases if v > 20 and v < 1000],bins = 100)
sum(max_cases)
print(sum([v > 50 for v in max_cases]))
np.quantile(max_cases,.5)
# Distribution of the maximum number of deaths
_deaths = list(df['deaths'])
max_deaths = []
for i in range(len(df)):
max_deaths.append(max(_deaths[i]))
print('Number of counties with non-zero deaths')
print(sum([v > 0 for v in max_deaths]))
# plt.hist(max_cases)
# print(sum([v >0 for v in max_cases]))
plt.hist([v for v in max_deaths if v > 5],bins=30)
sum(max_deaths)
max(max_deaths)
np.quantile(max_deaths,.7)
###Output
_____no_output_____
###Markdown
Clean data
###Code
# Remove counties with zero cases
max_cases = [max(v) for v in df['cases']]
df['max_cases'] = max_cases
max_deaths = [max(v) for v in df['deaths']]
df['max_deaths'] = max_deaths
df = df[df['max_cases'] > 0]
###Output
_____no_output_____
###Markdown
Predict data from model:
###Code
method_keys = []
# clear predictions
for m in method_keys:
del df[m]
target_day = np.array([1])
# Trains model on train_df and produces predictions for the final day for test_df and writes prediction
# to a new column for test_df
# fit_and_predict(df, method='exponential', outcome=outcome_type, mode='eval_mode',target_day=target_day)
# fit_and_predict(df,method='shared_exponential', outcome=outcome_type, mode='eval_mode',target_day=target_day)
# fit_and_predict(train_df, test_df,'shared_exponential', mode='eval_mode',demographic_vars=important_vars)
# fit_and_predict(df,method='shared_exponential', outcome=outcome_type, mode='eval_mode',demographic_vars=very_important_vars,target_day=target_day)
fit_and_predict(df, outcome=outcome_type, mode='eval_mode',demographic_vars=[],
method='ensemble',target_day=target_day)
fit_and_predict(df, outcome=outcome_type, mode='eval_mode',demographic_vars=[],
method='ensemble',target_day=np.array([1,2,3]))
# fit_and_predict(train_df, test_d f,method='exponential',mode='eval_mode',target_day = np.array([1,2]))
# Finds the names of all the methods
method_keys = [c for c in df if 'predicted' in c]
method_keys
for days_ahead in [1, 2, 3]:
for method in ['exponential', 'shared_exponential', 'ensemble']:
fit_and_predict(df, method=method, outcome=outcome_type, mode='eval_mode',target_day=np.array([days_ahead]))
if method == 'shared_exponential':
fit_and_predict(df,method='shared_exponential',
outcome=outcome_type,
mode='eval_mode',
demographic_vars=very_important_vars,
target_day=np.array([days_ahead]))
method_keys = [c for c in df if 'predicted' in c]
geo = ['countyFIPS', 'CountyNamew/StateAbbrev']
method_keys = [c for c in df if 'predicted' in c]
df_preds = df[method_keys + geo + ['deaths']]
df_preds.to_pickle("multi_day_6.pkl")
###Output
_____no_output_____
###Markdown
Ensemble predictions
###Code
exponential = {'model_type':'exponential'}
shared_exponential = {'model_type':'shared_exponential'}
demographics = {'model_type':'shared_exponential', 'demographic_vars':very_important_vars}
linear = {'model_type':'linear'}
import fit_and_predict
for d in [1, 2, 3]:
df = fit_and_predict.fit_and_predict_ensemble(df,
target_day=np.array([d]),
mode='eval_mode',
output_key=f'predicted_deaths_ensemble_{d}'
)
method_keys = [c for c in df if 'predicted' in c]
df = fit_and_predict.fit_and_predict_ensemble(df)
df[f'predicted_deaths_ensemble_1']
###Output
_____no_output_____
###Markdown
Evaluate and visualize models

Compute MSE and log MSE on relevant cases
###Code
# TODO: add average rank as metric
# Computes the mse in log space and non-log space for all columns
def l1(arr1,arr2):
return sum([np.abs(a1-a2) for (a1,a2) in zip(arr1,arr2)])/len(arr1)
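# Unlike the normalized l1 used in the 'cases' run above, this variant is the
# plain mean absolute error.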
mse = sklearn.metrics.mean_squared_error
# Only evaluate points that exceed this number of deaths
# lower_threshold, upper_threshold = 10, 1000
lower_threshold, upper_threshold = 20, np.inf
# Log scaled
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1] + 1) for p in df[key][(outcome > lower_threshold)]] # * (outcome < upper_threshold)]]
print('Log scale MSE for '+key)
print(mse(np.log(outcome[(outcome > lower_threshold) * (outcome < upper_threshold)] + 1),preds))
# Log scaled
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1] + 1) for p in df[key][outcome > lower_threshold]]
print('Log scale l1 for '+key)
print(l1(np.log(outcome[outcome > lower_threshold] + 1),preds))
# No log scale
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > lower_threshold]]
print('Raw MSE for '+key)
print(mse(outcome[outcome > lower_threshold],preds))
# No log scale
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > lower_threshold]]
print('Raw l1 for '+key)
print(l1(outcome[outcome > lower_threshold],preds))
###Output
Raw l1 for predicted_deaths_ensemble_1
7.708694007145851
Raw l1 for predicted_deaths_ensemble_2
17.290549426954943
Raw l1 for predicted_deaths_ensemble_3
12.879244117209646
###Markdown
Plot residuals
###Code
# TODO: Create bounds automatically, create a plot function and call it instead of copying code, figure out way
# to plot more than two things at once cleanly
# Creates residual plots log scaled and raw
# We only look at cases with number of deaths greater than 5
def method_name_to_pretty_name(key):
# TODO: hacky, fix
words = key.split('_')
words2 = []
for w in words:
if not w.isnumeric():
words2.append(w)
else:
num = w
model_name = ' '.join(words2[2:])
# model_name = 'model'
if num == '1':
model_name += ' predicting 1 day ahead'
else:
model_name += ' predicting ' +w+' days ahead'
return model_name
# Make log plots:
bounds = [1.5, 7]
outcome = np.array([df[outcome_type].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1]) for p in df[key][outcome > 5]]
plt.scatter(np.log(outcome[outcome > 5]),preds,label=method_name_to_pretty_name(key))
plt.xlabel('actual deaths')
plt.ylabel('predicted deaths')
plt.xlim(bounds)
plt.ylim(bounds)
plt.legend()
plt.plot(bounds, bounds, ls="--", c=".3")
plt.show()
# Make log plots zoomed in for the counties that have fewer deaths
bounds = [1.5, 4]
outcome = np.array([df['deaths'].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [np.log(p[-1]) for p in df[key][outcome > 5]]
plt.scatter(np.log(outcome[outcome > 5]),preds,label=method_name_to_pretty_name(key))
plt.xlabel('actual deaths')
plt.ylabel('predicted deaths')
plt.xlim(bounds)
plt.ylim(bounds)
plt.legend()
plt.plot(bounds, bounds, ls="--", c=".3")
plt.show()
# Make non-log plots zoomed in for the counties that have fewer deaths.
# We set the bounds manually.
bounds = [3, 300]
outcome = np.array([df['deaths'].values[i][-1] for i in range(len(df))])
for key in method_keys:
preds = [p[-1] for p in df[key][outcome > 5]]
plt.scatter(outcome[outcome > 5],preds,label=method_name_to_pretty_name(key))
plt.xlabel('actual deaths')
plt.ylabel('predicted deaths')
plt.xlim(bounds)
plt.ylim(bounds)
plt.legend()
plt.plot(bounds, bounds, ls="--", c=".3")
plt.show()
###Output
_____no_output_____
###Markdown
Graph Visualizations
###Code
# Here we visualize predictions on a per county level.
# The blue lines are the true number of deaths, and the dots are our predictions for each model for those days.
def plot_prediction(row):
"""
Plots model predictions vs actual values.
row: dataframe row
"""
gold_key = 'deaths'
for i,val in enumerate(row[gold_key]):
if val > 0:
start_point = i
break
# plt.plot(row[gold_key][start_point:], label=gold_key)
if len(row[gold_key][start_point:]) < 3:
return
sns.lineplot(list(range(len(row[gold_key][start_point:]))),row[gold_key][start_point:], label=gold_key)
for key in method_keys:
preds = row[key]
sns.scatterplot(list(range(len(row[gold_key][start_point:])))[-len(preds):],preds,label=method_name_to_pretty_name(key))
# plt.scatter(list(range(len(row[gold_key][start_point:])))[-len(preds):],preds,label=key)
# plt.legend()
# plt.show()
# sns.legend()
plt.title(row['CountyName']+' in '+row['StateNameAbbreviation'])
plt.ylabel('Deaths')
plt.xlabel('Days since first death')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.figure(dpi=500)
plt.show()
# feature_vals = {
# 'PopulationDensityperSqMile2010' : 1.1525491065255939e-05,
# "MedicareEnrollment,AgedTot2017" : -2.119520577282583e-06,
# 'PopulationEstimate2018' : 2.8898343032154275e-07,
# '#ICU_beds' : -0.000647030727828718,
# 'MedianAge2010' : 0.05032666600339253,
# 'Smokers_Percentage' : -0.013410742818946319,
# 'DiabetesPercentage' : 0.04395318355581005,
# 'HeartDiseaseMortality' : 0.0015473771787186525,
# '#Hospitals': 0.019248102357644396,
# 'log(deaths)' : 0.8805209010821442,
# 'bias' : -1.871552103871495
# }
df = df.sort_values(by='max_deaths',ascending=False)
for i in range(len(df)):
row = df.iloc[i]
# If number of deaths greater than 10
if max(row['deaths']) > 10:
print(row['CountyName']+' in '+row['StateNameAbbreviation'])
plot_prediction(row)
for v in very_important_vars:
print(v+ ': '+str(row[v])) #+';\t contrib: '+ str(feature_vals[v]*float(row[v])))
print('\n')
###Output
Queens in NY
|
special_orthogonalization/svd_vs_gs_simulations.ipynb | ###Markdown
###Markdown
Imports and Functions
###Code
import numpy as np
from scipy.stats import special_ortho_group
from scipy.spatial.transform import Rotation
from scipy.linalg import svd
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
FIGURE_SCALE = 1.0
FONT_SIZE = 20
plt.rcParams.update({
'figure.figsize': np.array((8, 6)) * FIGURE_SCALE,
'axes.labelsize': FONT_SIZE,
'axes.titlesize': FONT_SIZE,
'xtick.labelsize': FONT_SIZE,
'ytick.labelsize': FONT_SIZE,
'legend.fontsize': FONT_SIZE,
'lines.linewidth': 3,
'lines.markersize': 10,
})
def SO3_via_svd(A):
"""Map 3x3 matrix onto SO(3) via SVD."""
u, s, vt = np.linalg.svd(A)
s_SO3 = [1, 1, np.sign(np.linalg.det(np.matmul(u, vt)))]
return np.matmul(np.matmul(u, np.diag(s_SO3)), vt)
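# The sign term flips the smallest singular direction when det(U V^T) < 0,
# guaranteeing a determinant of +1 (a proper rotation) rather than a reflection.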
def SO3_via_gramschmidt(A):
"""Map 3x3 matrix on SO(3) via GS, ignores last column."""
x_normalized = A[:, 0] / np.linalg.norm(A[:, 0])
z = np.cross(x_normalized, A[:, 1])
z_normalized = z / np.linalg.norm(z)
y_normalized = np.cross(z_normalized, x_normalized)
return np.stack([x_normalized, y_normalized, z_normalized], axis=1)
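# The two cross products build a right-handed orthonormal frame from the first
# two input columns; the third input column never affects the result.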
def rotate_from_z(v):
"""Construct a rotation matrix R such that R * [0,0,||v||]^T = v.
Input v is shape (3,), output shape is 3x3 """
vn = v / np.linalg.norm(v)
theta = np.arccos(vn[2])
phi = np.arctan2(vn[1], vn[0])
r = Rotation.from_euler('zyz', [0, theta, phi])
R = np.squeeze(r.as_matrix()) # Maps Z to vn
return R
def perturb_rotation_matrix(R, kappa):
"""Perturb a random rotation matrix with noise.
Noise is random small rotation applied to each of the three
column vectors of R. Angle of rotation is sampled from the
von-Mises distribution on the circle (with uniform random azimuth).
The von-Mises distribution is analogous to the Gaussian distribution on the circle.
Note, the concentration parameter kappa is inversely related to variance,
so higher kappa means less variance, less noise applied. Good ranges for
kappa are 64 (high noise) up to 512 (low noise).
"""
R_perturb = []
theta = np.random.vonmises(mu=0.0, kappa=kappa, size=(3,))
phi = np.random.uniform(low=0.0, high=np.pi*2.0, size=(3,))
for i in range(3):
v = R[:, i]
R_z_to_v = rotate_from_z(v)
r_noise_z = np.squeeze(Rotation.from_euler('zyz', [0, theta[i], phi[i]]).as_matrix())
v_perturb = np.matmul(R_z_to_v, np.matmul(r_noise_z, np.array([0,0,1])))
R_perturb.append(v_perturb)
R_perturb = np.stack(R_perturb, axis=-1)
return R_perturb
def sigma_to_kappa(sigma):
return ((0.5 - sigma) * 1024) + 64
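# Note: this maps sigma in [0.125, 0.5] linearly onto kappa in [448, 64], so
# larger sigma means a smaller concentration kappa, i.e. noisier rotations.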
# We create a ground truth special orthogonal matrix and perturb it with
# additive noise. We then see which orthogonalization process (SVD or GS) is
# better at recovering the ground truth matrix.
def run_expt(sigmas, num_trials, noise_type='gaussian'):
# Always use identity as ground truth, or pick random matrix.
# Nothing should change if we pick random (can verify by setting to True) since
# SVD and Gram-Schmidt are both Equivariant to rotations.
pick_random_ground_truth=False
all_errs_svd = []
all_errs_gs = []
all_geo_errs_svd = []
all_geo_errs_gs = []
all_noise_norms = []
all_noise_sq_norms = []
for sig in sigmas:
svd_errors = np.zeros(num_trials)
gs_errors = np.zeros(num_trials)
svd_geo_errors = np.zeros(num_trials)
gs_geo_errors = np.zeros(num_trials)
noise_norms = np.zeros(num_trials)
noise_sq_norms = np.zeros(num_trials)
for t in range(num_trials):
if pick_random_ground_truth:
A = special_ortho_group.rvs(3) # Pick a random ground truth matrix
else:
A = np.eye(3) # Our ground truth matrix in SO(3)
N = None
if noise_type == 'gaussian':
N = np.random.standard_normal(size=(3,3)) * sig
if noise_type == 'uniform':
N = np.random.uniform(-1, 1, (3, 3)) * sig
if noise_type == 'rademacher':
N = np.sign(np.random.uniform(-1, 1, (3, 3))) * sig
if noise_type == 'rotation':
A_perturb = perturb_rotation_matrix(A, kappa=sigma_to_kappa(sig))
N = A_perturb - A
if N is None:
print('Error: unknown noise_type: %s' % noise_type)
return
AplusN = A + N # Ground-truth plus noise
noise_norm = np.linalg.norm(N)
noise_norm_sq = noise_norm**2
# Compute SVD result and error.
res_svd = SO3_via_svd(AplusN)
error_svd = np.linalg.norm(res_svd - A, ord='fro')**2
error_geodesic_svd = np.arccos(
(np.trace(np.matmul(np.transpose(res_svd), A))-1.0)/2.0);
# Compute GS result and error.
res_gs = SO3_via_gramschmidt(AplusN)
error_gs = np.linalg.norm(res_gs - A, ord='fro')**2
error_geodesic_gs = np.arccos(
(np.trace(np.matmul(np.transpose(res_gs), A))-1.0)/2.0);
svd_errors[t] = error_svd
gs_errors[t] = error_gs
svd_geo_errors[t] = error_geodesic_svd
gs_geo_errors[t] = error_geodesic_gs
noise_norms[t] = noise_norm
noise_sq_norms[t] = noise_norm_sq
all_errs_svd.append(svd_errors)
all_errs_gs.append(gs_errors)
all_geo_errs_svd.append(svd_geo_errors)
all_geo_errs_gs.append(gs_geo_errors)
all_noise_norms.append(noise_norms)
all_noise_sq_norms.append(noise_sq_norms)
print('finished sigma = %f / kappa = %f' % (sig, sigma_to_kappa(sig)))
return [np.array(x) for x in (
all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms)]
boxprops = dict(linewidth=2)
medianprops = dict(linewidth=2)
whiskerprops = dict(linewidth=2)
capprops = dict(linewidth=2)
def make_diff_plot(svd_errs, gs_errs, xvalues, title='', ytitle='', xtitle=''):
plt.figure(figsize=(8,6))
plt.title(title, fontsize=16)
diff = gs_errs - svd_errs
step_size = np.abs(xvalues[1] - xvalues[0])
plt.boxplot(diff.T, positions=xvalues, widths=step_size/2, whis=[5, 95],
boxprops=boxprops, medianprops=medianprops, whiskerprops=whiskerprops, capprops=capprops,
showmeans=False, meanline=True, showfliers=False)
plt.plot(xvalues, np.max(diff, axis=1), 'kx', markeredgewidth=2)
plt.plot(xvalues, np.min(diff, axis=1), 'kx', markeredgewidth=2)
xlim = [np.min(xvalues) - (step_size / 3), np.max(xvalues) + (step_size / 3)]
plt.xlim(xlim)
plt.plot(xlim, [0, 0], 'k--', linewidth=1)
plt.xlabel(xtitle, fontsize=16)
plt.ylabel(ytitle, fontsize=16)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Global Params
###Code
num_trials = 100000 # Num trials at each sigma
sigmas = np.linspace(0.125, 0.5, 4)
###Output
_____no_output_____
###Markdown
Gaussian Noise

Here we generate a noise matrix with iid Gaussian entries drawn from $\sigma N(0,1)$.

The "Frobenius Error Diff" plot shows the distributions of the error differences $\|A - \textrm{GS}(\tilde A)\|_F^2 - \|A - \textrm{SVD}(\tilde A)\|_F^2$ for different values of $\sigma$. The "Geodesic Error Diff" plot shows the analogous data, but in terms of the geodesic error.
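For reference, the dashed curves in the first plot below encode the first-order behavior suggested by the `3*sigmas**2` and `6*sigmas**2` reference lines drawn in the cell (a reading of the code, not a derivation here):

$$\mathbb{E}\big[\|\mathrm{SVD}^+(M) - R\|_F^2\big] \approx 3\sigma^2, \qquad \mathbb{E}\big[\|\mathrm{GS}^+(M) - R\|_F^2\big] \approx 6\sigma^2.$$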
###Code
(all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms
) = run_expt(sigmas, num_trials, noise_type='gaussian')
plt.plot(sigmas,
3*sigmas**2,
'--b',
label='3 $\\sigma^2$')
plt.errorbar(sigmas,
all_errs_svd.mean(axis=1),
color='b',
label='E[$\\|\\|\\mathrm{SVD}^+(M) - R\\|\\|_F^2]$')
plt.plot(sigmas, 6*sigmas**2,
'--r',
label='6 $\\sigma^2$')
plt.errorbar(sigmas,
all_errs_gs.mean(axis=1),
color='r',
label='E[$\\|\\|\\mathrm{GS}^+(M) - R\\|\\|_F^2$]')
plt.xlabel('$\\sigma$')
plt.legend(loc='upper left')
make_diff_plot(all_errs_svd, all_errs_gs, sigmas, title='Gaussian Noise', ytitle='Frobenius Error Diff', xtitle='$\\sigma$')
make_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigmas, title='Gaussian Noise', ytitle='Geodesic Error Diff', xtitle='$\\sigma$')
###Output
_____no_output_____
###Markdown
Uniform Noise

Here, the noise matrix is constructed with iid entries drawn from $\sigma\,\textrm{Unif}(-1, 1)$.
###Code
(all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms
) = run_expt(sigmas, num_trials, noise_type='uniform')
make_diff_plot(all_errs_svd, all_errs_gs, sigmas, title='Uniform Noise', ytitle='Frobenius Error Diff', xtitle='$\\sigma$')
make_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigmas, title='Uniform Noise', ytitle='Geodesic Error Diff', xtitle='$\\sigma$')
###Output
_____no_output_____
###Markdown
Rotation Noise
###Code
(all_errs_svd, all_errs_gs,
all_geo_errs_svd, all_geo_errs_gs,
all_noise_norms, all_noise_sq_norms
) = run_expt(sigmas, num_trials, noise_type='rotation')
make_diff_plot(all_errs_svd, all_errs_gs, sigma_to_kappa(sigmas), title='Rotation Noise', ytitle='Frobenius Error Diff', xtitle='$\\kappa$')
make_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigma_to_kappa(sigmas), title='Rotation Noise', ytitle='Geodesic Error Diff', xtitle='$\\kappa$')
###Output
_____no_output_____ |
session-2/session-2.ipynb | ###Markdown
Session 2 - Training a Network w/ Tensorflow

Assignment: Teach a Deep Neural Network to Paint

Parag K. Mital
Creative Applications of Deep Learning w/ Tensorflow
Kadenze Academy
CADL

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Learning Goals

* Learn how to create a Neural Network
* Learn to use a neural network to paint an image
* Apply creative thinking to the inputs, outputs, and definition of a network

Outline

- [Assignment Synopsis](#assignment-synopsis)
- [Part One - Fully Connected Network](#part-one---fully-connected-network)
  - [Instructions](#instructions)
  - [Code](#code)
  - [Variable Scopes](#variable-scopes)
- [Part Two - Image Painting Network](#part-two---image-painting-network)
  - [Instructions](#instructions-1)
  - [Preparing the Data](#preparing-the-data)
  - [Cost Function](#cost-function)
  - [Explore](#explore)
  - [A Note on Crossvalidation](#a-note-on-crossvalidation)
- [Part Three - Learning More than One Image](#part-three---learning-more-than-one-image)
  - [Instructions](#instructions-2)
  - [Code](#code-1)
- [Part Four - Open Exploration \(Extra Credit\)](#part-four---open-exploration-extra-credit)
- [Assignment Submission](#assignment-submission)

This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
###Code
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
###Output
_____no_output_____
###Markdown
Assignment SynopsisIn this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework that we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to aid your understanding of how they affect the final result.We're going to build our first neural network to understand what color "to paint" given a location in an image, i.e. the row, col of the image. So in goes a row/col, and out goes an R/G/B. In the next lesson, we'll learn that what this network is really doing is regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense, and it really takes a bit of practice before it starts to come together. Part One - Fully Connected Network InstructionsCreate the operations necessary for connecting an input to a network, defined by a `tf.placeholder`, to a series of fully connected, or linear, layers, using the formula: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias. If you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association between what is written in the equation and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity". 
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
###Code
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Remember, having a series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearities when considering which one seems most appropriate. For instance, the `relu` is never negative, but does not saturate at any value above 0, meaning its output can be anything above 0. That's unlike the `sigmoid`, which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.Choosing between these is often a matter of trial and error, though you can gain some insight from your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network. CodeIn this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function, have a look at what args it takes:Help on function placeholder in module `tensorflow.python.ops.array_ops`:```pythonplaceholder(dtype, shape=None, name=None)``` Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example:```pythonx = tf.placeholder(tf.float32, shape=(1024, 1024))y = tf.matmul(x, x)with tf.Session() as sess: print(sess.run(y)) # ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) # Will succeed.``` Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly. TODO! COMPLETE THIS SECTION!
###Code
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
###Output
_____no_output_____
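###Markdown
If you get stuck, a minimal sketch of one possible completion looks like this (the name `'X'` is just a convention to match the math above):
```python
# A 2-D placeholder: any number of observations, each with 2 features.
X = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='X')
```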
###Markdown
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left-multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand ($\textbf{X}$) and right hand side ($\textbf{W}$) of a matrix multiplication.To create $\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of the functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with them. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_initializer` you should create) when creating your $\textbf{W}$ variable with `tf.get_variable(...)`.For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the input and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it. TODO! COMPLETE THIS SECTION!
###Code
W = tf.get_variable(...
h = tf.matmul(...
###Output
_____no_output_____
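###Markdown
A sketch of one possible completion, assuming `X` is the placeholder from the previous cell; the `stddev` value here is an arbitrary starting point you should experiment with:
```python
# A weight matrix connecting 2 input neurons to 20 output neurons.
W = tf.get_variable(
    name='W',
    shape=[2, 20],
    dtype=tf.float32,
    initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
# Left-multiply X by W: [None, 2] x [2, 20] -> [None, 20]
h = tf.matmul(X, W)
```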
###Markdown
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
###Code
b = tf.get_variable(...
h = tf.nn.bias_add(...
###Output
_____no_output_____
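###Markdown
One possible completion, using the constant initialization of 0 suggested above:
```python
# One bias value per output neuron, initialized to 0.
b = tf.get_variable(
    name='b',
    shape=[20],
    dtype=tf.float32,
    initializer=tf.constant_initializer(0.0))
# Add the bias to every one of the 20 output neurons.
h = tf.nn.bias_add(h, b)
```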
###Markdown
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
###Code
h = ...
###Output
_____no_output_____
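###Markdown
With the `relu` nonlinearity, for instance, this is a one-liner (any of the activations plotted earlier would also work; the name `'h'` is arbitrary):
```python
# Apply the nonlinearity phi to complete H = phi(XW + b).
h = tf.nn.relu(h, name='h')
```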
###Markdown
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).```pythonutils.linear??``````pythondef linear(x, n_output, name=None, activation=None, reuse=None): """Fully connected layer Parameters ---------- x : tf.Tensor Input tensor to connect n_output : int Number of output neurons name : None, optional Scope to apply Returns ------- op : tf.Tensor Output of fully connected layer. """ if len(x.get_shape()) != 2: x = flatten(x, reuse=reuse) n_input = x.get_shape().as_list()[1] with tf.variable_scope(name or "fc", reuse=reuse): W = tf.get_variable( name='W', shape=[n_input, n_output], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable( name='b', shape=[n_output], dtype=tf.float32, initializer=tf.constant_initializer(0.0)) h = tf.nn.bias_add( name='h', value=tf.matmul(x, W), bias=b) if activation: h = activation(h) return h, W``` Variable ScopesNote that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:1. If this happens while you are interactively editing a graph, you may need to reset the current graph:```python tf.reset_default_graph()```You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts! 2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so: ```python g = tf.Graph() with tf.Session(graph=g) as sess: Y_pred, W = linear(X, 3, name='pred', activation=tf.nn.relu) ``` or: ```python g = tf.Graph() with tf.Session(graph=g) as sess, g.as_default(): Y_pred, W = linear(X, 3, name='pred', activation=tf.nn.relu) ``` You can now write the same process as the above steps by simply calling:
###Code
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
Part Two - Image Painting Network InstructionsFollow along with the steps below, first setting up the input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions, so that whatever it is given as input, it minimizes the error between its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine. Preparing the DataWe'll follow an example that Andrej Karpathy has done in his online demonstration of image "painting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.TODO! COMPLETE THIS SECTION!
###Code
# First load an image
img = ...
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
###Output
_____no_output_____
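###Markdown
If you don't have an image handy, one option is a sample image that ships with scikit-image (this assumes your installed version includes `data.astronaut`; any RGB image loaded with `plt.imread` works just as well):
```python
# One possible choice: a built-in RGB test image from skimage.
img = data.astronaut()
```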
###Markdown
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
###Code
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
###Output
_____no_output_____
###Markdown
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
###Code
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
###Output
_____no_output_____
###Markdown
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
###Code
# Normalize the input (xs) using its mean and standard deviation
xs = ...
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
###Output
_____no_output_____
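###Markdown
One common way to complete this is to standardize each input feature, which is also what the `train` function further below does:
```python
# Subtract the per-feature mean and divide by the per-feature standard
# deviation, so each row/col feature is roughly zero mean and unit variance.
xs = (xs - np.mean(xs, axis=0)) / np.std(xs, axis=0)
```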
###Markdown
Similarly for the output:
###Code
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
###Code
ys = ys / 255.0
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is `X = (row, col)` value. And the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
###Code
plt.imshow(ys.reshape(img.shape))
###Output
_____no_output_____
###Markdown
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
###Code
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(...
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(...
###Output
_____no_output_____
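###Markdown
A sketch of one possible completion, mirroring the placeholder we built in Part One:
```python
# Input: row/col positions of each pixel.
X = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='X')
# True output: the R, G, B values of each pixel.
Y = tf.placeholder(dtype=tf.float32, shape=[None, 3], name='Y')
```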
###Markdown
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons and passes it through a linear + non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally add one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\begin{align}\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\\end{align}So the next layer will take that output, and connect it up again:\begin{align}\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\\end{align}And same for every other layer:\begin{align}\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\\end{align}Including the very last layer, which will be the prediction of the network:\begin{align}\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)\end{align}Remember, if you run into issues with variable scopes/names, that you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.TODO! COMPLETE THIS SECTION!
###Code
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
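###Markdown
If you'd like to try the loop version hinted at in the comments above, one possible sketch looks like this; it builds the same 6 hidden layers (the layer names are arbitrary) and keeps only the final tensor, which you would then pass to the `pred` layer in place of `h6`:
```python
n_neurons = 20
current_input = X
for layer_i in range(6):
    # Each pass adds one linear + nonlinear layer of 20 neurons.
    current_input, _ = utils.linear(
        current_input, n_neurons,
        activation=tf.nn.relu,
        name='layer{}'.format(layer_i + 1))
h6 = current_input
```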
###Markdown
Cost FunctionNow we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer with this value to help it train the network's parameters using gradient descent and backpropagation.Let's say our error is `E`, then the cost will be:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b$$where the error is measured as, e.g.:$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see on average, or at least averaged within a single minibatch, how wrong our prediction was. We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances heavily, but small distances much less.Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of squared error per channel would be between $0$ and roughly $128^2$. For example, if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically.
###Code
error = np.linspace(0.0, 128.0, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss, which is linear in the error: it simply takes the absolute value of the error. We'll compare the two losses with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
###Code
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as the error moves away from $0.0$ to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors a few activations that each try to explain as much of the data as possible, rather than many activations that each do only a so-so job individually, but when put together, do a great job of explaining the data. Don't worry about what this means if you are unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.During the lecture, we've seen how to create a cost function using Tensorflow. To create an $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference`, or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with an $l_2$ loss.The equation for computing cost I mentioned above is more succinctly written as, for the $l_2$ norm:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$For the $l_1$ norm, we'd have:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$, the predicted output from the network, is equal to the mean across $\text{B}$ batches of the sum, across $\text{C}$ color channels, of the distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.TODO! COMPLETE THIS SECTION!
###Code
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
###Output
_____no_output_____
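###Markdown
Putting the three pieces together, a sketch of the $l_1$ version might look like the following; for the $l_2$ version, you could swap in `tf.squared_difference(Y, Y_pred)` for the first line:
```python
# Per-channel absolute distance between prediction and truth: [None, 3]
error = tf.abs(Y - Y_pred)
# Sum the error over the 3 color channels: [None]
sum_error = tf.reduce_sum(error, 1)
# Mean over the observations in the minibatch: a single scalar
cost = tf.reduce_mean(sum_error)
```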
###Markdown
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
###Code
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
###Output
_____no_output_____
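###Markdown
A sketch of one reasonable starting point; the Adam optimizer is just one choice (`tf.train.GradientDescentOptimizer` also works), and the learning rate, iteration count, and batch size here are arbitrary values to explore from:
```python
optimizer = tf.train.AdamOptimizer(
    learning_rate=0.001).minimize(cost)
n_iterations = 50
batch_size = 250
```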
###Markdown
We'll now train our network! The code below should do this for you if you've set up everything else properly. Please read through it and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and the number of iterations (< 1000). Welcome to Deep Learning :)
###Code
# Initialize all your variables and run the operation with your session
sess.run(tf.initialize_all_variables())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
    # Also, every `gif_step` iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
###Output
_____no_output_____
###Markdown
Let's now display the GIF we've just created:
###Code
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
ExploreGo back over the previous cells and explore changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and see how the cost curve changes. What do you notice? Try exponents of $10$, e.g. $10^{-1}$, $10^{-2}$, $10^{-3}$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it affect how the cost changes over time?Be sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it affects the network's training. Also try comparing a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice? A Note on CrossvalidationThe cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy both on the data you have used to train and on that new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting your network to the training data. Feel free to explore how to do this on the application above! Part Three - Learning More than One Image InstructionsWe're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. What would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we want painted? We're going to try and see how that does.You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!I've placed the same code for running the previous algorithm into two functions, `build_model` and `train`. 
You can directly call the function `train` with a 4-d array of images shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is, as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its processing based on which image is fed as input.
###Code
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
if xs.ndim != 2:
raise ValueError(
            'xs should be n_observations x n_features, ' +
'or a 2-dimensional array.')
if ys.ndim != 2:
raise ValueError(
            'ys should be n_observations x n_features, ' +
'or a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
xs = np.array(all_xs).reshape(-1, 3)
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
ys = np.array(all_ys).reshape(-1, 3)
ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
sess.run(tf.initialize_all_variables())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
            # Also, every `gif_step` iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
###Output
_____no_output_____
###Markdown
CodeBelow, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!TODO! COMPLETE THIS SECTION!
###Code
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
###Output
_____no_output_____
###Markdown
Explore changing the parameters of the `train` function, and try your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.TODO! COMPLETE THIS SECTION!
###Code
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
###Output
_____no_output_____
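###Markdown
To go beyond the defaults, you might call `train` with explicit parameters, e.g. (these particular values are just one configuration to try, not a recommendation):
```python
gifs = train(imgs=imgs,
             learning_rate=0.0001,
             batch_size=200,
             n_iterations=20,
             gif_step=2,
             n_neurons=30,
             n_layers=8,
             activation_fn=tf.nn.relu,
             final_activation_fn=tf.nn.tanh,
             cost_type='l2_norm')
```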
###Markdown
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
###Code
montage_gifs = [np.clip(utils.montage(
(m * 127.5) + 127.5), 0, 255).astype(np.uint8)
for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
###Output
_____no_output_____
###Markdown
Session 2 - Training a Network w/ TensorflowAssignment: Teach a Deep Neural Network to PaintParag K. MitalCreative Applications of Deep Learning w/ TensorflowKadenze AcademyCADLThis work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Learning Goals* Learn how to create a Neural Network* Learn to use a neural network to paint an image* Apply creative thinking to the inputs, outputs, and definition of a network Outline- [Assignment Synopsis](assignment-synopsis)- [Part One - Fully Connected Network](part-one---fully-connected-network) - [Instructions](instructions) - [Code](code) - [Variable Scopes](variable-scopes)- [Part Two - Image Painting Network](part-two---image-painting-network) - [Instructions](instructions-1) - [Preparing the Data](preparing-the-data) - [Cost Function](cost-function) - [Explore](explore) - [A Note on Crossvalidation](a-note-on-crossvalidation)- [Part Three - Learning More than One Image](part-three---learning-more-than-one-image) - [Instructions](instructions-2) - [Code](code-1)- [Part Four - Open Exploration \(Extra Credit\)](part-four---open-exploration-extra-credit)- [Assignment Submission](assignment-submission)This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
###Code
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
###Output
_____no_output_____
###Markdown
Assignment SynopsisIn this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to help aid your understanding of how they effect the final result.We're going to build our first neural network to understand what color "to paint" given a location in an image, or the row, col of the image. So in goes a row/col, and out goes a R/G/B. In the next lesson, we'll learn what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense, and it really takes a bit of practice before it starts to come together. Part One - Fully Connected Network InstructionsCreate the operations necessary for connecting an input to a network, defined by a `tf.Placeholder`, to a series of fully connected, or linear, layers, using the formula: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias. If you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association with what is written in the equation, and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity". 
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
###Code
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Remember, having series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearity when considering which nonlinearity seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid` which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.Choosing between these is often a matter of trial and error. Though you can make some insights depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network. CodeIn this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function, have a look at what args it takes:Help on function placeholder in module `tensorflow.python.ops.array_ops`:```pythonplaceholder(dtype, shape=None, name=None)``` Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example:```pythonx = tf.placeholder(tf.float32, shape=(1024, 1024))y = tf.matmul(x, x)with tf.Session() as sess: print(sess.run(y)) ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) Will succeed.``` Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly. TODO! COMPLETE THIS SECTION!
###Code
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
###Output
_____no_output_____
###Markdown
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left mutiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Look up the docstrings of functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the `name`, `shape`, `dtype`, and `initializer` when creating your $\textbf{W}$ variable with `tf.get_variable(...)`. For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the input and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!TODO! COMPLETE THIS SECTION!
###Code
W = tf.get_variable(...
h = tf.matmul(...
###Output
_____no_output_____
###Markdown
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
###Code
b = tf.get_variable(...
h = tf.nn.bias_add(...
###Output
_____no_output_____
###Markdown
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
###Code
h = ...
###Output
_____no_output_____
###Markdown
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).```pythonutils.linear??``````pythondef linear(x, n_output, name=None, activation=None, reuse=None): """Fully connected layer Parameters ---------- x : tf.Tensor Input tensor to connect n_output : int Number of output neurons name : None, optional Scope to apply Returns ------- op : tf.Tensor Output of fully connected layer. """ if len(x.get_shape()) != 2: x = flatten(x, reuse=reuse) n_input = x.get_shape().as_list()[1] with tf.variable_scope(name or "fc", reuse=reuse): W = tf.get_variable( name='W', shape=[n_input, n_output], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable( name='b', shape=[n_output], dtype=tf.float32, initializer=tf.constant_initializer(0.0)) h = tf.nn.bias_add( name='h', value=tf.matmul(x, W), bias=b) if activation: h = activation(h) return h, W``` Variable ScopesNote that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:1. If this happens while you are interactively editing a graph, you may need to reset the current graph:```python tf.reset_default_graph()```You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts! 2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so: ```python g = tf.Graph() with tf.Session(graph=g) as sess: Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` or: ```python g = tf.Graph() with tf.Session(graph=g) as sess, g.as_default(): Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` You can now write the same process as the above steps by simply calling:
###Code
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
Part Two - Image Painting Network InstructionsFollow along the steps below, first setting up input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions so that whatever it is given as input, it minimized the error of its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine. Preparing the DataWe'll follow an example that Andrej Karpathy has done in his online demonstration of "image inpainting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.TODO! COMPLETE THIS SECTION!
###Code
# First load an image
img = ...
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
###Output
_____no_output_____
###Markdown
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
###Code
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
###Output
_____no_output_____
###Markdown
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
###Code
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
###Output
_____no_output_____
###Markdown
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
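If you get stuck, here is one possible way to do the normalization, shown as a sketch: it standardizes each input column (row and col) to zero mean and unit standard deviation, the same pattern the `train` function further down this notebook uses.

```python
# A sketch: standardize each input feature (row, col) to
# zero mean and unit standard deviation, per column.
xs = (xs - np.mean(xs, axis=0)) / np.std(xs, axis=0)
```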
###Code
# Normalize the input (xs) using its mean and standard deviation
xs = ...
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
###Output
_____no_output_____
###Markdown
Similarly for the output:
###Code
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
###Code
ys = ys / 255.0
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is the value `X = (row, col)`, and the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
###Code
plt.imshow(ys.reshape(img.shape))
###Output
_____no_output_____
###Markdown
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
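If you need a starting point, the two placeholders might look like this (a sketch; naming them is optional but makes debugging easier):

```python
# Input: (row, col) positions; output: (r, g, b) colors.
X = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, 3], dtype=tf.float32)
```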
###Code
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(...
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(...
###Output
_____no_output_____
###Markdown
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons and applies a linear followed by a non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\begin{align}\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\\end{align}So the next layer will take that output, and connect it up again:\begin{align}\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\\end{align}And same for every other layer:\begin{align}\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\\end{align}Including the very last layer, which will be the prediction of the network:\begin{align}\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)\end{align}Remember, if you run into issues with variable scopes/names, you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.TODO! COMPLETE THIS SECTION!
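As the comment in the cell below suggests, you can also build the six hidden layers with a loop instead of writing each one out. Here is a sketch of that approach (it assumes `n_neurons` has already been set, and mirrors the layer-building pattern used by the `build_model` function later in this notebook):

```python
# A loop-based sketch of the 6 hidden layers plus the output layer:
current_input = X
for layer_i in range(6):
    current_input, _ = utils.linear(
        current_input, n_neurons,
        activation=tf.nn.relu,
        name='layer{}'.format(layer_i + 1))
Y_pred, W7 = utils.linear(current_input, 3, activation=None, name='pred')
```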
###Code
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
Cost FunctionNow we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.Let's say our error is `E`, then the cost will be:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b$$where the error is measured as, e.g.:$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see, on average, or at least within a single minibatch's average, how wrong our prediction was. We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical per-channel difference might be around $128$, giving a squared error of up to $128^2$. For example, if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically.
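As a quick sanity check of the arithmetic above, here is the same example worked out in numpy (a minimal sketch using the prediction (120, 50, 167) against the true color (0, 100, 120)):

```python
y_pred = np.array([120.0, 50.0, 167.0])
y_true = np.array([0.0, 100.0, 120.0])
diff = y_pred - y_true   # per-channel differences: 120, -50, 47
E = np.sum(diff ** 2)    # 14400 + 2500 + 2209 = 19109
print(E)
```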
###Code
error = np.linspace(0.0, 128.0, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss, which is linear in the error, simply taking its absolute value. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
###Code
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.During the lecture, we've seen how to create a cost function using Tensorflow. To create an $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference` or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with an $l_2$ loss.The equation for computing cost I mentioned above is more succinctly written as, for the $l_2$ norm:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$For the $l_1$ norm, we'd have:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$ the predicted output from the network, is equal to the mean across $\text{B}$ batches, of the sum of $\textbf{C}$ color channels distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.TODO! COMPLETE THIS SECTION!
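If you have given it a try and are still stuck, here is one possible construction, shown as a sketch of the $l_1$ version (swap `tf.abs(Y - Y_pred)` for `tf.squared_difference(Y, Y_pred)` to get the $l_2$ version; the `build_model` function further down this notebook uses the same pattern):

```python
# Sketch of the l1 cost; see build_model below for the same idea.
error = tf.abs(Y - Y_pred)           # shape [None, 3]
sum_error = tf.reduce_sum(error, 1)  # shape [None]
cost = tf.reduce_mean(sum_error)     # a single scalar
```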
###Code
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
###Output
_____no_output_____
###Markdown
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
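For reference, one common choice here (an assumption on my part; plain gradient descent also works) is the Adam optimizer, sketched below with starting values that respect the limits in the comments of the next cell:

```python
# A sketch: Adam with a small learning rate to start from.
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
n_iterations = 50    # < 100
batch_size = 200     # < 500
```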
###Code
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
###Output
_____no_output_____
###Markdown
We'll now train our network! The code below should do this for you if you've set up everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and the number of iterations (< 1000). Welcome to Deep Learning :)
###Code
# Initialize all your variables and run the operation with your session
sess.run(tf.global_variables_initializer())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
    # Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
###Output
_____no_output_____
###Markdown
Let's now display the GIF we've just created:
###Code
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
ExploreGo back over the previous cells and explore changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and see how the cost curve changes. What do you notice? Try powers of $10$, e.g. $10^{-1}$, $10^{-2}$, $10^{-3}$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it affect how the cost changes over time?Be sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it affects the network's training. Also try comparing a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice? A Note on CrossvalidationThe cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy both on the data you have used to train and on that new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above! Part Three - Learning More than One Image InstructionsWe're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. What would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we want painted? We're going to try and see how that does.You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!I've placed the same code for running the previous algorithm into two functions, `build_model` and `train`. 
You can directly call the function `train` with a 4-d array of images shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its process based on which image is fed as input.
###Code
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
if xs.ndim != 2:
raise ValueError(
            'xs should be n_observations x n_features, ' +
'or a 2-dimensional array.')
if ys.ndim != 2:
raise ValueError(
            'ys should be n_observations x n_features, ' +
'or a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
xs = np.array(all_xs).reshape(-1, 3)
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
ys = np.array(all_ys).reshape(-1, 3)
ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
        sess.run(tf.global_variables_initializer())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
            # Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
###Output
_____no_output_____
###Markdown
CodeBelow, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!TODO! COMPLETE THIS SECTION!
###Code
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# Replace this with your own dataset of N x H x W x C!
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
###Output
_____no_output_____
###Markdown
Explore changing the parameters of the `train` function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.TODO! COMPLETE THIS SECTION!
###Code
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
###Output
_____no_output_____
###Markdown
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
###Code
montage_gifs = [np.clip(utils.montage(
(m * 127.5) + 127.5), 0, 255).astype(np.uint8)
for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
###Output
_____no_output_____
###Markdown
And show it in the notebook
###Code
ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel locations (and the index of the image they belong to) for each of our 100 images; these go through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can visualize just the last iteration as a "latent" space, going from the first image (the top left image in the montage) to the last image (the bottom right image).
###Code
final = gifs[-1]
final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]
gif.build_gif(final_gif, saveto='final.gif')
ipyd.Image(url='final.gif?{}'.format(np.random.rand()),
height=200, width=200)
###Output
_____no_output_____
###Markdown
Part Four - Open Exploration (Extra Credit)I now want you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse and tries to guess where a given color should be painted? What if it were only taught a certain palette and had to reason about other colors? How would it interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, number of layers, or a greater or smaller number of neurons? I leave any of these as an open exploration for you.Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit or it will require a much larger machine, and much more time to train. Then whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning!Make sure to name the result of your gif: "explore.gif", and be sure to include it in your zip file. TODO! COMPLETE THIS SECTION!
###Code
# Train a network to produce something, storing every few
# iterations in the variable gifs, then export the training
# over time as a gif.
...
gif.build_gif(montage_gifs, saveto='explore.gif')
ipyd.Image(url='explore.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Assignment SubmissionAfter you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as: session-2/ session-2.ipynb single.gif multiple.gif final.gif explore.gif* libs/ utils.py * = optional/extra-creditYou'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/infoAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the CADL hashtag so that other students can find your work!
###Code
utils.build_submission('session-2.zip',
('reference.png',
'single.gif',
'multiple.gif',
'final.gif',
'session-2.ipynb'),
                       ('explore.gif',))  # note the trailing comma: this must be a tuple, not a string
###Output
_____no_output_____
###Markdown
Session 2 - Training a Network w/ TensorflowAssignment: Teach a Deep Neural Network to PaintParag K. MitalCreative Applications of Deep Learning w/ TensorflowKadenze AcademyCADLThis work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Learning Goals* Learn how to create a Neural Network* Learn to use a neural network to paint an image* Apply creative thinking to the inputs, outputs, and definition of a network Outline- [Assignment Synopsis](assignment-synopsis)- [Part One - Fully Connected Network](part-one---fully-connected-network) - [Instructions](instructions) - [Code](code) - [Variable Scopes](variable-scopes)- [Part Two - Image Painting Network](part-two---image-painting-network) - [Instructions](instructions-1) - [Preparing the Data](preparing-the-data) - [Cost Function](cost-function) - [Explore](explore) - [A Note on Crossvalidation](a-note-on-crossvalidation)- [Part Three - Learning More than One Image](part-three---learning-more-than-one-image) - [Instructions](instructions-2) - [Code](code-1)- [Part Four - Open Exploration \(Extra Credit\)](part-four---open-exploration-extra-credit)- [Assignment Submission](assignment-submission)This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
###Code
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
###Output
_____no_output_____
###Markdown
Assignment SynopsisIn this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to aid your understanding of how they affect the final result.We're going to build our first neural network to understand what color "to paint" given a location in an image, or the row, col of the image. So in goes a row/col, and out goes an R/G/B. In the next lesson, we'll learn that what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense, and it really takes a bit of practice before it starts to come together. Part One - Fully Connected Network InstructionsCreate the operations necessary for connecting an input to a network, defined by a `tf.Placeholder`, to a series of fully connected, or linear, layers, using the formula: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias. If you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association with what is written in the equation, and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity". 
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
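To make the equation concrete, here is a tiny numpy sketch (the sizes, a batch of 4 observations with 2 features fully connected to 3 neurons, are made up for illustration):

```python
import numpy as np

X = np.random.randn(4, 2)         # batch of 4 observations, 2 features
W = np.random.randn(2, 3)         # weights: 2 inputs -> 3 neurons
b = np.zeros(3)                   # one bias per output neuron
H = np.maximum(X.dot(W) + b, 0)   # phi = relu
print(H.shape)                    # (4, 3)
```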
###Code
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Remember, having a series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearities when considering which nonlinearity seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid` which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.Choosing between these is often a matter of trial and error. Though you can gain some insight depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network. CodeIn this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function, have a look at what args it takes:Help on function placeholder in module `tensorflow.python.ops.array_ops`:```pythonplaceholder(dtype, shape=None, name=None)``` Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example:```pythonx = tf.placeholder(tf.float32, shape=(1024, 1024))y = tf.matmul(x, x)with tf.Session() as sess: print(sess.run(y)) ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) Will succeed.``` Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly. TODO! COMPLETE THIS SECTION!
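If you get stuck, the call might look like this (a sketch; the `name` argument is optional):

```python
X = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='X')
```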
###Code
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
###Output
_____no_output_____
###Markdown
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left-multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand ($\textbf{X}$) and right hand side ($\textbf{W}$) of a matrix multiplication.To create $\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_initializer` you should create) when creating your $\textbf{W}$ variable with `tf.get_variable(...)`.For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the inputs and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it. TODO! COMPLETE THIS SECTION!
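One possible answer, shown as a sketch (the standard deviation of 0.1 is only a guess to start from; as noted above, good initializer values take some experimentation):

```python
W = tf.get_variable(
    name='W',
    shape=[2, 20],
    dtype=tf.float32,
    initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
h = tf.matmul(X, W)   # [None, 2] x [2, 20] -> [None, 20]
```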
###Code
W = tf.get_variable(...
h = tf.matmul(...
###Output
_____no_output_____
###Markdown
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
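A sketch of one possible answer, initializing the bias to 0:

```python
b = tf.get_variable(
    name='b',
    shape=[20],
    dtype=tf.float32,
    initializer=tf.constant_initializer(0.0))
h = tf.nn.bias_add(h, b)
```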
###Code
b = tf.get_variable(...
h = tf.nn.bias_add(...
###Output
_____no_output_____
###Markdown
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
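A sketch of the final step, using the `relu` nonlinearity:

```python
h = tf.nn.relu(h)
```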
###Code
h = ...
###Output
_____no_output_____
###Markdown
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).```pythonutils.linear??``````pythondef linear(x, n_output, name=None, activation=None, reuse=None): """Fully connected layer Parameters ---------- x : tf.Tensor Input tensor to connect n_output : int Number of output neurons name : None, optional Scope to apply Returns ------- op : tf.Tensor Output of fully connected layer. """ if len(x.get_shape()) != 2: x = flatten(x, reuse=reuse) n_input = x.get_shape().as_list()[1] with tf.variable_scope(name or "fc", reuse=reuse): W = tf.get_variable( name='W', shape=[n_input, n_output], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable( name='b', shape=[n_output], dtype=tf.float32, initializer=tf.constant_initializer(0.0)) h = tf.nn.bias_add( name='h', value=tf.matmul(x, W), bias=b) if activation: h = activation(h) return h, W``` Variable ScopesNote that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:1. If this happens while you are interactively editing a graph, you may need to reset the current graph:```python tf.reset_default_graph()```You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts! 2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so: ```python g = tf.Graph() with tf.Session(graph=g) as sess: Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` or: ```python g = tf.Graph() with tf.Session(graph=g) as sess, g.as_default(): Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` You can now write the same process as the above steps by simply calling:
###Code
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
Part Two - Image Painting Network InstructionsFollow along with the steps below, first setting up input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions, so that whatever it is given as input, it minimizes the error between its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine. Preparing the DataWe'll follow an example that Andrej Karpathy has done in his online demonstration of "image inpainting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.TODO! COMPLETE THIS SECTION!
###Code
# First load an image
img = ...
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
###Output
_____no_output_____
###Markdown
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
###Code
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
###Output
_____no_output_____
###Markdown
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
###Code
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
###Output
_____no_output_____
###Markdown
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
###Code
# Normalize the input (xs) using its mean and standard deviation
xs = ...
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
###Output
_____no_output_____
###Markdown
Similarly for the output:
###Code
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
###Code
ys = ys / 255.0
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is the value `X = (row, col)`, and the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
###Code
plt.imshow(ys.reshape(img.shape))
###Output
_____no_output_____
###Markdown
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
###Code
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(...
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(...
###Output
_____no_output_____
###Markdown
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons and applies a linear followed by a non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\begin{align}\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\\end{align}So the next layer will take that output, and connect it up again:\begin{align}\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\\end{align}And same for every other layer:\begin{align}\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\\end{align}Including the very last layer, which will be the prediction of the network:\begin{align}\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)\end{align}Remember, if you run into issues with variable scopes/names, you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.TODO! COMPLETE THIS SECTION!
###Code
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
Cost FunctionNow we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.Let's say our error is `E`, then the cost will be:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b$$where the error is measured as, e.g.:$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see, on average, or at least within a single minibatch's average, how wrong our prediction was. We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical per-channel difference might be around $128$, giving a squared error of up to $128^2$. For example, if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically.
###Code
error = np.linspace(0.0, 128.0, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss, which is linear in the error, simply taking its absolute value. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
###Code
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.During the lecture, we've seen how to create a cost function using Tensorflow. To create an $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference` or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with an $l_2$ loss.The equation for computing cost I mentioned above is more succinctly written as, for the $l_2$ norm:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$For the $l_1$ norm, we'd have:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$ the predicted output from the network, is equal to the mean across $\text{B}$ batches, of the sum of $\textbf{C}$ color channels distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.TODO! COMPLETE THIS SECTION!
###Code
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
###Output
_____no_output_____
###Markdown
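For reference, here is a minimal sketch of one possible completion of the three cells above, using the $l_1$ norm (replace `tf.abs(Y - Y_pred)` with `tf.squared_difference(Y, Y_pred)` to try the $l_2$ norm instead):

```python
# Per-channel distance between the true and predicted colors: [None, 3]
error = tf.abs(Y - Y_pred)
# Summed over the C = 3 color channels: [None]
sum_error = tf.reduce_sum(error, 1)
# And averaged over the batch to give a single scalar cost: []
cost = tf.reduce_mean(sum_error)
```
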
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
###Code
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
###Output
_____no_output_____
###Markdown
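As one possible (hypothetical) starting point, mirroring the `train` function later in this notebook, you might try something like the following, and then explore other optimizers and values:

```python
# Adam is one reasonable default choice of optimizer; the values here
# are only starting points to explore, not the "right" answer:
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
n_iterations = 50
batch_size = 250
```
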
We'll now train our network! The code below should do this for you if you've setup everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000). Welcome to Deep Learning :)
###Code
# Initialize all your variables and run the operation with your session
sess.run(tf.global_variables_initializer())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
# Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
###Output
_____no_output_____
###Markdown
Let's now display the GIF we've just created:
###Code
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Explore

Go back over the previous cells and explore changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and seeing how the cost curve changes. What do you notice? Try powers of $10$, e.g. $10^{-1}$, $10^{-2}$, $10^{-3}$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it affect how the cost changes over time?

Be sure to explore other manipulations of the network, such as changing the loss function between $l_2$ and $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding, on this toy problem, of how each affects the network's training. Also try comparing a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100) versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice?

A Note on Crossvalidation

The cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and monitor accuracy both on the data you have used to train and on the new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data. (A minimal sketch of this kind of index partitioning is shown just before the code below.)

We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above!

Part Three - Learning More than One Image

Instructions

We're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. What would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well, as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we want painted? We're going to try that and see how it does.

You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!

I've placed the same code for running the previous algorithm into two functions, `build_model` and `train`.
You can directly call the function `train` with a 4-d image array shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is, as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its process based on which image is fed as input.
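(As an aside, here is the index-partitioning sketch promised in the crossvalidation note above, using hypothetical 80/10/10 fractions on the `xs`/`ys` arrays built earlier; it is a sketch only, and nothing below depends on it.)

```python
n = len(xs)
idxs = np.random.permutation(n)
n_train, n_valid = int(0.8 * n), int(0.1 * n)
# 80% of the data for training:
xs_train, ys_train = xs[idxs[:n_train]], ys[idxs[:n_train]]
# 10% for monitoring validation accuracy while training:
xs_valid, ys_valid = xs[idxs[n_train:n_train + n_valid]], ys[idxs[n_train:n_train + n_valid]]
# and the final 10% held out for testing once training is done:
xs_test, ys_test = xs[idxs[n_train + n_valid:]], ys[idxs[n_train + n_valid:]]
```
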
###Code
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
if xs.ndim != 2:
raise ValueError(
'xs should be n_observations x n_features, ' +
'or a 2-dimensional array.')
if ys.ndim != 2:
raise ValueError(
'ys should be n_observations x n_features, ' +
'or a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
# Stack every image's (row, col, img_i) inputs into one [N*H*W, 3] array
xs = np.array(all_xs).reshape(-1, 3)
# and normalize each input feature to zero mean / unit standard deviation
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
ys = np.array(all_ys).reshape(-1, 3)
# Scale the color values from [0, 255] to [-1, 1] to match the tanh output
ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
sess.run(tf.global_variables_initializer())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
# Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
###Output
_____no_output_____
###Markdown
Code

Below, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!

TODO! COMPLETE THIS SECTION!
###Code
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
###Output
_____no_output_____
###Markdown
Explore changing the parameters of the `train` function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.TODO! COMPLETE THIS SECTION!
###Code
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
###Output
_____no_output_____
###Markdown
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
###Code
montage_gifs = [np.clip(utils.montage(
(m * 127.5) + 127.5), 0, 255).astype(np.uint8)
for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
###Output
_____no_output_____
###Markdown
And show it in the notebook
###Code
ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel values of each of our 100 images, it goes through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can visualize just the last iteration as a "latent" space, going from the first image (the top left image in the montage), to the last image, (the bottom right image).
###Code
final = gifs[-1]
final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]
gif.build_gif(final_gif, saveto='final.gif')
ipyd.Image(url='final.gif?{}'.format(np.random.rand()),
height=200, width=200)
###Output
_____no_output_____
###Markdown
Part Four - Open Exploration (Extra Credit)

I now want you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse, and tries to guess where a given color should be painted? What if it was only taught a certain palette, and had to reason about other colors: how would it interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, different numbers of layers, and more or fewer neurons? I leave any of these as an open exploration for you.

Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit, or it will require a much larger machine and much more time to train. Then, whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning!

Make sure to name the resulting gif "explore.gif", and be sure to include it in your zip file.

TODO! COMPLETE THIS SECTION!
###Code
# Train a network to produce something, storing every few
# iterations in the variable gifs, then export the training
# over time as a gif.
...
gif.build_gif(montage_gifs, saveto='explore.gif')
ipyd.Image(url='explore.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Assignment Submission

After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files, named exactly as:

    session-2/
      session-2.ipynb
      single.gif
      multiple.gif
      final.gif
      explore.gif*
      libs/
        utils.py

    * = optional/extra-credit

You'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.

To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info

Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the CADL hashtag so that other students can find your work!
###Code
utils.build_submission('session-2.zip',
('reference.png',
'single.gif',
'multiple.gif',
'final.gif',
'session-2.ipynb'),
('explore.gif'))
###Output
_____no_output_____
###Markdown
Session 2 - Training a Network w/ Tensorflow

Assignment: Teach a Deep Neural Network to Paint

Parag K. Mital
Creative Applications of Deep Learning w/ Tensorflow
Kadenze Academy
CADL

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Learning Goals
* Learn how to create a Neural Network
* Learn to use a neural network to paint an image
* Apply creative thinking to the inputs, outputs, and definition of a network

Outline
- [Assignment Synopsis](assignment-synopsis)
- [Part One - Fully Connected Network](part-one---fully-connected-network)
  - [Instructions](instructions)
  - [Code](code)
  - [Variable Scopes](variable-scopes)
- [Part Two - Image Painting Network](part-two---image-painting-network)
  - [Instructions](instructions-1)
  - [Preparing the Data](preparing-the-data)
  - [Cost Function](cost-function)
  - [Explore](explore)
  - [A Note on Crossvalidation](a-note-on-crossvalidation)
- [Part Three - Learning More than One Image](part-three---learning-more-than-one-image)
  - [Instructions](instructions-2)
  - [Code](code-1)
- [Part Four - Open Exploration \(Extra Credit\)](part-four---open-exploration-extra-credit)
- [Assignment Submission](assignment-submission)

This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
###Code
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
###Output
_____no_output_____
###Markdown
Assignment Synopsis

In this assignment, we're going to create our first neural network, capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and to aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible, to aid your understanding of how they affect the final result.

We're going to build our first neural network to understand what color "to paint" given a location in an image, i.e. the row, col of the image. So in goes a row/col, and out comes an R/G/B. In the next lesson, we'll learn that what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network, to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.

We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense; it really takes a bit of practice before it starts to come together.

Part One - Fully Connected Network

Instructions

Create the operations necessary for connecting an input to a network, defined by a `tf.placeholder`, to a series of fully connected, or linear, layers, using the formula: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$ where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias. If you're thinking, what is going on? where did all that math come from? don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association between what is written in the equation and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity".
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
###Code
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Remember, having a series of linear operations followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearities when considering which one seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid`, which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.

Choosing between these is often a matter of trial and error, though you can make some insights depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network.

Code

In this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function; have a look at what args it takes:

Help on function placeholder in module `tensorflow.python.ops.array_ops`:

```python
placeholder(dtype, shape=None, name=None)
```

Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example:

```python
x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)
with tf.Session() as sess:
  print(sess.run(y))  # ERROR: will fail because x was not fed.
  rand_array = np.random.rand(1024, 1024)
  print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
```

Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly.

TODO! COMPLETE THIS SECTION!
###Code
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = tf.placeholder(dtype=tf.float32, shape=(None, 2), name='X')
###Output
_____no_output_____
###Markdown
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand ($\textbf{X}$) and right hand side ($\textbf{W}$) of a matrix multiplication.

To create $\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of the functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with them. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_initializer` you should create) when creating your $\textbf{W}$ variable with `tf.get_variable(...)`.

For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the inputs and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs is. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!

This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it.

TODO! COMPLETE THIS SECTION!
###Code
W = tf.get_variable(
name='W',
shape=[2, 20],
initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
h = tf.matmul(X, W)
###Output
_____no_output_____
###Markdown
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
###Code
b = tf.get_variable(
name='b',
shape=[20],
initializer=tf.constant_initializer())
# Add the bias to the result of the matrix multiplication, h,
# rather than to the weight matrix W:
h = tf.nn.bias_add(h, b)
###Output
_____no_output_____
###Markdown
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
###Code
h = tf.nn.relu(h)
###Output
_____no_output_____
###Markdown
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there, including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).

```python
utils.linear??
```

```python
def linear(x, n_output, name=None, activation=None, reuse=None):
    """Fully connected layer

    Parameters
    ----------
    x : tf.Tensor
        Input tensor to connect
    n_output : int
        Number of output neurons
    name : None, optional
        Scope to apply

    Returns
    -------
    op : tf.Tensor
        Output of fully connected layer.
    """
    if len(x.get_shape()) != 2:
        x = flatten(x, reuse=reuse)

    n_input = x.get_shape().as_list()[1]

    with tf.variable_scope(name or "fc", reuse=reuse):
        W = tf.get_variable(
            name='W',
            shape=[n_input, n_output],
            dtype=tf.float32,
            initializer=tf.contrib.layers.xavier_initializer())

        b = tf.get_variable(
            name='b',
            shape=[n_output],
            dtype=tf.float32,
            initializer=tf.constant_initializer(0.0))

        h = tf.nn.bias_add(
            name='h',
            value=tf.matmul(x, W),
            bias=b)

        if activation:
            h = activation(h)

        return h, W
```

Variable Scopes

Note that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:

1. If this happens while you are interactively editing a graph, you may need to reset the current graph:
```python
tf.reset_default_graph()
```
You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts!

2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!

3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so:

```python
g = tf.Graph()
with tf.Session(graph=g) as sess:
    Y_pred, W = utils.linear(X, 3, activation=tf.nn.relu)
```

or:

```python
g = tf.Graph()
with tf.Session(graph=g) as sess, g.as_default():
    Y_pred, W = utils.linear(X, 3, activation=tf.nn.relu)
```

You can now write the same process as the above steps by simply calling:
###Code
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
Part Two - Image Painting Network

Instructions

Follow along the steps below, first setting up the input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network, which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions, so that whatever it is given as input, it minimizes, through its training process, the error between its prediction, $\hat{\textbf{Y}}$, and the true output, $\textbf{Y}$. You'll also create an animated GIF of the training, which you'll need to submit for the homework!

Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine.

Preparing the Data

We'll follow an example that Andrej Karpathy has done in his online demonstration of "image inpainting". What we're going to do is teach the network to go from a location in an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.

TODO! COMPLETE THIS SECTION!
###Code
# First load an image
from skimage.data import astronaut
img = astronaut()
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
###Output
_____no_output_____
###Markdown
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
###Code
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
###Output
_____no_output_____
###Markdown
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
###Code
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
###Output
_____no_output_____
###Markdown
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
###Code
# Normalize the input (xs) using its mean and standard deviation
xs = (xs - np.mean(xs)) / np.std(xs)
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
###Output
-1.71481604244 1.71481604244
###Markdown
Similarly for the output:
###Code
print(np.min(ys), np.max(ys))
###Output
0 254
###Markdown
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
###Code
ys = ys / 255.0
print(np.min(ys), np.max(ys))
###Output
0.0 0.996078431373
###Markdown
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is `X = (row, col)` value. And the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
###Code
plt.imshow(ys.reshape(img.shape))
###Output
_____no_output_____
###Markdown
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
###Code
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(tf.float32, (None, 2), 'X')
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(tf.float32, (None, 3), 'Y')
###Output
_____no_output_____
###Markdown
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons, multiplies it by a linear and non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\begin{align}\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\\end{align}So the next layer will take that output, and connect it up again:\begin{align}\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\\end{align}And same for every other layer:\begin{align}\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\\end{align}Including the very last layer, which will be the prediction of the network:\begin{align}\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)\end{align}Remember if you run into issues with variable scopes/names, that you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.TODO! COMPLETE THIS SECTION!
###Code
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
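Before moving on to the cost, here is a minimal sketch of one possible way to complete the network-building cell above with a loop, rather than six copy-pasted layers (`n_neurons = 20` is only the suggested starting value, and `utils.linear` is as documented earlier):

```python
n_neurons = 20
current_input = X
# Build the 6 hidden layers, each fully connecting the previous
# layer's output to n_neurons new neurons with a relu nonlinearity:
for layer_i in range(1, 7):
    current_input, W = utils.linear(
        current_input, n_neurons,
        activation=tf.nn.relu,
        name='layer{}'.format(layer_i))
# And one last layer to predict the 3 color channel values:
Y_pred, W7 = utils.linear(current_input, 3, activation=None, name='pred')
```
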
Cost Function

Now we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.

Let's say our error is `E`; then the cost will be:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=1}^{\text{B}} \textbf{E}_b$$ where the error is measured as, e.g.:$$\textbf{E} = \displaystyle\sum\limits_{c=1}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$

Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$ and a predicted $\hat{\textbf{Y}}$ is equal to the mean, across a minibatch of $\text{B}$ observations, of the sum of squared distances across the $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see, on average, or at least within a single minibatch's average, how wrong our prediction was. We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.

Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of squared error per channel would be between $0$ and $128^2$. For example, if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0), or 120, for the Green channel it is (50 - 100), or -50, and for the Blue channel, (167 - 120) = 47. When I square these, I get: $(120)^2$, $(-50)^2$, and $(47)^2$. I then add all of these, and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically.
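To double-check that arithmetic, here is a tiny standalone numpy sketch of $\textbf{E}$ for the example above:

```python
import numpy as np

# The worked example above: prediction (120, 50, 167) vs. the
# true color (0, 100, 120):
y_true = np.array([0., 100., 120.])
y_hat = np.array([120., 50., 167.])
E = np.sum((y_true - y_hat) ** 2)  # 120**2 + (-50)**2 + 47**2
print(E)  # 19109.0
```
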
###Code
error = np.linspace(0.0, 128.0**2, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss, which is linear in the error: it simply takes the absolute value of the error. We'll compare the two losses with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
###Code
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as the error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$, but the $l_2$ loss is only $0.1^2 = 0.01$. Having a relatively stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that each do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.

During the lecture, we've seen how to create a cost function using Tensorflow. To create an $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference`, or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network to compare its performance with an $l_2$ loss.

The equation for computing cost I mentioned above is more succinctly written, for the $l_2$ norm, as:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=1}^{\text{B}} \displaystyle\sum\limits_{c=1}^{\text{C}} (\textbf{Y}_{b,c} - \hat{\textbf{Y}}_{b,c})^2$$For the $l_1$ norm, we'd have:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=1}^{\text{B}} \displaystyle\sum\limits_{c=1}^{\text{C}} \text{abs}(\textbf{Y}_{b,c} - \hat{\textbf{Y}}_{b,c})$$

Remember, to understand this equation, try to say it out loud: the $cost$, given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$, the predicted output from the network, is equal to the mean, across a minibatch of $\text{B}$ observations, of the sum, over the $\text{C}$ color channels, of the distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.

TODO! COMPLETE THIS SECTION!
###Code
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
###Output
_____no_output_____
###Markdown
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
###Code
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
###Output
_____no_output_____
###Markdown
We'll now train our network! The code below should do this for you if you've setup everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000). Welcome to Deep Learning :)
###Code
# Initialize all your variables and run the operation with your session
sess.run(tf.global_variables_initializer())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
# Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
###Output
_____no_output_____
###Markdown
Let's now display the GIF we've just created:
###Code
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Explore

Go back over the previous cells and explore changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and seeing how the cost curve changes. What do you notice? Try powers of $10$, e.g. $10^{-1}$, $10^{-2}$, $10^{-3}$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it affect how the cost changes over time?

Be sure to explore other manipulations of the network, such as changing the loss function between $l_2$ and $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding, on this toy problem, of how each affects the network's training. Also try comparing a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100) versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice?

A Note on Crossvalidation

The cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and monitor accuracy both on the data you have used to train and on the new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.

We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above!

Part Three - Learning More than One Image

Instructions

We're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. What would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well, as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we want painted? We're going to try that and see how it does.

You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!

I've placed the same code for running the previous algorithm into two functions, `build_model` and `train`.
You can directly call the function `train` with a 4-d image array shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is, as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its process based on which image is fed as input.
###Code
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
if xs.ndim != 2:
        raise ValueError(
            'xs should be an n_observations x n_features, '
            'i.e. a 2-dimensional array.')
    if ys.ndim != 2:
        raise ValueError(
            'ys should be an n_observations x n_features, '
            'i.e. a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
    xs = np.array(all_xs).reshape(-1, 3)
    # Standardize the (row, col, img_i) inputs: zero mean, unit std.
    xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
    ys = np.array(all_ys).reshape(-1, 3)
    # Scale colors from [0, 255] to [-1, 1] to match the tanh output layer.
    ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
sess.run(tf.global_variables_initializer())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
        # Also, every gif_step iterations, we'll draw the prediction of our
        # input xs, which should try to recreate our images!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
###Output
_____no_output_____
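###Markdown
To make the extra "which image" input concrete, here is a tiny, purely illustrative check (the `tiny` and `demo_xs` names are made up for this example) of what the rows fed to the network look like for two 2 x 2 images: each row is (row, col, image index).
###Code
# Build the (row, col, img_i) inputs for two tiny blank images,
# exactly as the train() function above does before normalizing.
tiny = np.zeros((2, 2, 2, 3))  # 2 images, each 2 x 2 pixels, 3 channels
demo_xs = []
for img_i, img in enumerate(tiny):
    coords, _ = split_image(img)
    demo_xs.append(np.c_[coords, np.repeat(img_i, coords.shape[0])])
print(np.vstack(demo_xs))
###Output
_____no_output_____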
###Markdown
Code

Below, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!

TODO! COMPLETE THIS SECTION!
###Code
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
###Output
_____no_output_____
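###Markdown
If you want to swap in your own dataset here, a minimal sketch follows. The filenames are hypothetical placeholders; any loader that yields same-sized H x W x C arrays will work:
###Code
# Hypothetical example of loading your own files into an N x H x W x C array:
# my_files = ['my_img_1.png', 'my_img_2.png']  # replace with real paths
# my_imgs = [imresize(plt.imread(f), (100, 100)) for f in my_files]
# imgs = np.array(my_imgs).copy()
###Output
_____no_output_____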
###Markdown
Explore changing the parameters of the `train` function, and try it with your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.

TODO! COMPLETE THIS SECTION!
###Code
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
###Output
_____no_output_____
###Markdown
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
###Code
# Un-normalize from tanh's [-1, 1] range back to [0, 255] pixel values
montage_gifs = [np.clip(utils.montage(
    (m * 127.5) + 127.5), 0, 255).astype(np.uint8)
    for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
###Output
_____no_output_____
###Markdown
And show it in the notebook
###Code
ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel locations (and image index) for each of our 100 images; they go through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can also visualize just the last iteration as a "latent" space, going from the first image (the top left image in the montage) to the last image (the bottom right image).
###Code
final = gifs[-1]
final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]
gif.build_gif(final_gif, saveto='final.gif')
ipyd.Image(url='final.gif?{}'.format(np.random.rand()),
height=200, width=200)
###Output
_____no_output_____
###Markdown
Part Four - Open Exploration (Extra Credit)

I now want you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse, and tries to guess where a given color should be painted? What if it was only taught a certain palette, and had to reason about other colors: how would it interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, a different number of layers, or more or fewer neurons? I leave any of these as an open exploration for you.

Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit or it will require a much larger machine, and much more time, to train. Then whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning!

Make sure to name the result of your gif: "explore.gif", and be sure to include it in your zip file.

TODO! COMPLETE THIS SECTION!
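As a quick tip for the exploration above, here is one simple way to scale your images down before training (a sketch; the 64 x 64 target size is just an example):
###Code
# Shrink every image before calling train(), to keep training fast:
imgs_small = np.array([imresize(img_i, (64, 64)) for img_i in imgs])
###Output
_____no_output_____
###Markdown
Your exploration goes in the next cell: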
###Code
# Train a network to produce something, storing every few
# iterations in the variable gifs, then export the training
# over time as a gif.
...
gif.build_gif(montage_gifs, saveto='explore.gif')
ipyd.Image(url='explore.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Assignment Submission

After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files, named exactly as:

    session-2/
      session-2.ipynb
      single.gif
      multiple.gif
      final.gif
      explore.gif*
      libs/
        utils.py

    * = optional/extra-credit

You'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.

To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info

Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the CADL hashtag so that other students can find your work!
###Code
utils.build_submission('session-2.zip',
('reference.png',
'single.gif',
'multiple.gif',
'final.gif',
'session-2.ipynb'),
                       ('explore.gif',))
###Output
_____no_output_____
###Markdown
Session 2 - Training a Network w/ Tensorflow

Assignment: Teach a Deep Neural Network to Paint

Parag K. Mital
Creative Applications of Deep Learning w/ Tensorflow
Kadenze Academy
CADL

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Learning Goals

* Learn how to create a Neural Network
* Learn to use a neural network to paint an image
* Apply creative thinking to the inputs, outputs, and definition of a network

Outline

- [Assignment Synopsis](#assignment-synopsis)
- [Part One - Fully Connected Network](#part-one---fully-connected-network)
  - [Instructions](#instructions)
  - [Code](#code)
  - [Variable Scopes](#variable-scopes)
- [Part Two - Image Painting Network](#part-two---image-painting-network)
  - [Instructions](#instructions-1)
  - [Preparing the Data](#preparing-the-data)
  - [Cost Function](#cost-function)
  - [Explore](#explore)
  - [A Note on Crossvalidation](#a-note-on-crossvalidation)
- [Part Three - Learning More than One Image](#part-three---learning-more-than-one-image)
  - [Instructions](#instructions-2)
  - [Code](#code-1)
- [Part Four - Open Exploration \(Extra Credit\)](#part-four---open-exploration-extra-credit)
- [Assignment Submission](#assignment-submission)

This next section will just make sure you have the right version of Python and the libraries that we'll be using. Don't change the code here, but make sure you "run" it (use "shift+enter")!
###Code
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
###Output
_____no_output_____
###Markdown
Assignment Synopsis

In this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to help aid your understanding of how they affect the final result.

We're going to build our first neural network to understand what color "to paint" given a location in an image, i.e. the row, col of the image. So in goes a row/col, and out goes an R/G/B. In the next lesson, we'll learn that what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.

We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense; it really takes a bit of practice before it starts to come together.

Part One - Fully Connected Network

Instructions

Create the operations necessary for connecting an input to a network, defined by a `tf.Placeholder`, to a series of fully connected, or linear, layers, using the formula:

$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$

where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias.

If you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association between what is written in the equation and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity".
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
###Code
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
###Output
_____no_output_____
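###Markdown
Before building this in Tensorflow, here is a tiny numpy sketch of the equation above, just to make the shapes concrete (the `demo_` names are made up for illustration):
###Code
# H = phi(XW + b) with relu as phi, written in plain numpy:
demo_X = np.random.randn(4, 2)   # 4 observations, 2 input features
demo_W = np.random.randn(2, 20)  # connects 2 inputs to 20 neurons
demo_b = np.zeros(20)            # one bias value per output neuron
demo_H = np.maximum(demo_X.dot(demo_W) + demo_b, 0)
print(demo_H.shape)              # -> (4, 20)
###Output
_____no_output_____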
###Markdown
Remember, having a series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearities when considering which nonlinearity seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid`, which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.

Choosing between these is often a matter of trial and error. Though you can make some insights depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network.

Code

In this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function; have a look at what args it takes:

Help on function placeholder in module `tensorflow.python.ops.array_ops`:

```python
placeholder(dtype, shape=None, name=None)
```

    Inserts a placeholder for a tensor that will be always fed.

    **Important**: This tensor will produce an error if evaluated.
    Its value must be fed using the `feed_dict` optional argument to
    `Session.run()`, `Tensor.eval()`, or `Operation.run()`.

    For example:

```python
x = tf.placeholder(tf.float32, shape=(1024, 1024))
y = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(y))  # ERROR: will fail because x was not fed.

    rand_array = np.random.rand(1024, 1024)
    print(sess.run(y, feed_dict={x: rand_array}))  # Will succeed.
```

    Args:
      dtype: The type of elements in the tensor to be fed.
      shape: The shape of the tensor to be fed (optional). If the shape
        is not specified, you can feed a tensor of any shape.
      name: A name for the operation (optional).

    Returns:
      A `Tensor` that may be used as a handle for feeding a value, but
      not evaluated directly.

TODO! COMPLETE THIS SECTION!
###Code
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
###Output
_____no_output_____
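###Markdown
If you get stuck, one possible way to complete the cell above (left as a comment so it doesn't overwrite your own answer):
###Code
# X = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)
###Output
_____no_output_____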
###Markdown
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand side ($\textbf{X}$) and the right hand side ($\textbf{W}$) of a matrix multiplication.

To create $\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of the functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_initializer` you should create) when creating your $\textbf{W}$ variable with `tf.get_variable(...)`.

For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the input and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!

This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it.

TODO! COMPLETE THIS SECTION!
###Code
W = tf.get_variable(...
h = tf.matmul(...
###Output
_____no_output_____
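###Markdown
For reference, one possible completion, shown inside a throwaway graph so it cannot collide with the `W` you create above (the `stddev=0.1` is just one reasonable choice, and the `_demo` names are illustrative):
###Code
with tf.Graph().as_default():
    X_demo = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)
    W_demo = tf.get_variable(
        name='W', shape=[2, 20], dtype=tf.float32,
        initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
    h_demo = tf.matmul(X_demo, W_demo)
    print(h_demo.get_shape().as_list())  # -> [None, 20]
###Output
_____no_output_____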
###Markdown
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
###Code
b = tf.get_variable(...
h = tf.nn.bias_add(...
###Output
_____no_output_____
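###Markdown
And similarly for the bias, again in a throwaway graph, continuing the sketch above (the last line also shows the nonlinearity the next cell asks for):
###Code
with tf.Graph().as_default():
    X_demo = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)
    W_demo = tf.get_variable(
        name='W', shape=[2, 20], dtype=tf.float32,
        initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
    b_demo = tf.get_variable(
        name='b', shape=[20], dtype=tf.float32,
        initializer=tf.constant_initializer(0.0))
    h_demo = tf.nn.bias_add(tf.matmul(X_demo, W_demo), b_demo)
    h_demo = tf.nn.relu(h_demo)  # the nonlinearity, completing H = phi(XW + b)
    print(h_demo.get_shape().as_list())  # -> [None, 20]
###Output
_____no_output_____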
###Markdown
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
###Code
h = ...
###Output
_____no_output_____
###Markdown
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there, including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).

```python
utils.linear??
```

```python
def linear(x, n_output, name=None, activation=None, reuse=None):
    """Fully connected layer

    Parameters
    ----------
    x : tf.Tensor
        Input tensor to connect
    n_output : int
        Number of output neurons
    name : None, optional
        Scope to apply

    Returns
    -------
    op : tf.Tensor
        Output of fully connected layer.
    """
    if len(x.get_shape()) != 2:
        x = flatten(x, reuse=reuse)

    n_input = x.get_shape().as_list()[1]

    with tf.variable_scope(name or "fc", reuse=reuse):
        W = tf.get_variable(
            name='W',
            shape=[n_input, n_output],
            dtype=tf.float32,
            initializer=tf.contrib.layers.xavier_initializer())

        b = tf.get_variable(
            name='b',
            shape=[n_output],
            dtype=tf.float32,
            initializer=tf.constant_initializer(0.0))

        h = tf.nn.bias_add(
            name='h',
            value=tf.matmul(x, W),
            bias=b)

        if activation:
            h = activation(h)

        return h, W
```

Variable Scopes

Note that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:

1. If this happens while you are interactively editing a graph, you may need to reset the current graph:

```python
tf.reset_default_graph()
```

   You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts!

2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!

3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so:

```python
g = tf.Graph()
with tf.Session(graph=g) as sess:
    Y_pred, W = utils.linear(X, 3, activation=tf.nn.relu)
```

   or:

```python
g = tf.Graph()
with tf.Session(graph=g) as sess, g.as_default():
    Y_pred, W = utils.linear(X, 3, activation=tf.nn.relu)
```

You can now write the same process as the above steps by simply calling:
###Code
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
Part Two - Image Painting Network

Instructions

Follow along the steps below, first setting up input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network, which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions, so that whatever it is given as input, it minimizes the error between its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!

Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and to get you thinking about how the inputs and outputs can be anything you can imagine.

Preparing the Data

We'll follow an example that Andrej Karpathy has done in his online demonstration of "image inpainting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.

TODO! COMPLETE THIS SECTION!
###Code
# First load an image
img = ...
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
###Output
_____no_output_____
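###Markdown
If you don't have an image handy, one possibility for the cell above is a stock image from skimage's `data` module, which is already imported at the top of the notebook (left as a comment so it doesn't overwrite your own choice):
###Code
# img = data.astronaut()
###Output
_____no_output_____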
###Markdown
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
###Code
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
###Output
_____no_output_____
###Markdown
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
###Code
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
###Output
_____no_output_____
###Markdown
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
###Code
# Normalize the input (xs) using its mean and standard deviation
xs = ...
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
###Output
_____no_output_____
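###Markdown
One possible completion of the normalization cell above, matching the standardization used in the `train` function's code (left commented, since running it twice would normalize the data twice):
###Code
# xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
###Output
_____no_output_____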
###Markdown
Similarly for the output:
###Code
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
###Code
ys = ys / 255.0
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is `X = (row, col)` value. And the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
###Code
plt.imshow(ys.reshape(img.shape))
###Output
_____no_output_____
###Markdown
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
###Code
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(...
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(...
###Output
_____no_output_____
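###Markdown
For reference, one possible completion of the two placeholders above (left commented so they don't replace your own):
###Code
# X = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)
# Y = tf.placeholder(name='Y', shape=[None, 3], dtype=tf.float32)
###Output
_____no_output_____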
###Markdown
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons and multiplies it by a linear and non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally add one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers.

Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:

\begin{align}
\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\
\end{align}

So the next layer will take that output, and connect it up again:

\begin{align}
\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\
\end{align}

And the same for every other layer:

\begin{align}
\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\
\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\
\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\
\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\
\end{align}

Including the very last layer, which will be the prediction of the network:

\begin{align}
\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)
\end{align}

Remember: if you run into issues with variable scopes/names, you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.

TODO! COMPLETE THIS SECTION!
###Code
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
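###Markdown
For reference, the six hidden layers can also be written as a loop. This sketch uses a throwaway graph so it can be re-run without variable-name conflicts, and the `X_l`/`current_input` names are illustrative; the layer width of 20 matches the suggestion above:
###Code
with tf.Graph().as_default():
    X_l = tf.placeholder(tf.float32, [None, 2], name='X')
    current_input = X_l
    for layer_i in range(1, 7):
        # Each pass adds one linear + relu layer of 20 neurons.
        current_input, _ = utils.linear(
            current_input, 20, activation=tf.nn.relu,
            name='layer{}'.format(layer_i))
    Y_pred_l, _ = utils.linear(current_input, 3,
                               activation=None, name='pred')
    print(Y_pred_l.get_shape().as_list())  # -> [None, 3]
###Output
_____no_output_____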
###Markdown
Cost Function

Now we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.

Let's say our error is `E`, then the cost will be:

$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b$$

where the error is measured as, e.g.:

$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$

Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see, on average, or at least within a single minibatch's average, how wrong our prediction was. We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.

Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of error per channel might be anywhere between $0$ and $128$, giving a squared loss between $0$ and $128^2$. For example, if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square these, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch.

Let's try to see what the square in our measure of error is doing graphically.
###Code
error = np.linspace(0.0, 128.0, 100)
loss = error ** 2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss. It is linear in error, by taking the absolute value of the error. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
###Code
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are more unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.

During the lecture, we've seen how to create a cost function using Tensorflow. To create a $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference`, or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with a $l_2$ loss.

The equation for computing cost I mentioned above is more succinctly written, for the $l_2$ norm, as:

$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$

For the $l_1$ norm, we'd have:

$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$

Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$, the predicted output from the network, is equal to the mean across $\text{B}$ batches of the sum over $\text{C}$ color channels of the distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.

TODO! COMPLETE THIS SECTION!
###Code
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
###Output
_____no_output_____
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
###Output
_____no_output_____
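###Markdown
For reference, here is the whole chain from the three cells above in one throwaway-graph sketch, using the $l_1$ norm suggested first (swap in `tf.squared_difference` for $l_2$; the `_d` names are illustrative):
###Code
with tf.Graph().as_default():
    Y_d = tf.placeholder(tf.float32, [None, 3])
    Y_pred_d = tf.placeholder(tf.float32, [None, 3])
    error_d = tf.abs(Y_d - Y_pred_d)         # per-channel error: [None, 3]
    sum_error_d = tf.reduce_sum(error_d, 1)  # per-observation error: [None]
    cost_d = tf.reduce_mean(sum_error_d)     # mean over the batch: a scalar
    print(cost_d.get_shape().as_list())      # -> []
###Output
_____no_output_____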
###Markdown
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
###Code
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
###Output
_____no_output_____
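###Markdown
One possible way to fill in the cell above (the values are just one choice satisfying the constraints in the comments; left commented so it doesn't overwrite your own):
###Code
# optimizer = tf.train.AdamOptimizer(
#     learning_rate=0.001).minimize(cost)
# n_iterations = 50
# batch_size = 250
###Output
_____no_output_____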
###Markdown
We'll now train our network! The code below should do this for you if you've setup everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000). Welcome to Deep Learning :)
###Code
# Initialize all your variables and run the operation with your session
sess.run(tf.global_variables_initializer())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
    # Also, every gif_step iterations, we'll draw the prediction of our
    # input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
###Output
_____no_output_____
###Markdown
Let's now display the GIF we've just created:
###Code
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
ExploreGo back over the previous cells and exploring changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and see how the cost curve changes. What do you notice? Try exponents of $10$, e.g. $10^1$, $10^2$, $10^3$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it effect how the cost changes over time?Be sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it effects the network's training. Also try comparing creating a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice? A Note on CrossvalidationThe cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy on both the data you have used to train, but also that new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above! Part Three - Learning More than One Image InstructionsWe're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. How would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we wanted painted? We're going to try and see how that does.You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!I've placed the same code for running the previous algorithm into two functions, `build_model` and `train`. 
You can directly call the function `train` with a 4-d image shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separates its process based on which image is fed as input.
###Code
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
if xs.ndim != 2:
raise ValueError(
'xs should be a n_observates x n_features, ' +
'or a 2-dimensional array.')
if ys.ndim != 2:
raise ValueError(
'ys should be a n_observates x n_features, ' +
'or a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
xs = np.array(all_xs).reshape(-1, 3)
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
ys = np.array(all_ys).reshape(-1, 3)
ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
sess.run(tf.initialize_all_variables())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
# Also, every 20 iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
###Output
_____no_output_____
###Markdown
CodeBelow, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!TODO! COMPLETE THIS SECTION!
###Code
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
###Output
_____no_output_____
###Markdown
Explore changing the parameters of the `train` function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.TODO! COMPLETE THIS SECTION!
###Code
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
###Output
_____no_output_____
###Markdown
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
###Code
montage_gifs = [np.clip(utils.montage(
(m * 127.5) + 127.5), 0, 255).astype(np.uint8)
for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
###Output
_____no_output_____
###Markdown
And show it in the notebook
###Code
ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel values of each of our 100 images, it goes through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can visualize just the last iteration as a "latent" space, going from the first image (the top left image in the montage), to the last image, (the bottom right image).
###Code
final = gifs[-1]
final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]
gif.build_gif(final_gif, saveto='final.gif')
ipyd.Image(url='final.gif?{}'.format(np.random.rand()),
height=200, width=200)
###Output
_____no_output_____
###Markdown
Part Four - Open Exploration (Extra Credit)I now what you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse, tries to guess where a given color should be painted? What if it was only taught a certain palette, and had to reason about other colors, how it would interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, number of layers, increasing number of neurons or lesser number of neurons? I leave any of these as an open exploration for you.Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit or it will require a much larger machine, and much more time to train. Then whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning!Make sure to name the result of your gif: "explore.gif", and be sure to include it in your zip file. TODO! COMPLETE THIS SECTION!
###Code
# Train a network to produce something, storing every few
# iterations in the variable gifs, then export the training
# over time as a gif.
...
gif.build_gif(montage_gifs, saveto='explore.gif')
ipyd.Image(url='explore.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Assignment SubmissionAfter you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as: session-2/ session-2.ipynb single.gif multiple.gif final.gif explore.gif* libs/ utils.py * = optional/extra-creditYou'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/infoAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the CADL hashtag so that other students can find your work!
###Code
utils.build_submission('session-2.zip',
('reference.png',
'single.gif',
'multiple.gif',
'final.gif',
'session-2.ipynb'),
('explore.gif'))
###Output
_____no_output_____
###Markdown
Session 2 - Training a Network w/ TensorflowAssignment: Teach a Deep Neural Network to PaintParag K. MitalCreative Applications of Deep Learning w/ TensorflowKadenze AcademyCADLThis work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Learning Goals* Learn how to create a Neural Network* Learn to use a neural network to paint an image* Apply creative thinking to the inputs, outputs, and definition of a network Outline- [Assignment Synopsis](assignment-synopsis)- [Part One - Fully Connected Network](part-one---fully-connected-network) - [Instructions](instructions) - [Code](code) - [Variable Scopes](variable-scopes)- [Part Two - Image Painting Network](part-two---image-painting-network) - [Instructions](instructions-1) - [Preparing the Data](preparing-the-data) - [Cost Function](cost-function) - [Explore](explore) - [A Note on Crossvalidation](a-note-on-crossvalidation)- [Part Three - Learning More than One Image](part-three---learning-more-than-one-image) - [Instructions](instructions-2) - [Code](code-1)- [Part Four - Open Exploration \(Extra Credit\)](part-four---open-exploration-extra-credit)- [Assignment Submission](assignment-submission)This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
###Code
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif
import IPython.display as ipyd
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
padding: 2px 4px;
color: #c7254e;
background-color: #f9f2f4;
border-radius: 4px;
} </style>""")
###Output
_____no_output_____
###Markdown
Assignment SynopsisIn this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to aid your understanding of how they affect the final result.We're going to build our first neural network to understand what color "to paint" given a location in an image, i.e. the row, col of the image. So in goes a row/col, and out goes an R/G/B. In the next lesson, we'll learn that what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense; it really takes a bit of practice before it starts to come together.
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
###Code
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Remember, having a series of linear operations followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearities when considering which nonlinearity seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid` which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.Choosing between these is often a matter of trial and error. Though you can gain some insight depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network. CodeIn this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function, have a look at what args it takes:Help on function placeholder in module `tensorflow.python.ops.array_ops`:```pythonplaceholder(dtype, shape=None, name=None)``` Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example:```pythonx = tf.placeholder(tf.float32, shape=(1024, 1024))y = tf.matmul(x, x)with tf.Session() as sess: print(sess.run(y)) ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) Will succeed.``` Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly. TODO! COMPLETE THIS SECTION!
###Code
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
###Output
_____no_output_____
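###Markdown
As a hedged reference, one way the cell above could be completed; the arguments follow the docstring just shown:
###Code
# A batch of unknown size, where each observation has 2 features (row, col):
X = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='X')
###Output
_____no_output_____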
###Markdown
Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand ($\textbf{X}$) and right hand side ($\textbf{W}$) of a matrix multiplication.To create $\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_initializer` you should create) when creating your $\textbf{W}$ variable with `tf.get_variable(...)`.For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the inputs and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it. TODO! COMPLETE THIS SECTION!
###Code
W = tf.get_variable(...
h = tf.matmul(...
###Output
_____no_output_____
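###Markdown
For reference, a minimal sketch of one way to fill this in; the mean and standard deviation passed to the initializer are illustrative starting values, not the only valid choice:
###Code
# Create the 2 x 20 weight matrix and left-multiply X by it:
W = tf.get_variable(
    name='W',
    shape=[2, 20],
    dtype=tf.float32,
    initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
h = tf.matmul(X, W)
###Output
_____no_output_____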
###Markdown
And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.TODO! COMPLETE THIS SECTION!
###Code
b = tf.get_variable(...
h = tf.nn.bias_add(...
###Output
_____no_output_____
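###Markdown
A possible completion, initializing the bias to the constant 0.0 suggested above:
###Code
# Create the [20]-dimensional bias and add it to every output neuron:
b = tf.get_variable(
    name='b',
    shape=[20],
    dtype=tf.float32,
    initializer=tf.constant_initializer(0.0))
h = tf.nn.bias_add(value=h, bias=b)
###Output
_____no_output_____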
###Markdown
So far we have done:$$\textbf{X}\textbf{W} + \textbf{b}$$Finally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:$$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$TODO! COMPLETE THIS SECTION!
###Code
h = ...
###Output
_____no_output_____
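###Markdown
For example, using the rectified linear unit as the nonlinearity:
###Code
h = tf.nn.relu(h, name='h')
###Output
_____no_output_____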
###Markdown
Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).```pythonutils.linear??``````pythondef linear(x, n_output, name=None, activation=None, reuse=None): """Fully connected layer Parameters ---------- x : tf.Tensor Input tensor to connect n_output : int Number of output neurons name : None, optional Scope to apply Returns ------- op : tf.Tensor Output of fully connected layer. """ if len(x.get_shape()) != 2: x = flatten(x, reuse=reuse) n_input = x.get_shape().as_list()[1] with tf.variable_scope(name or "fc", reuse=reuse): W = tf.get_variable( name='W', shape=[n_input, n_output], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer()) b = tf.get_variable( name='b', shape=[n_output], dtype=tf.float32, initializer=tf.constant_initializer(0.0)) h = tf.nn.bias_add( name='h', value=tf.matmul(x, W), bias=b) if activation: h = activation(h) return h, W``` Variable ScopesNote that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:1. If this happens while you are interactively editing a graph, you may need to reset the current graph:```python tf.reset_default_graph()```You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts! 2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so: ```python g = tf.Graph() with tf.Session(graph=g) as sess: Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` or: ```python g = tf.Graph() with tf.Session(graph=g) as sess, g.as_default(): Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu) ``` You can now write the same process as the above steps by simply calling:
###Code
h, W = utils.linear(
x=X, n_output=20, name='linear', activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
Part Two - Image Painting Network InstructionsFollow along the steps below, first setting up input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions so that, whatever it is given as input, it minimizes the error between its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine. Preparing the DataWe'll follow an example that Andrej Karpathy has done in his online demonstration of "image inpainting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.TODO! COMPLETE THIS SECTION!
###Code
# First load an image
img = ...
# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)
# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
###Output
_____no_output_____
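###Markdown
If you don't have an image handy, one option (a sketch using the `skimage.data` module imported at the top of the notebook) is to start from a built-in sample image; any RGB image of your own works just as well:
###Code
# e.g. a built-in sample image from scikit-image:
img = data.astronaut()
###Output
_____no_output_____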
###Markdown
In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.
###Code
def split_image(img):
# We'll first collect all the positions in the image in our list, xs
xs = []
# And the corresponding colors for each of these positions
ys = []
# Now loop over the image
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
# And store the inputs
xs.append([row_i, col_i])
# And outputs that the network needs to learn to predict
ys.append(img[row_i, col_i])
# we'll convert our lists to arrays
xs = np.array(xs)
ys = np.array(ys)
return xs, ys
###Output
_____no_output_____
###Markdown
Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):
###Code
xs, ys = split_image(img)
# and print the shapes
xs.shape, ys.shape
###Output
_____no_output_____
###Markdown
Also remember, we should normalize our input values!TODO! COMPLETE THIS SECTION!
###Code
# Normalize the input (xs) using its mean and standard deviation
xs = ...
# Just to make sure you have normalized it correctly:
print(np.min(xs), np.max(xs))
assert(np.min(xs) > -3.0 and np.max(xs) < 3.0)
###Output
_____no_output_____
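###Markdown
One standard way to do this, matching the normalization used later in the `train` function:
###Code
# Zero-center each feature and scale to unit standard deviation:
xs = (xs - np.mean(xs, axis=0)) / np.std(xs, axis=0)
###Output
_____no_output_____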
###Markdown
Similarly for the output:
###Code
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:
###Code
ys = ys / 255.0
print(np.min(ys), np.max(ys))
###Output
_____no_output_____
###Markdown
Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.What we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is `X = (row, col)` value. And the output of the network is `Y = (r, g, b)`.We can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:
###Code
plt.imshow(ys.reshape(img.shape))
###Output
_____no_output_____
###Markdown
But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).Create 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\textbf{Y}$.TODO! COMPLETE THIS SECTION!
###Code
# Let's reset the graph:
tf.reset_default_graph()
# Create a placeholder of None x 2 dimensions and dtype tf.float32
# This will be the input to the network which takes the row/col
X = tf.placeholder(...
# Create the placeholder, Y, with 3 output dimensions instead of 2.
# This will be the output of the network, the R, G, B values.
Y = tf.placeholder(...
###Output
_____no_output_____
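###Markdown
A possible completion of the two placeholders:
###Code
X = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, 3], dtype=tf.float32)
###Output
_____no_output_____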
###Markdown
Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons, passes it through a linear and non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\begin{align}\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\\end{align}So the next layer will take that output, and connect it up again:\begin{align}\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\\end{align}And same for every other layer:\begin{align}\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\\end{align}Including the very last layer, which will be the prediction of the network:\begin{align}\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)\end{align}Remember, if you run into issues with variable scopes/names: you cannot recreate a variable with the same name! Revisit the section on Variable Scopes if you get stuck with name issues.TODO! COMPLETE THIS SECTION!
###Code
# We'll create 6 hidden layers. Let's create a variable
# to say how many neurons we want for each of the layers
# (try 20 to begin with, then explore other values)
n_neurons = ...
# Create the first linear + nonlinear layer which will
# take the 2 input neurons and fully connects it to 20 neurons.
# Use the `utils.linear` function to do this just like before,
# but also remember to give names for each layer, such as
# "1", "2", ... "5", or "layer1", "layer2", ... "layer6".
h1, W1 = ...
# Create another one:
h2, W2 = ...
# and four more (or replace all of this with a loop if you can!):
h3, W3 = ...
h4, W4 = ...
h5, W5 = ...
h6, W6 = ...
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
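###Markdown
As a hedged sketch, the six hidden layers can also be written as a loop over `utils.linear`, mirroring the `build_model` function further below. Note this is an alternative to the cell above, not an addition to it: you would need to reset the graph and re-create the placeholders first, or the variable scopes would collide as discussed earlier.
###Code
# Alternative: build the 6 hidden layers in a loop (run instead of the cell above).
n_neurons = 20
current_input = X
for layer_i in range(6):
    current_input, W = utils.linear(
        current_input, n_neurons,
        activation=tf.nn.relu,
        name='layer{}'.format(layer_i + 1))
h6 = current_input
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')
###Output
_____no_output_____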
###Markdown
Cost FunctionNow we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.Let's say our error is `E`, then the cost will be:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b$$where the error is measured as, e.g.:$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output". Basically, we're trying to see on average, or at least within a single minibatch's average, how wrong our prediction was. We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of error would be between $0$ and $128^2$. For example if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically.
###Code
error = np.linspace(0.0, 128.0**2, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
###Output
_____no_output_____
###Markdown
This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss. It is linear in error, by taking the absolute value of the error. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.
###Code
error = np.linspace(0.0, 1.0, 100)
plt.plot(error, error**2, label='l_2 loss')
plt.plot(error, np.abs(error), label='l_1 loss')
plt.xlabel('error')
plt.ylabel('loss')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.During the lecture, we've seen how to create a cost function using Tensorflow. To create an $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference` or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with an $l_2$ loss.The equation for computing cost I mentioned above is more succinctly written as, for $l_2$ norm:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$For $l_1$ norm, we'd have:$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$ the predicted output from the network, is equal to the mean across $\text{B}$ batches, of the sum across $\text{C}$ color channels of the distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.TODO! COMPLETE THIS SECTION!
###Code
# first compute the error, the inner part of the summation.
# This should be the l1-norm or l2-norm of the distance
# between each color channel.
error = ...
assert(error.get_shape().as_list() == [None, 3])
###Output
_____no_output_____
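###Markdown
For instance, the l1-norm distance for each color channel; swap in `tf.squared_difference(Y, Y_pred)` to try the l2 version instead:
###Code
error = tf.abs(Y - Y_pred)
###Output
_____no_output_____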
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Now sum the error for each feature in Y.
# If Y is [Batch, Features], the sum should be [Batch]:
sum_error = ...
assert(sum_error.get_shape().as_list() == [None])
###Output
_____no_output_____
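###Markdown
One way to do this is to sum over the feature (color-channel) axis:
###Code
sum_error = tf.reduce_sum(error, axis=1)
###Output
_____no_output_____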
###Markdown
TODO! COMPLETE THIS SECTION!
###Code
# Finally, compute the cost, as the mean error of the batch.
# This should be a single value.
cost = ...
assert(cost.get_shape().as_list() == [])
###Output
_____no_output_____
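###Markdown
And finally the mean over the batch:
###Code
cost = tf.reduce_mean(sum_error)
###Output
_____no_output_____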
###Markdown
We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.TODO! COMPLETE THIS SECTION!
###Code
# Refer to the help for the function
optimizer = tf.train....minimize(cost)
# Create parameters for the number of iterations to run for (< 100)
n_iterations = ...
# And how much data is in each minibatch (< 500)
batch_size = ...
# Then create a session
sess = tf.Session()
###Output
_____no_output_____
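###Markdown
One possible setup, using the Adam optimizer as the `train` function later in this notebook does; the learning rate, iteration count, and batch size here are illustrative starting points to explore from:
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
n_iterations = 50
batch_size = 200
###Output
_____no_output_____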
###Markdown
We'll now train our network! The code below should do this for you if you've set up everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000). Welcome to Deep Learning :)
###Code
# Initialize all your variables and run the operation with your session
sess.run(tf.global_variables_initializer())
# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
training_cost = sess.run(
[cost, optimizer],
feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]
    # Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
img = np.clip(ys_pred.reshape(img.shape), 0, 1)
imgs.append(img)
# Plot the cost over time
fig, ax = plt.subplots(1, 2)
ax[0].plot(costs)
ax[0].set_xlabel('Iteration')
ax[0].set_ylabel('Cost')
ax[1].imshow(img)
fig.suptitle('Iteration {}'.format(it_i))
plt.show()
# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
###Output
_____no_output_____
###Markdown
Let's now display the GIF we've just created:
###Code
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
ExploreGo back over the previous cells and explore changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and see how the cost curve changes. What do you notice? Try exponents of $10$, e.g. $10^1$, $10^2$, $10^3$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it affect how the cost changes over time?Be sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it affects the network's training. Also try comparing a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice? A Note on CrossvalidationThe cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy both on the data you have used to train and on that new 10% of unseen validation data. This gives you a sense of how "general" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above!
You can directly call the function `train` with a 4-d array of images shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its process based on which image is fed as input.
###Code
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
final_activation_fn, cost_type):
xs = np.asarray(xs)
ys = np.asarray(ys)
    if xs.ndim != 2:
        raise ValueError(
            'xs should be n_observations x n_features, ' +
            'i.e. a 2-dimensional array.')
    if ys.ndim != 2:
        raise ValueError(
            'ys should be n_observations x n_features, ' +
            'i.e. a 2-dimensional array.')
n_xs = xs.shape[1]
n_ys = ys.shape[1]
X = tf.placeholder(name='X', shape=[None, n_xs],
dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, n_ys],
dtype=tf.float32)
current_input = X
for layer_i in range(n_layers):
current_input = utils.linear(
current_input, n_neurons,
activation=activation_fn,
name='layer{}'.format(layer_i))[0]
Y_pred = utils.linear(
current_input, n_ys,
activation=final_activation_fn,
name='pred')[0]
if cost_type == 'l1_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.abs(Y - Y_pred), 1))
elif cost_type == 'l2_norm':
cost = tf.reduce_mean(tf.reduce_sum(
tf.squared_difference(Y, Y_pred), 1))
else:
raise ValueError(
'Unknown cost_type: {}. '.format(
cost_type) + 'Use only "l1_norm" or "l2_norm"')
return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}
def train(imgs,
learning_rate=0.0001,
batch_size=200,
n_iterations=10,
gif_step=2,
n_neurons=30,
n_layers=10,
activation_fn=tf.nn.relu,
final_activation_fn=tf.nn.tanh,
cost_type='l2_norm'):
N, H, W, C = imgs.shape
all_xs, all_ys = [], []
for img_i, img in enumerate(imgs):
xs, ys = split_image(img)
all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
all_ys.append(ys)
xs = np.array(all_xs).reshape(-1, 3)
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
ys = np.array(all_ys).reshape(-1, 3)
ys = ys / 127.5 - 1
g = tf.Graph()
with tf.Session(graph=g) as sess:
model = build_model(xs, ys, n_neurons, n_layers,
activation_fn, final_activation_fn,
cost_type)
optimizer = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(model['cost'])
sess.run(tf.global_variables_initializer())
gifs = []
costs = []
step_i = 0
for it_i in range(n_iterations):
# Get a random sampling of the dataset
idxs = np.random.permutation(range(len(xs)))
# The number of batches we have to iterate over
n_batches = len(idxs) // batch_size
training_cost = 0
# Now iterate over our stochastic minibatches:
for batch_i in range(n_batches):
# Get just minibatch amount of data
idxs_i = idxs[batch_i * batch_size:
(batch_i + 1) * batch_size]
# And optimize, also returning the cost so we can monitor
# how our optimization is doing.
cost = sess.run(
[model['cost'], optimizer],
feed_dict={model['X']: xs[idxs_i],
model['Y']: ys[idxs_i]})[0]
training_cost += cost
print('iteration {}/{}: cost {}'.format(
it_i + 1, n_iterations, training_cost / n_batches))
            # Also, every gif_step iterations, we'll draw the prediction of our
# input xs, which should try to recreate our image!
if (it_i + 1) % gif_step == 0:
costs.append(training_cost / n_batches)
ys_pred = model['Y_pred'].eval(
feed_dict={model['X']: xs}, session=sess)
img = ys_pred.reshape(imgs.shape)
gifs.append(img)
return gifs
###Output
_____no_output_____
###Markdown
CodeBelow, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!TODO! COMPLETE THIS SECTION!
###Code
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
###Output
_____no_output_____
###Markdown
Explore changing the parameters of the `train` function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.TODO! COMPLETE THIS SECTION!
###Code
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
###Output
_____no_output_____
###Markdown
Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:
###Code
montage_gifs = [np.clip(utils.montage(
(m * 127.5) + 127.5), 0, 255).astype(np.uint8)
for m in gifs]
_ = gif.build_gif(montage_gifs, saveto='multiple.gif')
###Output
_____no_output_____
###Markdown
And show it in the notebook
###Code
ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel locations (plus the image index) of each of our 100 images, it goes through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can visualize just the last iteration as a "latent" space, going from the first image (the top left image in the montage), to the last image (the bottom right image).
###Code
final = gifs[-1]
final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]
gif.build_gif(final_gif, saveto='final.gif')
ipyd.Image(url='final.gif?{}'.format(np.random.rand()),
height=200, width=200)
###Output
_____no_output_____
###Markdown
Part Four - Open Exploration (Extra Credit)I now want you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse, and tries to guess where a given color should be painted? What if it was only taught a certain palette, and had to reason about other colors; how would it interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, number of layers, or more or fewer neurons? I leave any of these as an open exploration for you.Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit or it will require a much larger machine, and much more time to train. Then whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning!Make sure to name the result of your gif: "explore.gif", and be sure to include it in your zip file. TODO! COMPLETE THIS SECTION!
###Code
# Train a network to produce something, storing every few
# iterations in the variable gifs, then export the training
# over time as a gif.
...
gif.build_gif(montage_gifs, saveto='explore.gif')
ipyd.Image(url='explore.gif?{}'.format(np.random.rand()),
height=500, width=500)
###Output
_____no_output_____
###Markdown
Assignment SubmissionAfter you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as: session-2/ session-2.ipynb single.gif multiple.gif final.gif explore.gif* libs/ utils.py * = optional/extra-creditYou'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/infoAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the CADL hashtag so that other students can find your work!
###Code
utils.build_submission('session-2.zip',
('reference.png',
'single.gif',
'multiple.gif',
'final.gif',
'session-2.ipynb'),
('explore.gif'))
###Output
_____no_output_____ |
notebooks/W2D1_Wine_Amr_Sara_Sascha_assignment_2Reduction.ipynb | ###Markdown
Set up package install
###Code
!sudo apt-get install build-essential swig
!curl https://raw.githubusercontent.com/automl/auto-sklearn/master/requirements.txt | xargs -n 1 -L 1 pip install
!pip install auto-sklearn
!pip install pipelineprofiler # visualize the pipelines created by auto-sklearn
!pip install shap
!pip install --upgrade plotly
!pip3 install -U scikit-learn
###Output
_____no_output_____
###Markdown
Packages imports
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn import set_config
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error
import autosklearn.regression
import plotly.express as px
import plotly.graph_objects as go
from joblib import dump
import shap
import datetime
import logging
import matplotlib.pyplot as plt
###Output
/usr/local/lib/python3.7/dist-packages/pyparsing.py:3190: FutureWarning: Possible set intersection at position 3
self.re = re.compile(self.reString)
###Markdown
Google Drive connection
###Code
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
###Output
Mounted at /content/drive
###Markdown
options and settings
###Code
data_path = "/content/drive/MyDrive/Introduction2DataScience/tutorials/w2d2/data/raw/"
model_path = "/content/drive/MyDrive/Introduction2DataScience/tutorials/w2d2/models/"
timesstr = str(datetime.datetime.now()).replace(' ', '_')
logging.basicConfig(filename=f"{model_path}explog_{timesstr}.log", level=logging.INFO)
###Output
_____no_output_____
###Markdown
Please Download the data from [this source](https://drive.google.com/file/d/1MUZrfW214Pv9p5cNjNNEEosiruIlLUXz/view?usp=sharing), and upload it to your Introduction2DataScience/data google drive folder. Loading Data and Train-Test Split
###Code
df = pd.read_csv(f'{data_path}winequality-red.csv')
test_size = 0.2
random_state = 0
train, test = train_test_split(df, test_size=test_size, random_state=random_state)
logging.info(f'train test split with test_size={test_size} and random state={random_state}')
# save the splits under their own file names so the raw dataset isn't overwritten
train.to_csv(f'{data_path}winequality-red-train.csv', index=False)
train = train.copy()
test.to_csv(f'{data_path}winequality-red-test.csv', index=False)
test = test.copy()
###Output
_____no_output_____
###Markdown
Modelling
###Code
X_train, y_train = train.iloc[:,:-1], train.iloc[:,-1]
total_time = 600
per_run_time_limit = 30
automl = autosklearn.regression.AutoSklearnRegressor(
time_left_for_this_task=total_time,
per_run_time_limit=per_run_time_limit,
)
automl.fit(X_train, y_train)
logging.info(f'Ran autosklearn regressor for a total time of {total_time} seconds, with a maximum of {per_run_time_limit} seconds per model run')
dump(automl, f'{model_path}model{timesstr}.pkl')
logging.info(f'Saved regressor model at {model_path}model{timesstr}.pkl ')
logging.info(f'autosklearn model statistics:')
logging.info(automl.sprint_statistics())
#profiler_data= PipelineProfiler.import_autosklearn(automl)
#PipelineProfiler.plot_pipeline_matrix(profiler_data)
###Output
_____no_output_____
###Markdown
Model Evaluation and Explainability Let's separate our test dataframe into a feature variable (X_test) and a target variable (y_test):
###Code
X_test, y_test = test.iloc[:,:-1], test.iloc[:,-1]
###Output
_____no_output_____
###Markdown
Model Evaluation Now, we can attempt to predict the wine quality from our test set. To do that, we just use the .predict method on the object "automl" that we created and trained in the last sections:
###Code
y_pred = automl.predict(X_test)
###Output
_____no_output_____
###Markdown
Let's now evaluate it using the mean_squared_error function from scikit learn:
###Code
logging.info(f"Mean Squared Error is {mean_squared_error(y_test, y_pred)}, \n R2 score is {automl.score(X_test, y_test)}")
###Output
_____no_output_____
###Markdown
we can also plot the y_test vs y_pred scatter:
###Code
df = pd.DataFrame(np.concatenate((X_test, y_test.to_numpy().reshape(-1,1), y_pred.reshape(-1,1)), axis=1))
df.columns = ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar',
'chlorides', 'free sulfur dioxide',
'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol', 'Actual Target', 'Predicted Target']
fig = px.scatter(df, x='Predicted Target', y='Actual Target')
fig.write_html(f"{model_path}residualfig_{timesstr}.html")
logging.info(f"Figure of residuals saved as {model_path}residualfig_{timesstr}.html")
###Output
_____no_output_____
###Markdown
Model Explainability
###Code
explainer = shap.KernelExplainer(model = automl.predict, data = X_test.iloc[:50, :], link = "identity")
# Set the index of the specific example to explain
X_idx = 0
shap_value_single = explainer.shap_values(X = X_test.iloc[X_idx:X_idx+1,:], nsamples = 100)
X_test.iloc[X_idx:X_idx+1,:]
# print the JS visualization code to the notebook
#shap.initjs()
shap.force_plot(base_value = explainer.expected_value,
shap_values = shap_value_single,
features = X_test.iloc[X_idx:X_idx+1,:],
show=False,
matplotlib=True
)
plt.savefig(f"{model_path}shap_example_{timesstr}.png")
logging.info(f"Shapley example saved as {model_path}shap_example_{timesstr}.png")
shap_values = explainer.shap_values(X = X_test.iloc[0:50,:], nsamples = 100)
# print the JS visualization code to the notebook
#shap.initjs()
fig = shap.summary_plot(shap_values = shap_values,
features = X_test.iloc[0:50,:],
show=False)
plt.savefig(f"{model_path}shap_summary_{timesstr}.png")
logging.info(f"Shapley summary saved as {model_path}shap_summary_{timesstr}.png")
###Output
_____no_output_____ |
02B_RESULT_LongDocPreProcessing.ipynb | ###Markdown
[1] Mount Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
[2] Install Requirements and Load Libs Install Requirements
###Code
!pip install datasets &> /dev/null
!pip install rouge_score &> /dev/null
!pip install -q transformers==4.8.2 &> /dev/null
!pip install sentencepiece
!pip install nltk
# Import Python Lib
import os
import shutil
import pandas as pd
import numpy as np
from ast import literal_eval
import re
import torch
import nltk
from rouge_score import rouge_scorer
from IPython.display import display, HTML
# Import local lib
%load_ext autoreload
%autoreload 2
path_utils = '/content/drive/MyDrive/Github/Synopsis/utils'
os.chdir(path_utils)
#importlib.reload(utils_lsstr)
#importlib.reload(utils_model)
from utils_model import Summarization_Model, Tokenizer, \
str_summarize, segment_to_split_size, str_seg_and_summarize, str_led_summarize
from utils_lsstr import str_word_count, ls_word_count
from utils_lsstr import split_str_to_batch_ls, \
str_remove_duplicated_consective_token
from Screenplay import SC_Elements
# instantiate SC_Element
sc = SC_Elements()
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
###Markdown
Set Common Paths
###Code
path_datasets ='/content/drive/MyDrive/Github/Synopsis/Datasets'
path_results = '/content/drive/MyDrive/Github/Synopsis/results'
###Output
_____no_output_____
###Markdown
[3] Compare Segmentation Methods Load Various Tokenizergoogle/pegasus-large, 500/1000google/pegasus-cnn_dailymail, 500 (no large version)facebook/bart-large, 500/1000facebook/bart-large-cnn, 500/1000google/bigbird-pegasus-large-arxiv, 512/1000/4000allenai/led-large-16384, 512/1000/4000/16000allenai/led-large-16384-arxiv, 512/1000/4000/16000
###Code
# initialize tokenizer by model
model_name = 'allenai/led-base-16384'
tokenizer = Tokenizer(model_name)
# Instantiate word tokenizer and detokenizer
from nltk.tokenize import RegexpTokenizer
from nltk.tokenize import line_tokenize, sent_tokenize, word_tokenize
from nltk.tokenize import TreebankWordTokenizer
from nltk.tokenize.treebank import TreebankWordDetokenizer
nltk.download('punkt')
###Output
_____no_output_____
###Markdown
Preprocessing Test Data Overview
###Code
# Load SSGD
path_datasets = '/content/drive/MyDrive/Github/Synopsis/Datasets'
path_dfssgd = '/content/drive/MyDrive/Github/Synopsis/Datasets/SSGD-2021-07-23-719SC-TVTbl.json'
df_wscript = pd.read_json(path_dfssgd)
df_wscript['dfsc'] = df_wscript['dfsc'].apply(lambda x: pd.DataFrame(x))
# Load Turning Points from TRIPOD (for testing split methods using turning points)
path_TRIPOD = '/content/drive/MyDrive/Github/Synopsis/Datasets/TRIPOD-master'
path_tps= path_TRIPOD + '/Synopses_and_annotations/TRIPOD_screenplays_test.csv'
df_tps = pd.read_csv(path_tps, header=0)
# for each film title with turniing points, find the corresponding SSGD record
# Save to df_cases, use df_cases for Long Document Processing Experiments
df_cases = df_wscript[df_wscript['title'].isin(df_tps['movie_name'])]
df_tmp = df_tps.melt(id_vars=['movie_name'])
df_tmp['value'] = df_tmp['value'].apply(lambda x: literal_eval(x)).apply(lambda x: [x])
df_tmp = df_tmp.groupby('movie_name')['value'].sum().reset_index()
df_tmp.columns = ['title', 'ls_tps']
df_cases = df_cases.merge(df_tmp, on='title', how='left')
df_cases = df_cases.drop_duplicates('title')
# assign tps to scenes in dfsc
for i, row in df_cases.iterrows():
df_cases.loc[i,'dfsc'] ['tps'] = 0
for j, ls in enumerate(df_cases.loc[i, 'ls_tps']):
df_cases.loc[i,'dfsc'].loc[df_cases.loc[i,'dfsc']['Scene'].isin(ls),'tps'] = j+1
# fillna for Scene numbers and ensure the type is int
for i, case in df_cases.iterrows():
df_cases.loc[i, 'dfsc']['Scene'] =\
df_cases.loc[i, 'dfsc']['Scene'].fillna('-1').astype('int')
df_cases['gold'] = df_cases['ls_sums_sorted'].apply(lambda x: x[0])
df_cases['nScenes'] = df_cases['dfsc'].apply(lambda x: x['Scene'].nunique())
df_cases['gold_wc'] = df_cases['gold'].apply(lambda x: len(word_tokenize(x)))
df_cases['nScenes_tps'] = df_cases['ls_tps'].apply(lambda x: sum([len(ls) for ls in x]))
df_cases['dict_AF']= df_cases['dfsc'].apply(sc.extract_str_by_method, method='AF', return_type='df').apply(lambda x: x.dropna().to_dict(), axis=1)
def calc_tc(x):
for k, v in x.items():
x[k] = len(tokenizer(v)['input_ids'])
return list(x.values())
df_cases['Scene_tc'] = df_cases['dict_AF'].apply(calc_tc)
Scene_tc = pd.DataFrame(df_cases['Scene_tc'].sum()).describe().astype('int')
Scene_tc.columns = ['token count']
df_cases
###Output
_____no_output_____
###Markdown
้ฟๆๆฌ้ขๆต่ฏ้ๆฆ่ง
###Code
overview = df_cases[['title', 'word_count', 'nScenes', '%Dialog', 'gold_wc', 'nScenes_tps']].copy()
overview.columns = ['็ๅ', 'ๅงๆฌๅ่ฏ้', 'ๅบๆฌกๆฐ้', 'ๅฏน็ฝๅ ๆฏ', 'ๅ่ๆป็ปๅ่ฏ้', '้็นๅบๆฌกๆฐ้']
overview['ๅ็ผฉๅๆฐ'] = overview['ๅงๆฌๅ่ฏ้'] / overview['ๅ่ๆป็ปๅ่ฏ้']
overview.loc['ๅๆฐ'] = overview.mean()
overview = overview.fillna('ๅๆฐ')
overview.round()
df_cases.columns
# Specify selection methods or define custom
# methods are in utils/Screenplay.py
# SELECTION METHOD
#####################
selection_method = 'PSentAO_F1'
#####################
path_utils = '/content/drive/MyDrive/Github/Synopsis/utils'
os.chdir(path_utils)
#%load_ext autoreload
%autoreload 2
from Screenplay import SC_Elements
sc = SC_Elements()
# Initialize df
df = df_cases[['title', 'ls_sums_sorted', 'dfsc']].copy()
# Get selection by selection_method
df['selection'] = df['dfsc'].apply(sc.extract_str_by_method,
method=selection_method, return_type='df').apply(
lambda x: x.dropna().to_dict(),
axis=1
)
df['selection']
# Input Huggingface model name or local model path
#####################################
model_name = 'facebook/bart-large'
#####################################
# assign cuda to device if it exists
if torch.cuda.device_count() > 0:
device = 'cuda:' + str(torch.cuda.current_device())
else:
device = 'cpu'
# Instantiate tokenizer and model
tokenizer = Tokenizer(model_name=None)
#model = Summarization_Model(model_name=model_name, device=device)
###Output
_____no_output_____
###Markdown
by token count vs. by scene
###Code
path_compare_segmentation_methods = '/content/drive/MyDrive/Github/Synopsis/results/by_SegMethod'
for root, dirs, files in os.walk(path_compare_segmentation_methods):
if root == path_compare_segmentation_methods:
fns = files
break
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(path_compare_segmentation_methods + '/' + fn)
record.extend(dftmp[['R1', 'R2', 'RL', 's0_wc', 's0_tc',
'sum_wc_max', 'sum_tc_max', 's1_wc', 's1_tc']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL', 's0_wc', 's0_tc',
'sum_wc_max', 'sum_tc_max', 's1_wc', 's1_tc']
dfscores['s0_wc'] = dfscores['s0_wc'].astype('int')
dfscores['s0_tc'] = dfscores['s0_tc'].astype('int')
dfscores['s1_wc'] = dfscores['s1_wc'].astype('int')
dfscores['s1_tc'] = dfscores['s1_tc'].astype('int')
dfscores['sum_tc_max'] = dfscores['sum_tc_max'].astype('int')
methods = dfscores['method'].str.split('_', expand=True)
methods.columns = ['็ญ้ๆนๆณ', 'ๅๆฎตๆนๆณ', 'ๆจกๅ', 'ๅๆฎตtc่ท็ฆป']
dfscores = dfscores.merge(methods, left_index=True, right_index=True).drop('method', axis=1)
# create view
view = dfscores[[
'ๅๆฎตๆนๆณ', 'ๆจกๅ', 'ๅๆฎตtc่ท็ฆป', 'R1', 'R2', 'RL',
's0_wc', 'sum_wc_max', 's1_wc',
's0_tc', 'sum_tc_max', 's1_tc']].sort_values(['R2'], ascending=[False])
view.columns = ['ๅๆฎตๆนๆณ', 'ๆจกๅ', 'ๅๆฎตtc', 'R1', 'R2', 'RL',
'่พๅ
ฅwc', 'ๅ่ๆขๆฆwc', '็ๆๆขๆฆwc',
'่พๅ
ฅtc', 'ๅ่ๆขๆฆtc', '็ๆๆขๆฆtc']
view['ๅ่ๆขๆฆwc'] = view['ๅ่ๆขๆฆwc'].astype('int')
view.round(2)
###Output
_____no_output_____
###Markdown
[4] Compare Selection Method
###Code
path_selections = '/content/drive/MyDrive/Github/Synopsis/results/by_selections'
# Get result file names
for root, dirs, files in os.walk(path_selections):
if root == path_selections:
fns = files
break
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True)
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(path_selections + '/' + fn)
dftmp['s0_wc'] = dftmp['s0'].apply(lambda x: len(word_tokenize(x)))
record.extend(dftmp[['R1', 'R2', 'RL', 's0_wc']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL', 's0_wc']
methods = dfscores['method'].str.split('_', expand=True)
methods.columns = ['selection_method', 'model', 'split-size']
dfscores = dfscores.merge(methods, left_index=True, right_index=True).drop('method', axis=1)
# Create View
view = dfscores[['selection_method', 's0_wc', 'R1', 'R2', 'RL']].copy()
view.columns = ['selection method', 'input wc', 'R1', 'R2', 'RL']
view['input wc'] = view['input wc'].astype('int')
view.sort_values('R2', ascending=False).round(2)
view.sort_values('input wc').round(2)
###Output
_____no_output_____
###Markdown
[5] Compare Finetuning Method by window size
###Code
path_wsize = '/content/drive/MyDrive/Github/Synopsis/results/by_FTmodel/by_window_size'
# Get result file names
for root, dirs, files in os.walk(path_wsize):
if root == path_wsize:
fns = files
break
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True)
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(path_wsize + '/' + fn)
record.extend(dftmp[['R1', 'R2', 'RL']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL']
methods = dfscores['method'].str.split('_', expand=True)
methods.columns = ['selection_method', 'model', 'pred GA', 'split-size']
dfscores = dfscores.merge(methods, left_index=True, right_index=True).drop('method', axis=1)
# Create View
view = dfscores.sort_values(
['selection_method', 'R2'], ascending=[True, False])
view = view[['model', 'R1', 'R2', 'RL']]
view.columns = ['model', 'R1', 'R2', 'RL']
# pull the digits immediately preceding 'W' in the model name (the window size)
view['window'] = view['model'].str.extract('([0-9]*)W')
view = view[['window', 'R1', 'R2', 'RL', 'model']]
view.round(2)
###Output
_____no_output_____
###Markdown
by global attention application
###Code
path_global_attention = '/content/drive/MyDrive/Github/Synopsis/results/by_FTmodel/by_global_attention'
# Get result file names
for root, dirs, files in os.walk(path_global_attention):
if root == path_global_attention:
fns = files
break
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True)
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(path_global_attention + '/' + fn)
record.extend(dftmp[['R1', 'R2', 'RL']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL']
methods = dfscores['method'].str.split('-', expand=True)
dfscores['train GA'] = methods[5].apply(lambda x: x[2:])
dfscores['pred GA'] = methods[7].str.extract('_(.*)_')
view = dfscores[['train GA', 'pred GA', 'R1', 'R2', 'RL', 'method']].sort_values('R2', ascending=False)
view['model'] = view['method'].apply(lambda x: x[10:])
view.drop('method', axis=1).round(2)
###Output
_____no_output_____
###Markdown
by expanding gold tc range
###Code
fp_goldtcrange = '/content/drive/MyDrive/Github/Synopsis/results/by_FTmodel/by_expand_goldtcrange'
# Get result file names
for root, dirs, files in os.walk(fp_goldtcrange):
if root == fp_goldtcrange:
fns = files
break
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True)
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(fp_goldtcrange + '/' + fn)
record.extend(dftmp[['R1', 'R2', 'RL', 's1_wc']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL', 's1_wc']
dfscores['s1_wc'] = dfscores['s1_wc'].astype('int')
methods = dfscores['method'].str.split('_', expand=True)
methods.columns = ['pred-set selection method', 'model', 'pred GA', 'split-size']
dfscores = dfscores.merge(methods, left_index=True, right_index=True).drop('method', axis=1)
selection_methods = ['Grp_n06', 'Grp_n06', 'Grp_n06']
gold_tc_range = [[256, 1024], [0, 1024], [512, 1024]]
training_time = ['1h 06m', '5h 07m', '3m 15s']
methods = dfscores['model'].str.split('-', expand=True)
dfscores['training steps'] = methods[7].apply(lambda x: re.sub('steps', '', x))
dfscores['training samples'] = methods[6].apply(lambda x: re.split('T', x)[0])
dfscores['training-input selection method'] = selection_methods
dfscores['gold summary tc range'] = gold_tc_range
dfscores['training time'] = training_time
# keep everything before '-6L' in the model name as the base model (naming assumption)
dfscores['base model'] = dfscores['model'].apply(
    lambda x: re.split('-6L', x)[0])
view = dfscores.sort_values('R2', ascending=False)
view = view[['training-input selection method', 'gold summary tc range',
             'training samples', 'training steps',
             'R1', 'R2', 'RL', 'training time', 's1_wc']]
view.columns = ['training-input selection method', 'gold summary tc range',
                'training samples', 'training steps',
                'R1', 'R2', 'RL', 'training time', 'generated synopsis wc (mean)']
view.round(2)
###Output
_____no_output_____
###Markdown
by augmentation with selection methods
###Code
fp_aug_sm = '/content/drive/MyDrive/Github/Synopsis/results/by_FTmodel/by_augmentation_w_selections'
# Get result file names
for root, dirs, files in os.walk(fp_aug_sm):
if root == fp_aug_sm:
fns = files
break
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True)
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(fp_aug_sm + '/' + fn)
record.extend(dftmp[['R1', 'R2', 'RL', 's0_wc', 's1_wc']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL', 's0_wc', 's1_wc']
dfscores['s0_wc'] = dfscores['s0_wc'].astype('int')
dfscores['s1_wc'] = dfscores['s1_wc'].astype('int')
methods = dfscores['method'].str.split('_', expand=True)
methods.columns = ['pred-set selection method', 'model', 'pred GA', 'split-size']
selection_methods = ['Grp_n12', 'Grp_n06', 'Grp_n19', 'Grp_n24']
gold_tc_range = [[512, 1024], [512, 1024], [512, 1024], [512, 1024]]
training_time = ['11m 24s', '3m 15s', '15m 47s', '48m 02s']
dfscores = dfscores.merge(methods, left_index=True, right_index=True).drop('method', axis=1)
methods = dfscores['model'].str.split('-', expand=True)
dfscores['training steps'] = methods[7].apply(lambda x: re.sub('steps', '', x))
dfscores['training samples'] = methods[6].apply(lambda x: re.split('T', x)[0])
dfscores['training-input selection method'] = selection_methods
dfscores['gold summary tc range'] = gold_tc_range
dfscores['training time'] = training_time
# create view
view = dfscores[['training-input selection method', 'gold summary tc range',
                 'training samples', 'training steps',
                 'R1', 'R2', 'RL', 'training time', 's1_wc']].sort_values('R2', ascending=False)
view.columns = ['training-input selection method', 'gold summary tc range',
                'training samples', 'training steps',
                'R1', 'R2', 'RL', 'training time', 'output wc (mean)']
view.round(2)
###Output
_____no_output_____
###Markdown
by prediction with selection methods
###Code
fp_pred_sm = '/content/drive/MyDrive/Github/Synopsis/results/by_FTmodel/by_pred_selection_methods'
# Get result file names
for root, dirs, files in os.walk(fp_pred_sm):
if root == fp_pred_sm:
fns = files
break
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True)
rouge_scores = []
for fn in fns:
record = [fn[3:-5]]
dftmp = pd.read_json(fp_pred_sm + '/' + fn)
record.extend(dftmp[['R1', 'R2', 'RL', 's0_wc', 's1_wc']].mean().tolist())
rouge_scores.append(record)
dfscores = pd.DataFrame(rouge_scores)
dfscores.columns = ['method', 'R1', 'R2', 'RL', 's0_wc', 's1_wc']
dfscores['s0_wc'] = dfscores['s0_wc'].astype('int')
dfscores['s1_wc'] = dfscores['s1_wc'].astype('int')
methods = dfscores['method'].str.split('_', expand=True)
methods.columns = ['pred-input selection method', 'model', 'pred GA', 'split-size']
dfscores = dfscores.merge(methods, left_index=True, right_index=True).drop('method', axis=1)
view = dfscores.sort_values('R2', ascending=False).round(2)
view = view[['pred-input selection method', 'pred GA', 'R1', 'R2', 'RL', 's0_wc', 's1_wc']]
left = view[view['pred GA'] == 'None'].sort_values('pred-input selection method')
left = left[['s0_wc', 's1_wc', 'R1', 'R2', 'RL', 'pred GA', 'pred-input selection method']]
left.loc['mean'] = left.mean(numeric_only=True)  # numeric_only: the frame also holds string columns
left.fillna('')
right = view[view['pred GA'] == 'boScene'].sort_values('pred-input selection method')
right.loc['mean'] = right.mean(numeric_only=True)
right.fillna('')
view.loc[view['pred GA'] == 'boScene'].sort_values('pred-input selection method')
view.sort_values(['pred-input selection method', 'R2'], ascending=[True, False])
view.sort_values(
    ['pred-input selection method', 'R2'],
    ascending=[True, False]
)[['pred-input selection method', 'R1', 'R2', 'RL', 'pred GA', 's0_wc', 's1_wc']]
###Output
_____no_output_____
###Markdown
Pred
###Code
fptest = '/content/drive/MyDrive/Github/Synopsis/results/df_results_pt_then_ft.json'
#dftest.to_json(fptest)
dftest = pd.read_json(fptest)
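# the JSON round-trip stores each nested scene table as plain dicts/lists,
# so rebuild the per-title DataFrames before using them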
dftest['dfsc'] = dftest['dfsc'].apply(lambda x: pd.DataFrame(x))
dftest['s0_EVERY4at0'] = dftest['dfsc'].apply(sc.extract_str_by_method, method='EVERY4at0')
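# 'EVERY4at0' presumably selects every 4th scene starting at scene 0
# (a naming assumption; the selection methods are defined in utils/Screenplay.py)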
dftest
###Output
_____no_output_____
###Markdown
[8] View Generated Summaries
###Code
path_global_training = '/content/drive/MyDrive/Github/Synopsis/results/by_FTmodel'
# Get result file names
for root, dirs, files in os.walk(path_global_training):
if root == path_global_training:
fns = files
break
dfs = pd.read_json(path_global_training + '/' + fns[0])[['title', 'ls_sums_sorted']]
dfs['gold'] = dfs['ls_sums_sorted'].apply(lambda x: x[0])
for fn in fns:
    # read this method's result file
    df = pd.read_json(path_global_training + '/' + fn)
    # keep the title column as the merge key
    dftmp = df[['title']].copy()
    # append the predicted summary
    dftmp['pred_sum_{}'.format(fn[3:-5])] = df['s1']
    # append the rouge2_f1 score
    dftmp['rouge2_f1_{}'.format(fn[3:-5])] = df['rouge2_f1']
    # merge this method's columns into dfs
    dfs = dfs.merge(dftmp, on='title', how='left')
dfs.T
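# the HTML call below assumes an earlier `from IPython.display import HTML`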
HTML(dfs.T[[10]].to_html())
dfs.columns
###Output
_____no_output_____ |
docs_src/jekyll_metadata.ipynb | ###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
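# note: the `glob` import above is unused; `Path` (from pathlib, presumably
# imported earlier) already provides .glob()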
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
                   title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
                   title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
                   title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('tta.ipynb',
                   summary='Module brings TTA (Test Time Augmentation) to the `Learner` class. Use `learner.TTA()` instead',
title='tta')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('text.models.qrnn.ipynb')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('gen_doc.sgen_notebooks.ipynb',
keywords='fastai',
summary='Script to generate notebooks and update html',
title='gen_doc.sgen_notebooks')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('data.ipynb',
summary='Basic classes to contain the data for model training.',
title='data')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('tmp.ipynb')
update_nb_metadata('Untitled.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
                   title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
update_nb_metadata('utils.mem.ipynb')
update_nb_metadata('callbacks.mem.ipynb')
update_nb_metadata('gen_doc.nbtest.ipynb',
summary='Helper functions to search for api tests',
title='gen_doc.nbtest')
update_nb_metadata('utils.ipython.ipynb')
update_nb_metadata('callbacks.misc.ipynb')
update_nb_metadata('utils.mod_display.ipynb')
update_nb_metadata('text.interpret.ipynb',
keywords='fastai',
summary='Easy access of language models and ULMFiT',
                   title='text.interpret')
update_nb_metadata('vision.interpret.ipynb',
keywords='fastai',
summary='`Learner` support for computer vision',
                   title='vision.interpret')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
                   title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
                   title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('callbacks.tracking.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracking')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
                   title='callbacks.tracker')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cells below. You need to make sure to refresh right after.
###Code
import glob
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that saves the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions to building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
                   title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
update_nb_metadata('utils.mem.ipynb')
update_nb_metadata('callbacks.mem.ipynb')
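# Path-only calls like the ones above presumably keep whatever defaults
# `update_nb_metadata` applies (e.g. keywords='fastai') until a summary and
# title are filled in by hand (an inference from the pattern here, not a documented contract).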
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. You need to make sure to refresh the notebook right after.
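For context, `update_nb_metadata` (from `fastai.gen_doc.gen_notebooks`) takes a notebook path plus keyword arguments such as `title`, `summary`, and `keywords`, and stores them in that notebook's metadata so the Jekyll doc build can emit page front matter. The cell below is a minimal sketch of that kind of merge, assuming a simplified `jekyll` sub-dict layout; `merge_jekyll_metadata` is a hypothetical stand-in, not fastai's actual implementation.
###Code
import json
from pathlib import Path

def merge_jekyll_metadata(nb_path, **kwargs):
    "Hypothetical sketch of a Jekyll-metadata merge; not fastai's real `update_nb_metadata`."
    nb = json.loads(Path(nb_path).read_text(encoding='utf-8'))
    # Notebook files are plain JSON; keep the site keys under metadata['jekyll'] (assumed layout).
    jekyll = nb.setdefault('metadata', {}).setdefault('jekyll', {})
    jekyll.update({k: v for k, v in kwargs.items() if v is not None})
    Path(nb_path).write_text(json.dumps(nb, indent=1), encoding='utf-8')

# Usage would mirror the generated calls below, e.g.:
# merge_jekyll_metadata('core.ipynb', summary='Basic helper functions for the fastai library', title='core')
###Output
_____no_output_____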
###Code
from pathlib import Path  # needed for Path().glob below; the unused `import glob` was dropped
# Regenerate stub jekyll metadata for every notebook in this directory
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
update_nb_metadata('utils.mem.ipynb')
update_nb_metadata('callbacks.mem.ipynb')
update_nb_metadata('gen_doc.nbtest.ipynb',
summary='Helper functions to search for api tests',
title='gen_doc.nbtest')
update_nb_metadata('utils.ipython.ipynb')
update_nb_metadata('callbacks.misc.ipynb')
update_nb_metadata('utils.mod_display.ipynb')
update_nb_metadata('text.interpret.ipynb',
keywords='fastai',
summary='Easy access to language models and ULMFiT',
title='text.interpret')
update_nb_metadata('vision.interpret.ipynb',
keywords='fastai',
summary='`Learner` support for computer vision',
title='vision.interpret')
update_nb_metadata('widgets.class_confusion.ipynb')
update_nb_metadata('callbacks.tensorboard.ipynb',
keywords='fastai',
summary='Callbacks that save the tracked metrics during training and output logs for TensorBoard to read',
title='callbacks.tensorboard')
###Output
_____no_output_____
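###Markdown
Each `update_nb_metadata` call above records a title, summary and optional keywords for one documentation notebook. As a rough mental model, the effect is sketched below; this is a minimal, illustrative version that assumes the values land under a `jekyll` key in the notebook metadata, and the `set_jekyll_metadata` helper is hypothetical, not fastai's actual implementation.
###Code
import nbformat

def set_jekyll_metadata(nb_path, title=None, summary=None, keywords='fastai'):
    "Hypothetical helper: write Jekyll front-matter fields into a notebook and save it."
    nb = nbformat.read(nb_path, as_version=4)
    # assumption: the doc generator later reads these fields from metadata['jekyll']
    nb.metadata['jekyll'] = {'title': title, 'summary': summary, 'keywords': keywords}
    nbformat.write(nb, nb_path)

set_jekyll_metadata('core.ipynb',
    summary='Basic helper functions for the fastai library',
    title='core')
###Output
_____no_output_____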
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
###Code
from pathlib import Path
# generate_missing_metadata comes from the notebook's fastai.gen_doc imports
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a Python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
###Code
from pathlib import Path
# generate_missing_metadata comes from the notebook's fastai.gen_doc imports
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a Python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
update_nb_metadata('utils.mem.ipynb')
update_nb_metadata('callbacks.mem.ipynb')
update_nb_metadata('gen_doc.nbtest.ipynb',
summary='Helper functions to search for api tests',
title='gen_doc.nbtest')
update_nb_metadata('utils.ipython.ipynb')
update_nb_metadata('callbacks.misc.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
###Code
from pathlib import Path
# generate_missing_metadata comes from the notebook's fastai.gen_doc imports
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
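###Markdown
`generate_missing_metadata` walks every notebook in the folder and, presumably, emits a skeleton `update_nb_metadata` call for any notebook that has no Jekyll metadata yet. A sketch under that assumption follows; the `jekyll` key and the stub format are illustrative, not fastai's actual behaviour.
###Code
import nbformat
from pathlib import Path

def print_metadata_stub(nb_path):
    "Print an update_nb_metadata stub for a notebook that lacks Jekyll metadata."
    nb = nbformat.read(str(nb_path), as_version=4)
    if 'jekyll' not in nb.metadata:  # assumption: this key marks an already-documented notebook
        print(f"update_nb_metadata('{nb_path.name}',\n"
              f"    summary='...',\n"
              f"    title='{nb_path.stem}')")

for f in Path().glob('*.ipynb'):
    print_metadata_stub(f)
###Output
_____no_output_____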
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a Python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
update_nb_metadata('utils.mem.ipynb')
update_nb_metadata('callbacks.mem.ipynb')
update_nb_metadata('gen_doc.nbtest.ipynb',
summary='Helper functions to search for api tests',
title='gen_doc.nbtest')
update_nb_metadata('utils.ipython.ipynb')
update_nb_metadata('callbacks.misc.ipynb')
update_nb_metadata('utils.mod_display.ipynb')
update_nb_metadata('text.interpret.ipynb',
keywords='fastai',
summary='Easy access to language models and ULMFiT',
title='text.interpret')
update_nb_metadata('vision.interpret.ipynb',
keywords='fastai',
summary='`Learner` support for computer vision',
title='vision.interpret')
update_nb_metadata('widgets.class_confusion.ipynb')
update_nb_metadata('callbacks.tensorboard.ipynb',
keywords='fastai',
summary='Callbacks that save the tracked metrics during training and output logs for TensorBoard to read',
title='callbacks.tensorboard')
update_nb_metadata('tabular.learner.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.learner')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
###Code
from pathlib import Path
# generate_missing_metadata comes from the notebook's fastai.gen_doc imports
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a Python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
###Output
_____no_output_____
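###Markdown
After regenerating, it is easy to spot-check that a notebook picked up its metadata by reading it back with `nbformat` (again assuming the `jekyll` key used in the sketches above):
###Code
import nbformat

nb = nbformat.read('text.models.ipynb', as_version=4)
# if the convention sketched above holds, this prints the title/summary/keywords dict
print(nb.metadata.get('jekyll', 'no Jekyll metadata yet'))
###Output
_____no_output_____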
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
###Code
from pathlib import Path
# generate_missing_metadata comes from the notebook's fastai.gen_doc imports
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='tutorial.itemlist')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='tutorial.inference')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='tutorial.data')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a Python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. Make sure to refresh right after.
###Code
from pathlib import Path
# generate_missing_metadata comes from the notebook's fastai.gen_doc imports
for f in Path().glob('*.ipynb'):
    generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Implementation of mixed precision training',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access to language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotation names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from a Python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
update_nb_metadata('utils.mem.ipynb')
update_nb_metadata('callbacks.mem.ipynb')
update_nb_metadata('gen_doc.nbtest.ipynb',
summary='Helper functions to search for api tests',
title='gen_doc.nbtest')
update_nb_metadata('utils.ipython.ipynb')
update_nb_metadata('callbacks.misc.ipynb')
update_nb_metadata('utils.mod_display.ipynb')
update_nb_metadata('text.interpret.ipynb',
keywords='fastai',
summary='Easy access of language models and ULMFiT',
title='text.interpret')
update_nb_metadata('vision.interpret.ipynb',
keywords='fastai',
summary='`Learner` support for computer vision',
title='vision.interpret')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. You need to make sure to refresh right after.
###Code
from pathlib import Path
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
update_nb_metadata('utils.collect_env.ipynb')
update_nb_metadata('widgets.image_cleaner.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. You need to make sure to refresh right after.
###Code
from pathlib import Path
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('data_block.ipynb',
summary='The data block API',
title='data_block')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
###Output
_____no_output_____
###Markdown
To update this notebook, run `tools/sgen_notebooks.py`, or run the cell below. You need to make sure to refresh right after.
###Code
from pathlib import Path
for f in Path().glob('*.ipynb'):
generate_missing_metadata(f)
###Output
_____no_output_____
###Markdown
Metadata generated below
###Code
update_nb_metadata('tutorial.itemlist.ipynb',
summary='Advanced tutorial, explains how to create your custom `ItemBase` or `ItemList`',
title='Custom ItemList')
update_nb_metadata('tutorial.inference.ipynb',
summary='Intermediate tutorial, explains how to create a Learner for inference',
title='Inference Learner')
update_nb_metadata('tutorial.data.ipynb',
summary="Beginner's tutorial, explains how to quickly look at your data or model predictions",
title='Look at data')
update_nb_metadata('vision.gan.ipynb',
summary='All the modules and callbacks necessary to train a GAN',
title='vision.gan')
update_nb_metadata('callbacks.csv_logger.ipynb',
summary='Callbacks that save the tracked metrics during training',
title='callbacks.csv_logger')
update_nb_metadata('callbacks.tracker.ipynb',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('torch_core.ipynb',
summary='Basic functions using pytorch',
title='torch_core')
update_nb_metadata('gen_doc.convert2html.ipynb',
summary='Converting the documentation notebooks to HTML pages',
title='gen_doc.convert2html')
update_nb_metadata('metrics.ipynb',
summary='Useful metrics for training',
title='metrics')
update_nb_metadata('callbacks.fp16.ipynb',
summary='Training in mixed precision implementation',
title='callbacks.fp16')
update_nb_metadata('callbacks.general_sched.ipynb',
summary='Implementation of a flexible training API',
title='callbacks.general_sched')
update_nb_metadata('text.ipynb',
keywords='fastai',
summary='Application to NLP, including ULMFiT fine-tuning',
title='text')
update_nb_metadata('callback.ipynb',
summary='Implementation of the callback system',
title='callback')
update_nb_metadata('tabular.models.ipynb',
keywords='fastai',
summary='Model for training tabular/structured data',
title='tabular.models')
update_nb_metadata('callbacks.mixup.ipynb',
summary='Implementation of mixup',
title='callbacks.mixup')
update_nb_metadata('applications.ipynb',
summary='Types of problems you can apply the fastai library to',
title='applications')
update_nb_metadata('vision.data.ipynb',
summary='Basic dataset for computer vision and helper function to get a DataBunch',
title='vision.data')
update_nb_metadata('overview.ipynb',
summary='Overview of the core modules',
title='overview')
update_nb_metadata('training.ipynb',
keywords='fastai',
summary='Overview of fastai training modules, including Learner, metrics, and callbacks',
title='training')
update_nb_metadata('text.transform.ipynb',
summary='NLP data processing; tokenizes text and creates vocab indexes',
title='text.transform')
# do not overwrite this notebook, or changes may get lost!
# update_nb_metadata('jekyll_metadata.ipynb')
update_nb_metadata('collab.ipynb',
summary='Application to collaborative filtering',
title='collab')
update_nb_metadata('text.learner.ipynb',
summary='Easy access of language models and ULMFiT',
title='text.learner')
update_nb_metadata('gen_doc.nbdoc.ipynb',
summary='Helper function to build the documentation',
title='gen_doc.nbdoc')
update_nb_metadata('vision.learner.ipynb',
summary='`Learner` support for computer vision',
title='vision.learner')
update_nb_metadata('core.ipynb',
summary='Basic helper functions for the fastai library',
title='core')
update_nb_metadata('fastai_typing.ipynb',
keywords='fastai',
summary='Type annotations names',
title='fastai_typing')
update_nb_metadata('gen_doc.gen_notebooks.ipynb',
summary='Generation of documentation notebook skeletons from python module',
title='gen_doc.gen_notebooks')
update_nb_metadata('basic_train.ipynb',
summary='Learner class and training loop',
title='basic_train')
update_nb_metadata('gen_doc.ipynb',
keywords='fastai',
summary='Documentation modules overview',
title='gen_doc')
update_nb_metadata('callbacks.rnn.ipynb',
summary='Implementation of a callback for RNN training',
title='callbacks.rnn')
update_nb_metadata('callbacks.one_cycle.ipynb',
summary='Implementation of the 1cycle policy',
title='callbacks.one_cycle')
update_nb_metadata('vision.ipynb',
summary='Application to Computer Vision',
title='vision')
update_nb_metadata('vision.transform.ipynb',
summary='List of transforms for data augmentation in CV',
title='vision.transform')
update_nb_metadata('callbacks.lr_finder.ipynb',
summary='Implementation of the LR Range test from Leslie Smith',
title='callbacks.lr_finder')
update_nb_metadata('text.data.ipynb',
summary='Basic dataset for NLP tasks and helper functions to create a DataBunch',
title='text.data')
update_nb_metadata('text.models.ipynb',
summary='Implementation of the AWD-LSTM and the RNN models',
title='text.models')
update_nb_metadata('tabular.data.ipynb',
summary='Base class to deal with tabular data and get a DataBunch',
title='tabular.data')
update_nb_metadata('callbacks.ipynb',
keywords='fastai',
summary='Callbacks implemented in the fastai library',
title='callbacks')
update_nb_metadata('train.ipynb',
summary='Extensions to Learner that easily implement Callback',
title='train')
update_nb_metadata('callbacks.hooks.ipynb',
summary='Implement callbacks using hooks',
title='callbacks.hooks')
update_nb_metadata('vision.image.ipynb',
summary='Image class, variants and internal data augmentation pipeline',
title='vision.image')
update_nb_metadata('vision.models.unet.ipynb',
summary='Dynamic Unet that can use any pretrained model as a backbone.',
title='vision.models.unet')
update_nb_metadata('vision.models.ipynb',
keywords='fastai',
summary='Overview of the models used for CV in fastai',
title='vision.models')
update_nb_metadata('tabular.transform.ipynb',
summary='Transforms to clean and preprocess tabular data',
title='tabular.transform')
update_nb_metadata('index.ipynb',
keywords='fastai',
toc='false',
title='Welcome to fastai')
update_nb_metadata('layers.ipynb',
summary='Provides essential functions for building and modifying `Model` architectures.',
title='layers')
update_nb_metadata('tabular.ipynb',
keywords='fastai',
summary='Application to tabular/structured data',
title='tabular')
update_nb_metadata('basic_data.ipynb',
summary='Basic classes to contain the data for model training.',
title='basic_data')
update_nb_metadata('datasets.ipynb')
update_nb_metadata('tmp.ipynb',
keywords='fastai')
update_nb_metadata('callbacks.tracking.ipynb')
update_nb_metadata('data_block.ipynb',
keywords='fastai',
summary='The data block API',
title='data_block')
update_nb_metadata('callbacks.tracker.ipynb',
keywords='fastai',
summary='Callbacks that take decisions depending on the evolution of metrics during training',
title='callbacks.tracker')
update_nb_metadata('widgets.ipynb')
update_nb_metadata('text_tmp.ipynb')
update_nb_metadata('tabular_tmp.ipynb')
update_nb_metadata('tutorial.data.ipynb')
update_nb_metadata('tutorial.itemlist.ipynb')
update_nb_metadata('tutorial.inference.ipynb')
update_nb_metadata('vision.gan.ipynb')
###Output
_____no_output_____ |
ai1/labs/AI1_02.ipynb | ###Markdown
02 Pandas$\newcommand{\Set}[1]{\{#1\}}$ $\newcommand{\Tuple}[1]{\langle#1\rangle}$ $\newcommand{\v}[1]{\pmb{#1}}$ $\newcommand{\cv}[1]{\begin{bmatrix}#1\end{bmatrix}}$ $\newcommand{\rv}[1]{[#1]}$ $\DeclareMathOperator{\argmax}{arg\,max}$ $\DeclareMathOperator{\argmin}{arg\,min}$ $\DeclareMathOperator{\dist}{dist}$ $\DeclareMathOperator{\abs}{abs}$
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Series A Series is like a 1D array. The values in the Series have an index, which, by default, uses consecutive integers from 0.
###Code
s = pd.Series([2, 4, -12, 0, 2])
s
###Output
_____no_output_____
###Markdown
You can get its shape and dtype as we did with numpy arrays:
###Code
s.shape
s.dtype
###Output
_____no_output_____
###Markdown
You can get the values as a numpy array:
###Code
s.values
###Output
_____no_output_____
###Markdown
You can access by index and by slicing, as in Python:
###Code
s[3]
s[1:3]
s[1:]
###Output
_____no_output_____
###Markdown
A nice feature is Boolean indexing, where you extract values using a list of Booleans (note the square brackets twice) and it returns the values that correspond to the Trues in the list:
###Code
s[[True, True, False, False, True]]
###Output
_____no_output_____
###Markdown
Operators are vectorized, similar to numpy:
###Code
s * 2
s > 0
###Output
_____no_output_____
###Markdown
The next example is neat. It combines a vectorized operator with the idea of Boolean indexing:
###Code
s[s > 0]
###Output
_____no_output_____
###Markdown
There are various methods, as you would expect, many building out from numpy e.g.:
###Code
s.sum()
s.mean()
s.unique()
s.value_counts()
###Output
_____no_output_____
###Markdown
One method is astype, which can do data type conversions:
###Code
s.astype(float)
###Output
_____no_output_____
###Markdown
DataFrame A DataFrame is a table of data, comprising rows and columns. The rows and columns both have an index. If you want more dimensions (we won't), then they support hierarchical indexing. There are various ways of creating a DataFrame, e.g. supply to its constructor a dictionary of equal-sized lists:
###Code
df = pd.DataFrame({'a' : [1, 2, 3], 'b' : [4, 5, 6], 'c' : [7, 8, 9]})
df
###Output
_____no_output_____
###Markdown
The keys of the dictionary became the column index, and it assigned integers to the other index. But, instead of looking at all the possible ways of doing this, we'll be reading the data in from a CSV file. We will assume that the first line of the file contains headers. These become the column indexes.
###Code
df = pd.read_csv('../datasets/dataset_stop_and_searchA.csv')
df
###Output
_____no_output_____
###Markdown
Notice that when the CSV file has an empty value (a pair of consecutive commas), Pandas treats it as NaN, which is a float. A useful method at this point is describe:
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
We can also get the column headers, row index, shape and dtypes (not dtype):
###Code
df.columns
df.index
df.shape
df.dtypes
###Output
_____no_output_____
###Markdown
You can retrieve a whole column, as a Series, using column indexing:
###Code
df['Suspect-ethnicity']
###Output
_____no_output_____
###Markdown
Now that you have a Series, you might use the unique or value_counts methods that we looked at earlier.
###Code
df['Suspect-ethnicity'].unique()
df['Suspect-ethnicity'].value_counts()
###Output
_____no_output_____
###Markdown
If you ask for more than one column, then you must give them as a list (note the nested brackets). Then, the result is not a Series, but a DataFrame:
###Code
df[['Suspect-ethnicity', 'Officer-ethnicity']]
###Output
_____no_output_____
###Markdown
How do we get an individual row? The likelihood of wanting this in this module is small. If you do need to get an individual row, you cannot do indexing using square brackets, because that notation is for columns. The iloc and loc methods are probably what you would use. iloc retrieves by position. So df.iloc[0] retrieves the first row. loc, on the other hand, retrieves by label, so df.loc[0] retrieves the row whose label in the row index is 0. Confusing, huh? Ordinarily, they'll be the same.
###Code
df.iloc[4]
df.loc[4]
###Output
_____no_output_____
###Markdown
But sometimes the position and the label in the row index will not correspond. This can happen, for example, after shuffling the rows of the DataFrame or after deleting a row (see example later). In any case, we're much more likely to want to select several rows (hence a DataFrame) using Boolean indexing, defined by a Boolean expression. We use a Boolean expression that defines a Series and then use that to index the DataFrame. As an example, here's a Boolean expression:
###Code
df['Officer-ethnicity'] == 'Black'
###Output
_____no_output_____
###Markdown
And here we use that Boolean expression to extract rows:
###Code
df[df['Officer-ethnicity'] == 'Black']
###Output
_____no_output_____
###Markdown
In our Boolean expressions, we can do and, or and not (&, |, ~), but note that this often requires extra parentheses, e.g.
###Code
df[(df['Officer-ethnicity'] == 'Black') & (df['Object-of-search'] == 'Stolen goods')]
###Output
_____no_output_____
###Markdown
We can use this idea to delete rows. We use Boolean indexing as above to select the rows we want to keep. Then we assign that dataframe back to the original variable. For example, let's delete all male suspects, in other words, keep all female suspects:
###Code
df = df[df['Gender'] == 'Female'].copy()
df
###Output
_____no_output_____
###Markdown
This example also illustrates the point from earlier about the difference between position (iloc) and label in the row index (loc).
###Code
df.iloc[0]
df.loc[0] # raises an exception
df.iloc[11] # raises an exception
df.loc[11]
###Output
_____no_output_____
###Markdown
This is often a source of errors when writing Pandas. So one tip is, whenever you perform an operation that has the potential to change the row index, then reset the index so that it corresponds to the positions:
###Code
df.reset_index(drop=True, inplace=True)
df
###Output
_____no_output_____
###Markdown
Deleting columns can be done in the same way as we deleted rows, i.e. extract the ones you want to keep and then assign the result back to the original variable, e.g.:
###Code
df = df[['Gender', 'Age', 'Object-of-search', 'Outcome']].copy()
df
###Output
_____no_output_____
###Markdown
But deletion can also be done using the drop method. If axis=0 (default), you're deleting rows. If axis=1, you're deleting columns (and this time you name the column you want to delete), e.g.:
###Code
df.drop("Age", axis=1, inplace=True)
df
###Output
_____no_output_____
###Markdown
One handy variant is dropna with axis=0, which can be used to delete rows that contain NaN; a minimal sketch is shown below. We may see an example of this and a few other methods in our lectures and future labs. But, for now, we have enough for you to tackle something interesting.
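###Code
# A minimal dropna sketch, assuming df may contain NaN values;
# the name df_complete is ours, purely for illustration
df_complete = df.dropna(axis=0)
df_complete.reset_index(drop=True, inplace=True)
df_complete
###Output
_____no_output_____
###Markdown
Exercise I've a larger file that contains all stop-and-searches by the Metropolitan Police for about a year (mid-2018 to mid-2019). Read it in: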
###Code
df = pd.read_csv('../datasets/dataset_stop_and_searchB.csv')
df.shape
###Output
_____no_output_____ |
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/05_06/Final/.ipynb_checkpoints/Data Frame Plots-checkpoint.ipynb | ###Markdown
Data Frame Plots
documentation: http://pandas.pydata.org/pandas-docs/stable/visualization.html
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
The plot method on Series and DataFrame is just a simple wrapper around plt.plot(). If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely, as shown in the plot window.
###Code
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot()
plt.show()
###Output
_____no_output_____
###Markdown
On DataFrame, plot() is a convenience to plot all of the columns, and includes a legend within the plot.
###Code
df = pd.DataFrame(np.random.randn(1000, 4), index=pd.date_range('1/1/2016', periods=1000), columns=list('ABCD'))
df = df.cumsum()
plt.figure()
df.plot()
plt.show()
###Output
_____no_output_____
###Markdown
You can plot one column versus another using the x and y keywords in plot():
###Code
df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df))))
df3.plot(x='A', y='B')
plt.show()
df3.tail()
###Output
_____no_output_____
###Markdown
Plots other than line plots
Plotting methods allow for a handful of plot styles other than the default line plot. These methods can be provided as the kind keyword argument to plot(). These include:
- 'bar' or 'barh' for bar plots
- 'hist' for histogram
- 'box' for boxplot
- 'kde' or 'density' for density plots
- 'area' for area plots
- 'scatter' for scatter plots
- 'hexbin' for hexagonal bin plots
- 'pie' for pie plots
For example, a bar plot can be created the following way:
###Code
plt.figure()
df.iloc[5].plot(kind='bar')
plt.axhline(0, color='k')
plt.show()
df.iloc[5]
###Output
_____no_output_____
###Markdown
stacked bar chart
###Code
df2 = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df2.plot.bar(stacked=True)
plt.show()
###Output
_____no_output_____
###Markdown
horizontal bar chart
###Code
df2.plot.barh(stacked=True)
plt.show()
###Output
_____no_output_____
###Markdown
box plot
###Code
df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
df.plot.box()
plt.show()
###Output
_____no_output_____
###Markdown
area plot
###Code
df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot.area()
plt.show()
###Output
_____no_output_____
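###Markdown
The other kinds listed above follow the same pattern. For instance, here is a hypothetical hexbin sketch; the column names and gridsize are arbitrary assumptions:
###Code
# Hexbin plot using the same kind-based plotting API
df_hb = pd.DataFrame(np.random.randn(1000, 2), columns=['x', 'y'])
df_hb.plot.hexbin(x='x', y='y', gridsize=20)
plt.show()
###Output
_____no_output_____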
###Markdown
Plotting with Missing Data
Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are dropped, left out, or filled depending on the plot type.

| Plot Type      | NaN Handling            |
|----------------|-------------------------|
| Line           | Leave gaps at NaNs      |
| Line (stacked) | Fill 0's                |
| Bar            | Fill 0's                |
| Scatter        | Drop NaNs               |
| Histogram      | Drop NaNs (column-wise) |
| Box            | Drop NaNs (column-wise) |
| Area           | Fill 0's                |
| KDE            | Drop NaNs (column-wise) |
| Hexbin         | Drop NaNs               |
| Pie            | Fill 0's                |

If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled, consider using fillna() or dropna() before plotting, as in the sketch below.
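###Code
# Hypothetical sketch: handle missing values explicitly before plotting
# (the DataFrame, column names, and fill value here are illustrative assumptions)
df_nan = pd.DataFrame(np.random.randn(10, 2), columns=['a', 'b'])
df_nan.iloc[3, 0] = np.nan
df_nan.fillna(0).plot()   # fill gaps with zeros explicitly
plt.show()
df_nan.dropna().plot()    # or drop incomplete rows explicitly
plt.show()
###Output
_____no_output_____
###Markdown
density plot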
###Code
ser = pd.Series(np.random.randn(1000))
ser.plot.kde()
plt.show()
###Output
_____no_output_____
###Markdown
lag plot
Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the lag plot. Non-random structure implies that the underlying data are not random.
###Code
from pandas.plotting import lag_plot
plt.figure()
data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000)))
lag_plot(data)
plt.show()
###Output
_____no_output_____ |
code/iUma22_EscherMapping.ipynb | ###Markdown
Metabolic pathway visualization of Ustilago maydis with Escher. Conversion of the model iCL1079 from SBML to JSON
###Code
# import cobra.test
from os.path import join
from cobra.io import read_sbml_model
# import escher
from escher import Builder
# import cobra
# from time import sleep
# data_dir = cobra.test.data_dir
# pip install pytest-astropy
ModelFile = join('..','model','iUma22.xml')
model=read_sbml_model(ModelFile)
# cobra.io.save_json_model(model, "iCL1079.json")
medium = model.medium
medium['EX_glc__D_e'] = 1
medium['EX_co2_e'] = 0
model.medium = medium
model.summary()
# model.reactions.get_by_id('EX_co2_e').lb = 0
# model.summary()
###Output
_____no_output_____
###Markdown
Export of central carbon pathway map
###Code
builder=Builder()
Escher_Central = join('Maps','iUma22_MetMap_TCA.json')
Escher_Glycine = join('Maps','iUma22_MetMap_glycine.json')
builder = Builder(
map_json=Escher_Central,
model = model, # 'iCL1079.json',
)
# Run FBA with the model and add the flux data to the map
solution = builder.model.optimize()
builder.reaction_data = solution.fluxes
builder.save_html('../code/example_map.html')
###Output
_____no_output_____ |
Analise_Exploratoria_DIO.ipynb | ###Markdown
###Code
# Importing the libraries
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("seaborn")
# Uploading the file
from google.colab import files
arq = files.upload()
# Creating our DataFrame
df = pd.read_excel("AdventureWorks.xlsx")
# Viewing the first 5 rows
df.head()
# Number of rows and columns
df.shape
# Checking the data types
df.dtypes
# What is the total revenue?
df["Valor Venda"].sum()
# What is the total cost?
df["custo"] = df["Custo Unitário"].mul(df["Quantidade"]) # Creating the cost column
df.head(1)
# What is the total cost?
round(df["custo"].sum(), 2)
# Now that we have the revenue, the cost, and their totals, we can find the total profit
# Let's create a profit (lucro) column, which will be revenue - cost
df["lucro"] = df["Valor Venda"] - df["custo"]
df.head(1)
# Total profit
round(df["lucro"].sum(),2)
# Creating a column with the total number of days to ship the product
df["Tempo_envio"] = df["Data Envio"] - df["Data Venda"]
df.head(1)
###Output
_____no_output_____
###Markdown
**Now we want to know the average shipping time for each brand, and for that we need to convert the Tempo_envio column to numeric**
###Code
# Extracting just the days
df["Tempo_envio"] = (df["Data Envio"] - df["Data Venda"]).dt.days
df.head(1)
# Checking the type of the Tempo_envio column
df["Tempo_envio"].dtype
#Mรฉdia do tempo de envio por Marca
df.groupby("Marca")["Tempo_envio"].mean()
###Output
_____no_output_____
###Markdown
**Missing Values**
###Code
# Checking whether we have missing data
df.isnull().sum()
###Output
_____no_output_____
###Markdown
**And what if we want to know the profit per year and per brand?**
###Code
# Let's group by year and brand
df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum()
pd.options.display.float_format = '{:20,.2f}'.format
# Resetting the index
lucro_ano = df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum().reset_index()
lucro_ano
# What is the total number of products sold?
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=False)
#Grรกfico Total de produtos vendidos
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=True).plot.barh(title="Total Produtos Vendidos")
plt.xlabel("Total")
plt.ylabel("Produto");
df.groupby(df["Data Venda"].dt.year)["lucro"].sum().plot.bar(title="Lucro x Ano")
plt.xlabel("Ano")
plt.ylabel("Receita");
df.groupby(df["Data Venda"].dt.year)["lucro"].sum()
# Selecting only the 2009 sales
df_2009 = df[df["Data Venda"].dt.year == 2009]
df_2009.head()
df_2009.groupby(df_2009["Data Venda"].dt.month)["lucro"].sum().plot(title="Lucro x Mรชs")
plt.xlabel("Mรชs")
plt.ylabel("Lucro");
df_2009.groupby("Marca")["lucro"].sum().plot.bar(title="Lucro x Marca")
plt.xlabel("Marca")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal');
df_2009.groupby("Classe")["lucro"].sum().plot.bar(title="Lucro x Classe")
plt.xlabel("Classe")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal');
df["Tempo_envio"].describe()
#Grรกfico de Boxplot
plt.boxplot(df["Tempo_envio"]);
# Histogram
plt.hist(df["Tempo_envio"]);
#Tempo mรญnimo de envio
df["Tempo_envio"].min()
#Tempo mรกximo de envio
df['Tempo_envio'].max()
# Identifying the outlier
df[df["Tempo_envio"] == 20]
df.to_csv("df_vendas_novo.csv", index=False)
###Output
_____no_output_____
###Markdown
###Code
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("seaborn")
from google.colab import files
arq = files.upload()
df = pd.read_excel("AdventureWorks.xlsx")
df.head()
df.shape
df.dtypes
df["Valor Venda"].sum()
df["custo"] = df["Custo Unitรกrio"].mul(df["Quantidade"])
df.head(1)
round(df["custo"].sum(), 2)
df["lucro"] = df["Valor Venda"] - df["custo"]
df.head(1)
round(df["lucro"].sum(), 2)
df["Tempo_envio"] = df["Data Envio"] - df["Data Venda"]
df.head(1)
df["Tempo_envio"] = (df["Data Envio"] - df["Data Venda"]).dt.days
df.head(1)
df["Tempo_envio"].dtype
df.groupby("Marca")["Tempo_envio"].mean()
df.isnull().sum()
df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum()
pd.options.display.float_format = '{:20,.2f}'.format
lucro_ano = df.groupby([df["Data Venda"].dt.year, "Marca"])["lucro"].sum().reset_index()
lucro_ano
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=False)
df.groupby("Produto")["Quantidade"].sum().sort_values(ascending=True).plot.barh(title="Total Produtos Vendidos")
plt.xlabel("Total")
plt.ylabel("Produto")
df.groupby(df["Data Venda"].dt.year)["lucro"].sum().plot.barh(title="Lucro x Ano")
plt.xlabel("Ano")
plt.ylabel("Receita")
df.groupby(df["Data Venda"].dt.year)["lucro"].sum().plot.bar(title="Lucro x Ano")
plt.xlabel("Ano")
plt.ylabel("Receita")
df.groupby(df["Data Venda"].dt.year)["lucro"].sum()
df_2009 = df[df["Data Venda"].dt.year == 2009]
df_2009.groupby(df_2009["Data Venda"].dt.month)["lucro"].sum().plot(title="Lucro x Mรชs")
plt.xlabel("Mรชs")
plt.ylabel("Lucro")
df_2009.groupby("Marca")["lucro"].sum().plot.bar(title="Lucro x Marca")
plt.xlabel("Marca")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal')
df_2009.groupby("Classe")["lucro"].sum().plot.bar(title="Lucro x Classe")
plt.xlabel("Classe")
plt.ylabel("Lucro")
plt.xticks(rotation='horizontal')
df["Tempo_envio"].describe()
plt.boxplot(df["Tempo_envio"])
plt.hist(df["Tempo_envio"])
df["Tempo_envio"].min()
df["Tempo_envio"].max()
df[df["Tempo_envio"] == 20]
df.to_csv("df_vendas_novo.csv", index=False)
###Output
_____no_output_____ |
examples/peak_detection.ipynb | ###Markdown
Peak Detection Feature detection, also referred to as peak detection, is the process by which local maxima that fulfill certain criteria (such as sufficient signal-to-noise ratio) are located in the signal acquired by a given analytical instrument. This process results in โfeaturesโ associated with the analysis of molecular analytes from the sample under study or from chemical, instrument, or random noise.Typically, feature detection involves a mass dimension (*m/z*) as well as one or more separation dimensions (e.g. drift and/or retention time), the latter offering distinction among isobaric/isotopic features. DEIMoS implements an N-dimensional maximum filter from [scipy.ndimage](https://docs.scipy.org/doc/scipy/reference/ndimage.html) that convolves the instrument signal with a structuring element, also known as a kernel, and compares the result against the input array to identify local maxima as candidate features or peaks.To demonstrate, we will operate on a subset of 2D data to minimize memory usage and computation time.
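###Markdown
As a rough, hypothetical illustration of the underlying idea (not DEIMoS's actual implementation), local maxima can be located by comparing an array against its maximum-filtered version; the kernel size below is an arbitrary assumption:
###Code
# Sketch of N-dimensional maximum-filter peak picking with scipy
import numpy as np
from scipy.ndimage import maximum_filter

signal = np.random.rand(64, 64)            # stand-in for instrument data
is_peak = signal == maximum_filter(signal, size=(5, 3))
peak_indices = np.argwhere(is_peak)        # candidate feature coordinates
###Output
_____no_output_____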
###Code
import deimos
import numpy as np
import matplotlib.pyplot as plt
# load data, excluding scanid column
ms1 = deimos.load('example_data.h5', key='ms1', columns=['mz', 'drift_time', 'retention_time', 'intensity'])
# sum over retention time
ms1_2d = deimos.collapse(ms1, keep=['mz', 'drift_time'])
# take a subset in m/z
ms1_2d = deimos.slice(ms1_2d, by='mz', low=200, high=400)
%%time
# perform peak detection
ms1_peaks = deimos.peakpick.local_maxima(ms1_2d, dims=['mz', 'drift_time'], bins=[9.5, 4.25])
###Output
CPU times: user 2.02 s, sys: 204 ms, total: 2.22 s
Wall time: 2.23 s
###Markdown
Selecting Kernel Size Key to this process is the selection of kernel size, which can vary by instrument, dataset, and even compound. For example, in LC-IMS-MS/MS data, peak width increases with increasing *m/z* and drift time, and also varies in retention time. Ideally, the kernel would be the same size as the N-dimensional peak (i.e. wavelets), though computational efficiency considerations for high-dimensional data currently limit the ability to dynamically adjust kernel size. Thus, the selected kernel size should be representative of likely features of interest. This process is exploratory, and selections can be further refined pending an initial processing of the data. To start, we will get a sense of our data by visualizing a high-intensity feature.
###Code
# get maximal data point
mz_i, dt_i, rt_i, intensity_i = ms1.loc[ms1['intensity'] == ms1['intensity'].max(), :].values[0]
# subset the raw data
feature = deimos.slice(ms1,
by=['mz', 'drift_time', 'retention_time'],
low=[mz_i - 0.1, dt_i - 1, rt_i - 1],
high=[mz_i + 0.2, dt_i + 1, rt_i + 2])
# visualize
deimos.plot.multipanel(feature, dpi=150)
plt.tight_layout()
plt.show()
print('{}:\t\t{}'.format('mz', len(feature['mz'].unique())))
print('{}:\t{}'.format('drift_time', len(feature['drift_time'].unique())))
print('{}:\t{}'.format('retention_time', len(feature['retention_time'].unique())))
###Output
mz: 38
drift_time: 17
retention_time: 74
###Markdown
The number of sampled data points in each dimension informs selection of suitable peak detection parameters, in this case 38 values in *m/z*, 17 values in drift time, and 74 values in retention time. For the kernel to be centered on each "voxel", however, selections must be odd. Due to the multidimensional nature of the data, kernel size need not be exact: two features need only be separated in one dimension, not all dimensions simultaneously. Partitioning This dataset is comprised of almost 200,000 unique *m/z* values, 416 unique drift times, and 568 unique retention times.In order to process the data by N-dimensional filter convolution, the data frame-based coordinate format must be converted into a dense array. In this case, a dense array would comprise 4.7E9 cells and, for 32-bit intensities, requiring approximately 174 GB of memory.
###Code
print('{}:\t\t{}'.format('mz', len(ms1['mz'].unique())))
print('{}:\t{}'.format('drift_time', len(ms1['drift_time'].unique())))
print('{}:\t{}'.format('retention_time', len(ms1['retention_time'].unique())))
###Output
mz: 197408
drift_time: 416
retention_time: 568
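###Markdown
A quick arithmetic check of the dense-array footprint quoted above, using the counts from this output:
###Code
# Dense array size: unique m/z values * drift times * retention times
n_cells = 197408 * 416 * 568
print(n_cells)                # ~4.7e10 cells
print(n_cells * 4 / 1024**3)  # ~174 GiB at 4 bytes (32-bit) per cell
###Output
_____no_output_____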
###Markdown
This is of course not tenable for many workstations, necessitating a partitioning utility by which the input may be split along a given dimension, each partition processed separately. Here, we create a `Partitions` object to divide the *m/z* dimension into chunks of 500 unique values, with a partition overlap of 0.2 Da to ameliorate artifacts arising from artificial partition "edges". Next, its `map` method is invoked to apply peak detection to each partition. The `processes` flag may also be specified to spread the computational load over multiple cores. Memory footprint scales linearly with the number of processes.
###Code
%%time
# partition the data
partitions = deimos.partition(ms1_2d, split_on='mz', size=500, overlap=0.2)
# map peak detection over partitions
ms1_peaks_partitioned = partitions.map(deimos.peakpick.local_maxima,
dims=['mz', 'drift_time'],
bins=[9.5, 4.25],
processes=4)
###Output
CPU times: user 4.21 s, sys: 403 ms, total: 4.62 s
Wall time: 5.03 s
###Markdown
With `overlap` selected appropriately, the partitioned result should be identical to the previous result.
###Code
all(ms1_peaks_partitioned == ms1_peaks)
###Output
_____no_output_____
###Markdown
Kernel Scaling Peak widths in *m/z* and drift time increase with *m/z*. In the example data used here, the sample interval in *m/z* also increases with increasing *m/z*. This means that our kernel effectively "grows" as *m/z* increases, as the kernel is selected by a number of such intervals rather than an *m/z* range.
###Code
# unique m/z values
mz_unq = np.unique(ms1_2d['mz'])
# m/z sample intervals
mz_diff = np.diff(mz_unq)
# visualize
plt.figure(dpi=150)
plt.plot(mz_unq[1:], mz_diff)
plt.xlabel('m/z', fontweight='bold')
plt.ylabel('Interval', fontweight='bold')
plt.show()
###Output
_____no_output_____
###Markdown
However, the drift time sample interval is constant throughout the acquisition. To accommodate increasing peak width in drift time, we can scale the kernel in that dimension by the *m/z* per partition, scaled by a reference resolution (i.e. the minimum interval in the above). Thus, the drift time kernel size of the first partition will be scaled by a factor of 1 (no change), the last by a factor of ~1.4. This represents an advanced usage scenario and should only be considered with sufficient justification. That is, knowledge of sample intervals in each dimension, peak widths as a function of these sample intervals, and whether the relationship(s) scale linearly.
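###Markdown
As a quick, hypothetical sanity check of the quoted factors (reusing mz_diff from the cells above), the per-partition scaling is roughly the local m/z sample interval divided by the reference resolution:
###Code
# Rough check of the drift time kernel scaling at the low- and high-m/z ends
print(mz_diff[0] / mz_diff.min())    # first partition, expect ~1
print(mz_diff[-1] / mz_diff.min())   # last partition, expect ~1.4
###Output
_____no_output_____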
###Code
%%time
# partition the data
partitions = deimos.partition(ms1_2d, split_on='mz', size=500, overlap=0.2)
# map peak detection over partitions
ms1_peaks_partitioned = partitions.map(deimos.peakpick.local_maxima,
dims=['mz', 'drift_time'],
bins=[9.5, 4.25],
scale_by='mz',
ref_res=mz_diff.min(),
scale=['drift_time'],
processes=4)
###Output
CPU times: user 4.29 s, sys: 247 ms, total: 4.54 s
Wall time: 4.79 s
|
Spectroscopy/CH4_09-Analyse_Core_Loss.ipynb | ###Markdown
**Chapter 4: [Spectroscopy](CH4-Spectroscopy.ipynb)** Analysis of Core-Loss Spectra **This notebook does not work in Google Colab** [Download](https://raw.githubusercontent.com/gduscher/MSE672-Introduction-to-TEM/main/Spectroscopy/CH4_09-Analyse_Core_Loss.ipynb) part of **[MSE672: Introduction to Transmission Electron Microscopy](../_MSE672_Intro_TEM.ipynb)** by Gerd Duscher, Spring 2021
Microscopy Facilities, Joint Institute of Advanced Materials, Materials Science & Engineering, The University of Tennessee, Knoxville
Background and methods for the analysis and quantification of data acquired with transmission electron microscopes.
Content: Quantitative determination of chemical composition from a core-loss EELS spectrum. Please cite:
[M. Tian et al. *Measuring the areal density of nanomaterials by electron energy-loss spectroscopy* Ultramicroscopy Volume 196, 2019, pages 154-160](https://doi.org/10.1016/j.ultramic.2018.10.009) as a reference of this quantification method.
Load important packages
Check Installed Packages
###Code
import sys
from pkg_resources import get_distribution, DistributionNotFound
def test_package(package_name):
"""Test if package exists and returns version or -1"""
try:
version = (get_distribution(package_name).version)
except (DistributionNotFound, ImportError) as err:
version = '-1'
return version
# pyTEMlib setup ------------------
if test_package('sidpy') < '0.0.5':
print('installing sidpy')
!{sys.executable} -m pip install --upgrade sidpy -q
if test_package('pyTEMlib') < '0.2021.4.20':
print('installing pyTEMlib')
!{sys.executable} -m pip install --upgrade pyTEMlib -q
# ------------------------------
print('done')
###Output
done
###Markdown
Import all relevant libraries
Please note that the EELS_tools package from pyTEMlib is essential.
###Code
%pylab --no-import-all notebook
%gui qt
# Import libraries from pyTEMlib
import pyTEMlib
import pyTEMlib.file_tools as ft # File input/ output library
import pyTEMlib.image_tools as it
import pyTEMlib.eels_tools as eels # EELS methods
import pyTEMlib.interactive_eels as ieels # Dialogs for EELS input and quantification
# For archiving reasons it is a good idea to print the version numbers out at this point
print('pyTEM version: ',pyTEMlib.__version__)
__notebook__ = 'analyse_core_loss'
__notebook_version__ = '2021_04_22'
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Load and plot a spectrum As an example we load the spectrum **1EELS Acquire (high-loss).dm3** from the *example data* folder. Please see [Loading an EELS Spectrum](LoadEELS.ipynb) for details on storage and plotting. First, a dialog to select a file will appear. Then the spectrum plot and ``Spectrum Info`` dialog will appear, in which we set the experimental parameters. Please use the ``Set Energy Scale`` button to change the energy scale. When pressed, a new dialog and a cursor will appear, in which one is able to set the energy scale based on known features in the spectrum.
###Code
# -----Input -------#
load_example = True
try:
main_dataset.h5_dataset.file.close()
except:
pass
if load_example:
main_dataset = ft.open_file('../example_data/EELS_STO.dm3')
else:
main_dataset = ft.open_file()
current_channel = main_dataset.h5_dataset.parent
if 'experiment' not in main_dataset.metadata:
main_dataset.metadata['experiment']= eels.read_dm3_eels_info(main_dataset.original_metadata)
eels.set_previous_quantification(main_dataset)
# US 200 does not set acceleration voltage correctly.
# comment out next line for other microscopes
# current_dataset.metadata['experiment']['acceleration_voltage'] = 200000
info = ieels.InfoDialog(main_dataset)
###Output
C:\Users\gduscher\Anaconda3\lib\site-packages\pyNSID\io\hdf_utils.py:351: FutureWarning: validate_h5_dimension may be removed in a future version
warn('validate_h5_dimension may be removed in a future version',
###Markdown
Chemical Composition The fit of the cross-section and background to the spectrum results in the chemical composition. If the calibration is correct, this composition is given as areal density in atoms/nm$^2$. Fit of Data A dialog window will open; enter the elements first (0 will open a periodic table) and press the ``Fit Composition`` button (bottom right). Adjust parameters as needed and check the fit by pressing the ``Fit Composition`` button again. Select the ``Region`` checkbox to see which parts of the spectrum you chose to fit. Changing the multiplier value will make a simulation of your spectrum. The ``InfoDialog``, if open, still works to change experimental parameters and the energy scale.
###Code
# current_dataset.metadata['edges'] = {'0': {}, 'model': {}}
composition = ieels.CompositionDialog(main_dataset)
###Output
_____no_output_____
###Markdown
Output of Results
###Code
edges = main_dataset.metadata['edges']
element = []
areal_density = []
for key, edge in edges.items():
if key.isdigit():
element.append(edge['element'])
areal_density.append(edge['areal_density'])
print('Relative chemical composition of ', main_dataset.title)
for i in range(len(element)):
print(f'{element[i]}: {areal_density[i]/np.sum(areal_density)*100:.1f} %')
saved_edges_metadata = edges
###Output
Relative chemical composition of EELS_STO
Ti: 21.3 %
O: 78.7 %
###Markdown
Log Data
We write all the data to the hdf5 file associated with our dataset. In our case that is only the ``metadata``, in which we stored the ``experimental parameters`` and the ``fitting parameters and result``.
###Code
current_group = main_dataset.h5_dataset.parent.parent
if 'Log_000' in current_group:
del current_group['Log_000']
log_group = current_group.create_group('Log_000')
log_group['analysis'] = 'EELS_quantification'
log_group['EELS_quantification'] = ''
flat_dict = ft.flatten_dict(main_dataset.metadata)
if 'peak_fit-peak_out_list' in flat_dict:
del flat_dict['peak_fit-peak_out_list']
for key, item in flat_dict.items():
if not key == 'peak_fit-peak_out_list':
log_group.attrs[key]= item
current_group.file.flush()
ft.h5_tree(main_dataset.h5_dataset.file)
###Output
/
โ Measurement_000
---------------
โ Channel_000
-----------
โ EELS_STO
--------
โ EELS_STO
โ __dict__
--------
โ _axes
-----
โ _original_metadata
------------------
โ energy_loss
โ original_metadata
-----------------
โ Log_000
-------
โ EELS_quantification
โ analysis
###Markdown
ELNES
The electron energy-loss near edge structure is determined by fitting the spectrum after subtracting the quantification model. First smooth the spectrum (2 iterations are usually sufficient) and then find the number of peaks you want (this can be repeated as often as one wants).
###Code
peak_dialog = ieels.PeakFitDialog(main_dataset)
###Output
_____no_output_____
###Markdown
Output
###Code
areas = []
for p, peak in peak_dialog.peaks['peaks'].items():
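# Gaussian peak area = amplitude * sigma * sqrt(2*pi); width / sqrt(2*ln 2) plays the role of sigma here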
area = np.sqrt(2* np.pi)* peak['amplitude'] * np.abs(peak['width'] / np.sqrt(2 *np.log(2)))
areas.append(area)
if 'associated_edge' not in peak:
peak['associated_edge']= ''
print(f"peak {p}: position: {peak['position']:7.1f}, area: {area:12.3f} associated edge: {peak['associated_edge']}")
#print(f'\n M4/M5 peak 2 to peak 1 ratio: {(areas[1])/areas[0]:.2f}')
###Output
peak 0: position: 506.5, area: -6722143.802 associated edge: Ti-L2
peak 1: position: 933.7, area: -4819176.064 associated edge:
peak 2: position: 515.9, area: 3289440.959 associated edge:
peak 3: position: 493.8, area: 2197645.853 associated edge: Ti-L2
peak 4: position: 905.3, area: 1857244.132 associated edge:
peak 5: position: 1157.2, area: 1694326.260 associated edge:
peak 6: position: 461.9, area: 1039384.757 associated edge: Ti-L2
peak 7: position: 853.9, area: 476364.383 associated edge:
peak 8: position: 457.1, area: 348689.573 associated edge: Ti-L3
###Markdown
Log Data
###Code
current_group = main_dataset.h5_dataset.parent.parent
if 'Log_001' in current_group:
del current_group['Log_001']
log_group = current_group.create_group('Log_001')
log_group['analysis'] = 'ELNES_fit'
log_group['ELNES_fit'] = ''
metadata = ft.flatten_dict(main_dataset.metadata)
if 'peak_fit-peak_out_list' in metadata:
del metadata['peak_fit-peak_out_list']
for key, item in metadata.items():
if not key == 'peak_fit-peak_out_list':
log_group.attrs[key]= item
current_group.file.flush()
print('Logged Data of ', main_dataset.title)
for key in current_group:
if 'Log_' in key:
if 'analysis' in current_group[key]:
print(f" {key}: {current_group[key]['analysis'][()]}")
###Output
Logged Data of 1EELS Acquire (high_loss)
Log_000: b'EELS_quantification'
Log_001: b'ELNES_fit'
###Markdown
Close FileThe file needs to be closed before it can be used with other notebooks
###Code
main_dataset.h5_dataset.file.close()
###Output
_____no_output_____ |
Unsupervised Learning in R.ipynb | ###Markdown
Unsupervised Learning in R> clustering and dimensionality reduction in R from a machine learning perspective- author: Victor Omondi- toc: true- comments: true- categories: [unsupervised-learning, machine-learning, r]- image: images/ield.png OverviewMany times in machine learning, the goal is to find patterns in data without trying to make predictions. This is called unsupervised learning. One common use case of unsupervised learning is grouping consumers based on demographics and purchasing history to deploy targeted marketing campaigns. Another example is wanting to describe the unmeasured factors that most influence crime differences between cities. We will cover a basic introduction to clustering and dimensionality reduction in R from a machine learning perspective, so that we can get from data to insights as quickly as possible. Libraries
###Code
library(readr)
library(ggplot2)
library(dplyr)
###Output
Warning message:
"package 'readr' was built under R version 3.6.3"Warning message:
"package 'dplyr' was built under R version 3.6.3"
Attaching package: 'dplyr'
The following objects are masked from 'package:stats':
filter, lag
The following objects are masked from 'package:base':
intersect, setdiff, setequal, union
###Markdown
Unsupervised learning in RThe k-means algorithm is one common approach to clustering. We will explore how the algorithm works under the hood, implement k-means clustering in R, visualize and interpret the results, and select the number of clusters when it's not known ahead of time. By the end, we'll have applied k-means clustering to a fun "real-world" dataset! Types of machine learning- Unsupervised learning - Finding structure in unlabeled data- Supervised learning - Making predictions based on labeled data - Predictions like regression or classification- Reinforcement learning Unsupervised learning - clustering- Finding homogeneous subgroups within a larger group - _People have features such as income, educational attainment, and gender_ Unsupervised learning - dimensionality reduction- Finding homogeneous subgroups within a larger group - Clustering- Finding patterns in the features of the data - Dimensionality reduction - Find patterns in the features of the data - Visualization of high dimensional data - Pre-processing before supervised learning Challenges and benefits- No single goal of analysis- Requires more creativity- Much more unlabeled data available than cleanly labeled data Introduction to k-means clustering k-means clustering algorithm- Breaks observations into a pre-defined number of clusters k-means in R- One observation per row, one feature per column- k-means has a random component- Run algorithm multiple times to improve odds of the best model
###Code
x <- as.matrix(x_df <- read_csv("datasets/x.csv"))
class(x)
head(x)
###Output
_____no_output_____
###Markdown
k-means clusteringWe have created some two-dimensional data and stored it in a variable called `x`.
###Code
x_df %>%
ggplot(aes(x=V1, y=V2)) +
geom_point()
###Output
_____no_output_____
###Markdown
The scatter plot on the above is a visual representation of the data.We will create a k-means model of the `x` data using 3 clusters, then to look at the structure of the resulting model using the `summary()` function.
###Code
# Create the k-means model: km.out
km.out_x <- kmeans(x, centers=3, nstart=20)
# Inspect the result
summary(km.out_x)
###Output
_____no_output_____
###Markdown
Results of kmeans()The `kmeans()` function produces several outputs. One is the output of modeling, the cluster membership. We will access the `cluster` component directly. This is useful anytime we need the cluster membership for each observation of the data used to build the clustering model. This cluster membership might be used to help communicate the results of k-means modeling.`k-means` models also have a print method to give a human friendly output of basic modeling results. This is available by using `print()` or simply typing the name of the model.
###Code
# Print the cluster membership component of the model
km.out_x$cluster
# Print the km.out object
km.out_x
###Output
_____no_output_____
###Markdown
Visualizing and interpreting results of kmeans()One of the more intuitive ways to interpret the results of k-means models is by plotting the data as a scatter plot and using color to label the samples' cluster membership. We will use the standard `plot()` function to accomplish this.To create a scatter plot, we can pass data with two features (i.e. columns) to `plot()` with an extra argument `col = km.out$cluster`, which sets the color of each point in the scatter plot according to its cluster membership.
###Code
# Scatter plot of x
plot(x, main="k-means with 3 clusters", col=km.out_x$cluster, xlab="", ylab="")
###Output
_____no_output_____
###Markdown
How k-means works and practical matters Objectives- Explain how k-means algorithm is implemented visually- **Model selection**: determining number of clusters Model selection- Recall k-means has a random component- Best outcome is based on total within cluster sum of squares: - For each cluster - For each observation in the cluster - Determine squared distance from observation to cluster center - Sum all of them together- Running algorithm multiple times helps find the global minimum total within cluster sum of squares Handling random algorithms`kmeans()` randomly initializes the centers of clusters. This random initialization can result in assigning observations to different cluster labels. Also, the random initialization can result in finding different local minima for the k-means algorithm. we will demonstrate both results.At the top of each plot, the measure of model qualityโtotal within cluster sum of squares errorโwill be plotted. Look for the model(s) with the lowest error to find models with the better model results.Because `kmeans()` initializes observations to random clusters, it is important to set the random number generator seed for reproducibility.
###Code
# Set up 2 x 3 plotting grid
par(mfrow = c(2, 3))
# Set seed
set.seed(1)
for(i in 1:6) {
# Run kmeans() on x with three clusters and one start
km.out <- kmeans(x, centers=3, nstart=1)
# Plot clusters
plot(x, col = km.out$cluster,
main = km.out$tot.withinss,
xlab = "", ylab = "")
}
###Output
_____no_output_____
###Markdown
Because of the random initialization of the k-means algorithm, there's quite some variation in cluster assignments among the six models. Selecting number of clustersThe k-means algorithm assumes the number of clusters as part of the input. If you know the number of clusters in advance (e.g. due to certain business constraints) this makes setting the number of clusters easy. However, if you do not know the number of clusters and need to determine it, you will need to run the algorithm multiple times, each time with a different number of clusters. From this, we can observe how a measure of model quality changes with the number of clusters. We will run `kmeans()` multiple times to see how model quality changes as the number of clusters changes. Plots displaying this information help to determine the number of clusters and are often referred to as scree plots. The ideal plot will have an elbow where the quality measure improves more slowly as the number of clusters increases. This indicates that the quality of the model is no longer improving substantially as the model complexity (i.e. number of clusters) increases. In other words, the elbow indicates the number of clusters inherent in the data.
###Code
# Initialize total within sum of squares error: wss
wss <- 0
# For 1 to 15 cluster centers
for (i in 1:15) {
km.out <- kmeans(x, centers = i, nstart=20)
# Save total within sum of squares to wss variable
wss[i] <- km.out$tot.withinss
}
# Plot total within sum of squares vs. number of clusters
plot(1:15, wss, type = "b",
xlab = "Number of Clusters",
ylab = "Within groups sum of squares")
# Set k equal to the number of clusters corresponding to the elbow location
k <- 2
###Output
_____no_output_____
###Markdown
Looking at the scree plot, it looks like there are inherently 2 or 3 clusters in the data. Introduction to the Pokemon data
###Code
pokemon <- read_csv("datasets//Pokemon.csv")
head(pokemon)
###Output
Parsed with column specification:
cols(
Number = col_double(),
Name = col_character(),
Type1 = col_character(),
Type2 = col_character(),
Total = col_double(),
HitPoints = col_double(),
Attack = col_double(),
Defense = col_double(),
SpecialAttack = col_double(),
SpecialDefense = col_double(),
Speed = col_double(),
Generation = col_double(),
Legendary = col_logical()
)
###Markdown
Data challenges- Selecting the variables to cluster upon- Scaling the data- Determining the number of clusters - Often no clean "elbow" in scree plot - This will be a core part. - Visualize the results for interpretation Practical matters: working with real dataDealing with real data is often more challenging than dealing with synthetic data. Synthetic data helps with learning new concepts and techniques, but we will deal with data that is closer to the type of real data we might find in professional or academic pursuits. The first challenge with the Pokemon data is that there is no pre-determined number of clusters. We will determine the appropriate number of clusters, keeping in mind that in real data the elbow in the scree plot might be less of a sharp elbow than in synthetic data. We'll use our judgement on making the determination of the number of clusters. We'll be plotting the outcomes of the clustering on two dimensions, or features, of the data. An additional note: We'll utilize the `iter.max` argument to `kmeans()`. `kmeans()` is an iterative algorithm, repeating over and over until some stopping criterion is reached. The default number of iterations for `kmeans()` is 10, which is not enough for the algorithm to converge and reach its stopping criterion, so we'll set the number of iterations to 50 to overcome this issue.
###Code
head(pokemon <- pokemon %>%
select(HitPoints:Speed))
# Initialize total within sum of squares error: wss
wss <- 0
# Look over 1 to 15 possible clusters
for (i in 1:15) {
# Fit the model: km.out
km.out <- kmeans(pokemon, centers = i, nstart = 20, iter.max = 50)
# Save the within cluster sum of squares
wss[i] <- km.out$tot.withinss
}
# Produce a scree plot
plot(1:15, wss, type = "b",
xlab = "Number of Clusters",
ylab = "Within groups sum of squares")
# Select number of clusters
k <- 2
# Build model with k clusters: km.out
km.out <- kmeans(pokemon, centers = 2, nstart = 20, iter.max = 50)
# View the resulting model
km.out
# Plot of Defense vs. Speed by cluster membership
plot(pokemon[, c("Defense", "Speed")],
col = km.out$cluster,
main = paste("k-means clustering of Pokemon with", k, "clusters"),
xlab = "Defense", ylab = "Speed")
###Output
_____no_output_____
###Markdown
Review of k-means clustering Chapter review- Unsupervised vs. supervised learning- How to create k-means cluster model in R- How k-means algorithm works- Model selection- Application to "real" (and hopefully fun) dataset Hierarchical clusteringHierarchical clustering is another popular method for clustering. We will go over how it works, how to use it, and how it compares to k-means clustering. Introduction to hierarchical clustering Hierarchical clustering- Number of clusters is not known ahead of time- Two kinds: bottom-up and top-down, we will focus on bottom-up Hierarchical clustering in R
###Code
head(x <- as.matrix(x_df <- read_csv("datasets//x2.csv")))
class(x)
dist_matrix <- dist(x)
hclust(dist_matrix)
###Output
_____no_output_____
###Markdown
Hierarchical clustering with resultsWe will create hierarchical clustering model using the `hclust()` function.
###Code
# Create hierarchical clustering model: hclust.out
hclust.out <- hclust(dist(x))
# Inspect the result
summary(hclust.out)
###Output
_____no_output_____
###Markdown
Selecting number of clusters Dendrogram- Tree shaped structure used to interpret hierarchical clustering models
###Code
plot(hclust.out)
abline(h=6, col="red")
abline(h=3.5, col="red")
abline(h=4.5, col="red")
abline(h=6.9, col="red")
abline(h=9, col="red")
###Output
_____no_output_____
###Markdown
If you cut the tree at a height of 6.9, you're left with 3 branches representing 3 distinct clusters. Tree "cutting" in R`cutree()` is the R function that cuts a hierarchical model. The `h` and `k` arguments to `cutree()` allow you to cut the tree based on a certain height `h` or a certain number of clusters `k`.
###Code
cutree(hclust.out, h=6)
cutree(hclust.out, k=2)
# Cut by height
cutree(hclust.out, h=7)
# Cut by number of clusters
cutree(hclust.out, k=3)
###Output
_____no_output_____
###Markdown
The output of each `cutree()` call represents the cluster assignments for each observation in the original dataset Clustering linkage and practical matters Linking clusters in hierarchical clustering- How is distance between clusters determined? Rules?- Four methods to determine which cluster should be linked - **Complete**: pairwise similarity between all observations in cluster 1 and cluster 2, and uses ***largest of similarities*** - **Single**: same as above but uses ***smallest of similarities*** - **Average**: same as above but uses ***average of similarities*** - **Centroid**: finds centroid of cluster 1 and centroid of cluster 2, and uses ***similarity between two centroids*** Linkage in R
###Code
hclust.complete <- hclust(dist(x), method = "complete")
hclust.average <- hclust(dist(x), method = "average")
hclust.single <- hclust(dist(x), method = "single")
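# Centroid linkage (the fourth method described above) is also available;
# note that hclust expects squared Euclidean distances for this method
hclust.centroid <- hclust(dist(x)^2, method = "centroid")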
plot(hclust.complete, main="Complete")
plot(hclust.average, main="Average")
plot(hclust.single, main="Single")
###Output
_____no_output_____
###Markdown
Whether you want balanced or unbalanced trees for a hierarchical clustering model depends on the context of the problem you're trying to solve. Balanced trees are essential if you want an even number of observations assigned to each cluster. On the other hand, if you want to detect outliers, for example, an unbalanced tree is more desirable because pruning an unbalanced tree can result in most observations assigned to one cluster and only a few observations assigned to other clusters. Practical matters- Data on different scales can cause undesirable results in clustering methods- Solution is to scale data so that features have same mean and standard deviation - Subtract mean of a feature from all observations - Divide each feature by the standard deviation of the feature - Normalized features have a mean of zero and a standard deviation of one
###Code
colMeans(x)
apply(x, 2, sd)
scaled_x <- scale(x)
colMeans(scaled_x)
apply(scaled_x, 2, sd)
###Output
_____no_output_____
###Markdown
Linkage methodsWe will produce hierarchical clustering models using different linkages and plot the dendrogram for each, observing the overall structure of the trees. Practical matters: scalingClustering real data may require scaling the features if they have different distributions. We will go back to working with "real" data, the `pokemon` dataset. We will observe the distribution (mean and standard deviation) of each feature, scale the data accordingly, then produce a hierarchical clustering model using the complete linkage method.
###Code
# View column means
colMeans(pokemon)
# View column standard deviations
apply(pokemon, 2, sd)
# Scale the data
pokemon.scaled <- scale(pokemon)
# Create hierarchical clustering model: hclust.pokemon
hclust.pokemon <- hclust(dist(pokemon.scaled), method="complete")
###Output
_____no_output_____
###Markdown
Comparing kmeans() and hclust()Comparing k-means and hierarchical clustering, we'll see the two methods produce different cluster memberships. This is because the two algorithms make different assumptions about how the data is generated. In a more advanced course, we could choose to use one model over another based on the quality of the models' assumptions, but for now, it's enough to observe that they are different. We will compare results from the two models on the pokemon dataset to see how they differ.
###Code
# Apply cutree() to hclust.pokemon: cut.pokemon
cut.pokemon<- cutree(hclust.pokemon, k=3)
# Compare methods
table(km.out$cluster, cut.pokemon)
###Output
_____no_output_____
###Markdown
Dimensionality reduction with PCAPrincipal component analysis, or PCA, is a common approach to dimensionality reduction. We'll explore exactly what PCA does, visualize the results of PCA with biplots and scree plots, and deal with practical issues such as centering and scaling the data before performing PCA. Introduction to PCA Two methods of clustering- Two methods of clustering - finding groups of homogeneous items- Next up, dimensionality reduction - Find structure in features - Aid in visualization Dimensionality reduction- A popular method is principal component analysis (PCA)- Three goals when finding lower dimensional representation of features: - Find linear combination of variables to create principal components - Maintain most variance in the data - Principal components are uncorrelated (i.e. orthogonal to each other)
###Code
head(iris)
###Output
_____no_output_____
###Markdown
PCA in R
###Code
summary(
pr.iris <- prcomp(x=iris[-5], scale=F, center=T)
)
###Output
_____no_output_____
###Markdown
PCA using prcomp()We will create a PCA model and observe the diagnostic results. We have loaded the Pokemon data, which has six dimensions, and placed it in a variable called `pokemon`. We'll create a PCA model of the data, then inspect the resulting model using the `summary()` function.
###Code
# Perform scaled PCA: pr.out
pr.out <- prcomp(pokemon, scale=T)
# Inspect model output
summary(pr.out)
###Output
_____no_output_____
###Markdown
The first 3 principal components describe around 77% of the variance. Additional results of PCAPCA models in R produce additional diagnostic and output components:- `center`: the column means used to center to the data, or `FALSE` if the data weren't centered- `scale`: the column standard deviations used to scale the data, or `FALSE` if the data weren't scaled- `rotation`: the directions of the principal component vectors in terms of the original features/variables. This information allows you to define new data in terms of the original principal components- `x`: the value of each observation in the original dataset projected to the principal componentsYou can access these the same as other model components. For example, use `pr.out$rotation` to access the rotation component
###Code
head(pr.out$x)
pr.out$center
pr.out$scale
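# the rotation component described above holds the per-variable loadings
head(pr.out$rotation)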
###Output
_____no_output_____
###Markdown
Visualizing and interpreting PCA results Biplots in R
###Code
biplot(pr.iris)
###Output
_____no_output_____
###Markdown
Petal.Width and Petal.Length are correlated in the original dataset. Scree plots in R
###Code
# Getting proportion of variance for a scree plot
pr.var <- pr.iris$sdev^2
pve <- pr.var/sum(pr.var)
# Plot variance explained for each principal component
plot(pve, main="variance explained for each principal component",
xlab="Principal component", ylab="Proportion of Variance Explained", ylim=c(0,1), type="b")
###Output
_____no_output_____
###Markdown
Interpreting biplots (1)The `biplot()` function plots both the principal component loadings and the mapping of the observations to their first two principal component values. We will interpret the `biplot()` visualization. Using the `biplot()` of the `pr.out` model, which two original variables have approximately the same loadings in the first two principal components?
###Code
biplot(pr.out)
###Output
_____no_output_____
###Markdown
`Attack` and `HitPoints` have approximately the same loadings in the first two principal components Variance explainedThe second common plot type for understanding PCA models is a scree plot. A scree plot shows the variance explained as the number of principal components increases. Sometimes the cumulative variance explained is plotted as well. We will prepare data from the `pr.out` model for use in a scree plot. Preparing the data for plotting is required because there is not a built-in function in R to create this type of plot.
###Code
# Variability of each principal component: pr.var
pr.var <- pr.out$sdev^2
# Variance explained by each principal component: pve
pve <-pr.var / sum(pr.var)
###Output
_____no_output_____
###Markdown
Visualize variance explainedNow we will create a scree plot showing the proportion of variance explained by each principal component, as well as the cumulative proportion of variance explained. These plots can help to determine the number of principal components to retain. One way to determine the number of principal components to retain is by looking for an elbow in the scree plot showing that as the number of principal components increases, the rate at which variance is explained decreases substantially. In the absence of a clear elbow, we can use the scree plot as a guide for setting a threshold.
###Code
# Plot variance explained for each principal component
plot(pve, xlab = "Principal Component",
ylab = "Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
# Plot cumulative proportion of variance explained
plot(cumsum(pve), xlab = "Principal Component",
ylab = "Cumulative Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
###Output
_____no_output_____
###Markdown
When the number of principal components is equal to the number of original features in the data, the cumulative proportion of variance explained is 1. Practical issues with PCA- Scaling the data- Missing values: - Drop observations with missing values - Impute / estimate missing values- Categorical data: - Do not use categorical data features - Encode categorical features as numbers mtcars dataset
###Code
head(mtcars)
###Output
_____no_output_____
###Markdown
Scaling
###Code
# Means and standard deviations vary a lot
round(colMeans(mtcars), 2)
round(apply(mtcars, 2, sd), 2)
biplot(prcomp(mtcars, center=T, scale=F))
biplot(prcomp(mtcars, scale=T, center=T))
###Output
_____no_output_____
###Markdown
Practical issues: scalingScaling data before doing PCA changes the results of the PCA modeling. Here, we will perform PCA with and without scaling, then visualize the results using biplots. Sometimes scaling is appropriate when the variances of the variables are substantially different. This is commonly the case when variables have different units of measurement, for example, degrees Fahrenheit (temperature) and miles (distance). Making the decision to use scaling is an important step in performing a principal component analysis.
###Code
head(pokemon <- read_csv("datasets//Pokemon.csv"))
head(pokemon <- pokemon%>%
select(Total, HitPoints, Attack, Defense, Speed))
# Mean of each variable
colMeans(pokemon)
# Standard deviation of each variable
apply(pokemon, 2, sd)
# PCA model with scaling: pr.with.scaling
pr.with.scaling <- prcomp(pokemon, center=T, scale=T)
# PCA model without scaling: pr.without.scaling
pr.without.scaling <- prcomp(pokemon, center=T, scale=F)
# Create biplots of both for comparison
biplot(pr.with.scaling)
biplot(pr.without.scaling)
###Output
_____no_output_____
###Markdown
The new Total column contains much more variation, on average, than the other four columns, so it has a disproportionate effect on the PCA model when scaling is not performed. After scaling the data, there's a much more even distribution of the loading vectors. Exploring Wisconsin breast cancer data Introduction to the case study- Human breast mass data: - Ten features measured of each cell nuclei - Summary information is provided for each group of cells - Includes diagnosis: benign (not cancerous) and malignant (cancerous) Analysis- Download data and prepare data for modeling- Exploratory data analysis ( observations, features, etc.)- Perform PCA and interpret results- Complete two types of clusteringUnderstand and compare the two types- Combine PCA and clustering Preparing the data
###Code
url <- "datasets//WisconsinCancer.csv"
# Read in the data: wisc.df
wisc.df <- read.csv(url)
# Convert the features of the data: wisc.data
wisc.data <- as.matrix(wisc.df[3:32])
# Set the row names of wisc.data
row.names(wisc.data) <- wisc.df$id
# Create diagnosis vector
diagnosis <- as.numeric(wisc.df$diagnosis == "M")
###Output
_____no_output_____
###Markdown
Exploratory data analysis- How many observations are in this dataset?
###Code
dim(wisc.data)
names(wisc.df)
###Output
_____no_output_____
###Markdown
Performing PCAThe next step is to perform PCA on wisc.data. It's important to check if the data need to be scaled before performing PCA. Two common reasons for scaling data:- The input variables use different units of measurement.- The input variables have significantly different variances.
###Code
# Check column means and standard deviations
colMeans(wisc.data)
apply(wisc.data, 2, sd)
# Execute PCA, scaling if appropriate: wisc.pr
wisc.pr <- prcomp(wisc.data, center=T, scale=T)
# Look at summary of results
summary(wisc.pr)
###Output
_____no_output_____
###Markdown
Interpreting PCA resultsNow we'll use some visualizations to better understand the PCA model.
###Code
# Create a biplot of wisc.pr
biplot(wisc.pr)
# Scatter plot observations by components 1 and 2
plot(wisc.pr$x[, c(1, 2)], col = (diagnosis + 1),
xlab = "PC1", ylab = "PC2")
# Repeat for components 1 and 3
plot(wisc.pr$x[, c(1,3)], col = (diagnosis + 1),
xlab = "PC1", ylab = "PC3")
# Do additional data exploration of your choosing below (optional)
plot(wisc.pr$x[, c(2,3)], col = (diagnosis + 1),
xlab = "PC2", ylab = "PC3")
###Output
_____no_output_____
###Markdown
Because principal component 2 explains more variance in the original data than principal component 3, you can see that the first plot has a cleaner cut separating the two subgroups. Variance explainedWe will produce scree plots showing the proportion of variance explained as the number of principal components increases. The data from PCA must be prepared for these plots, as there is not a built-in function in R to create them directly from the PCA model.
###Code
# Set up 1 x 2 plotting grid
par(mfrow = c(1, 2))
# Calculate variability of each component
pr.var <- wisc.pr$sdev^2
# Variance explained by each principal component: pve
pve <- pr.var/sum(pr.var)
# Plot variance explained for each principal component
plot(pve, xlab = "Principal Component",
ylab = "Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
# Plot cumulative proportion of variance explained
plot(cumsum(pve), xlab = "Principal Component",
ylab = "Cumulative Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
###Output
_____no_output_____
###Markdown
Next steps- Complete hierarchical clustering- Complete k-means clustering- Combine PCA and clustering- Contrast results of hierarchical clustering with diagnosis- Compare hierarchical and k-means clustering results- PCA as a pre-processing step for clustering Hierarchical clustering of case dataWe will do hierarchical clustering of the observations. This type of clustering does not assume in advance the number of natural groups that exist in the data. As part of the preparation for hierarchical clustering, distances between all pairs of observations are computed. Furthermore, there are different ways to link clusters together, with single, complete, and average being the most common linkage methods.
###Code
# Scale the wisc.data data: data.scaled
data.scaled <- scale(wisc.data)
# Calculate the (Euclidean) distances: data.dist
data.dist <- dist(data.scaled)
# Create a hierarchical clustering model: wisc.hclust
wisc.hclust <- hclust(data.dist, method="complete")
###Output
_____no_output_____
###Markdown
Results of hierarchical clustering
###Code
plot(wisc.hclust)
###Output
_____no_output_____
###Markdown
Selecting number of clustersWe will compare the outputs from our hierarchical clustering model to the actual diagnoses. Normally when performing unsupervised learning like this, a target variable isn't available. We do have it with this dataset, however, so it can be used to check the performance of the clustering model. When performing supervised learning (that is, when we're trying to predict some target variable of interest and that target variable is available in the original data), using clustering to create new features may or may not improve the performance of the final model.
###Code
# Cut tree so that it has 4 clusters: wisc.hclust.clusters
wisc.hclust.clusters <- cutree(wisc.hclust, k=4)
# Compare cluster membership to actual diagnoses
table(wisc.hclust.clusters, diagnosis)
###Output
_____no_output_____
###Markdown
Four clusters were picked after some exploration. Before moving on, we may want to explore how different numbers of clusters affect the ability of the hierarchical clustering to separate the different diagnoses. k-means clustering and comparing resultsThere are two main types of clustering: hierarchical and k-means. We will create a k-means clustering model on the Wisconsin breast cancer data and compare the results to the actual diagnoses and the results of our hierarchical clustering model.
###Code
# Create a k-means model on wisc.data: wisc.km
wisc.km <- kmeans(scale(wisc.data), centers=2, nstart=20)
# Compare k-means to actual diagnoses
table(wisc.km$cluster, diagnosis)
# Compare k-means to hierarchical clustering
table(wisc.km$cluster, wisc.hclust.clusters)
###Output
_____no_output_____
###Markdown
Looking at the second table you generated, it looks like clusters 1, 2, and 4 from the hierarchical clustering model can be interpreted as the cluster 1 equivalent from the k-means algorithm, and cluster 3 can be interpreted as the cluster 2 equivalent. Clustering on PCA resultsWe will put together several steps used earlier and, in doing so, we will experience some of the creativity that is typical in unsupervised learning. The PCA model required significantly fewer features to describe 80% and 95% of the variability of the data. In addition to normalizing data and potentially avoiding overfitting, PCA also uncorrelates the variables, sometimes improving the performance of other modeling techniques. Let's see if PCA improves or degrades the performance of hierarchical clustering.
###Code
# Create a hierarchical clustering model: wisc.pr.hclust
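# (the first 7 PCs are used below; they capture roughly 90% of the variance
# according to the scree plots above)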
wisc.pr.hclust <- hclust(dist(wisc.pr$x[, 1:7]), method = "complete")
# Cut model into 4 clusters: wisc.pr.hclust.clusters
wisc.pr.hclust.clusters <- cutree(wisc.pr.hclust, k=4)
# Compare to actual diagnoses
table(wisc.pr.hclust.clusters, diagnosis)
table(wisc.hclust.clusters, diagnosis)
# Compare to k-means and hierarchical
table(wisc.km$cluster, diagnosis)
###Output
_____no_output_____ |
Lab 07.ipynb | ###Markdown
Q1Company: Federal Reserve Bank of RichmondDescription: Researching and working with economists to create data for speeches and brief content for the president.[Website](https://www.indeed.com/viewjob?jk=03031870201e12fd&tk=1d5umojbfacoo802&from=serp&vjs=3) Q2 and Q3
###Code
import xlwt
from collections import Counter
mybook = xlwt.Workbook()
sheet = mybook.add_sheet('word_count')
i = 0
sheet.write(i,0,'word')
sheet.write(i,1,'count')
with open ('job.txt','r') as job:
count_result = Counter(job.read().split())
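    # most_common(20) returns the 20 highest-count (word, count) pairs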
for result in count_result.most_common(20):
i = i+1
sheet.write(i,0,result[0])
sheet.write(i,1,result[1])
mybook.save("job_word_count.xls")
###Output
_____no_output_____
###Markdown
Q4
###Code
import xlrd
mybook = xlrd.open_workbook("job_word_count.xls")
sheet = mybook.sheet_by_name("word_count")
for i in range(sheet.nrows):
row = sheet.row_values(i)
print(row)
###Output
['word', 'count']
['and', 33.0]
['the', 22.0]
['of', 19.0]
['to', 18.0]
['a', 11.0]
['or', 10.0]
['with', 9.0]
['for', 8.0]
['in', 8.0]
['is', 7.0]
['are', 7.0]
['as', 6.0]
['economic', 6.0]
['data', 5.0]
['research', 5.0]
['special', 5.0]
['be', 5.0]
['support', 4.0]
['work', 4.0]
['this', 4.0]
|
notebooks/monte_carlo_dev/test_oil_capacity.ipynb | ###Markdown
Plot up tanker fuel capacity by vessel length
###Code
# numpy and matplotlib are used below but were not imported earlier in this
# notebook; importing them here so the cell is self-contained
import numpy
import matplotlib.pyplot as plt

# SuezMax: 5986.7 m3 (4,025/130 m3 for HFO/diesel)
# Aframax: 2,984 m3 (2,822/162 for HFO/diesel)
# Handymax as 1,956 m3 (1,826/130 m3 for HFO/diesel)
# Small Tanker: 740 m3 (687/53 for HFO/diesel)
# SuezMax (281 m)
# Aframax (201-250 m)
# Handymax (182 m)
# Small Tanker (< 150 m)
tanker_size_classes = [
'SuezMax (251-300 m)',
'Aframax (201-250 m)',
'Handymax (151-200 m)',
'Small Tanker (< 150 m)'
]
fuel_hfo_to_diesel = [
4025/130, 2822/162, 1826/130, 687/53
]
capacity = numpy.array([
740000, 1956000, 2984000, 5986000.7,
])
length = numpy.array([108.5, 182, 247.24, 281])
coef3 = numpy.polyfit(length, capacity, 3)
fit = (
coef3[3] +
coef3[2]*length +
coef3[1]*numpy.power(length,2) +
coef3[0]*numpy.power(length,3)
)
numpy.array(fit.tolist())
test_length = range(50, 320, 25)
fit = (
coef3[3] +
coef3[2]*test_length +
coef3[1]*numpy.power(test_length,2) +
coef3[0]*numpy.power(test_length,3)
)
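# Equivalent, using numpy's polynomial evaluator (polyfit returns coefficients
# highest-degree first, which is the order numpy.polyval expects):
# fit = numpy.polyval(coef3, test_length)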
## plot fit
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.scatter(length[:], capacity[:],50)
ax1.plot(test_length, fit)
plt.xlabel('tanker length (m)',fontsize=12)
plt.ylabel('tanker fuel capacity (liters)',fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Plot up tank barge oil capacity from values in our [MMSI spreadsheet](https://docs.google.com/spreadsheets/d/1dlT0JydkFG43LorqgtHle5IN6caRYjf_3qLrUYqANDY/editgid=591561201)
###Code
harley_cargo_lengths = [
65.3796, 73.4568,
73.4568,73.4568,
74.676, 83.8962,
85.0392, 86.868,
90.678, 108.5088,
114.3, 128.4732,
128.7018, 128.7018,
130.5306, 130.5306,
130.5306
]
harley_cargo_capacity = [
2544952.93, 5159066.51,
5195793.20, 5069714.13,
3973955.05, 3461689.27,
6197112.22, 7685258.62,
4465075.16, 5609803.16,
12652106.22, 12791381.46,
13315412.50, 13315412.50,
13041790.71, 13009356.75,
13042744.65
]
harley_volume_per_hold = [
254495.293, 573229.6122,
577310.3556, 563301.57,
397395.505, 247263.5193,
476700.94, 548947.0443,
744179.1933, 560980.316,
903721.8729, 913670.1043,
1109617.708, 1109617.708,
931556.4793, 929239.7679,
931624.6179
]
C = numpy.polyfit(
    harley_cargo_lengths,
harley_cargo_capacity,
3
)
test_length = range(65,135,5)
harley_cargo_fit = (
C[3] + C[2]*test_length +
C[1]*numpy.power(test_length,2) +
C[0]*numpy.power(test_length,3)
)
## plot fit
fig2 = plt.figure()
ax1 = fig2.add_subplot(111)
ax1.scatter(harley_cargo_lengths[:], harley_cargo_capacity[:],50)
ax1.plot(test_length, harley_cargo_fit)
plt.xlabel('cargo tank length (m)',fontsize=12)
plt.ylabel('cargo tank capacity (liters)',fontsize=12)
plt.show()
## plot volume per hold
fig3 = plt.figure()
ax1 = fig3.add_subplot(111)
ax1.scatter(harley_cargo_lengths[:], harley_volume_per_hold[:],50)
plt.xlabel('cargo tank length (m)',fontsize=12)
plt.ylabel('cargo volume per hold (liters)',fontsize=12)
plt.show()
test_length
###Output
_____no_output_____
###Markdown
Plot up tug barge fuel capacity from values in our [MMSI spreadsheet](https://docs.google.com/spreadsheets/d/1dlT0JydkFG43LorqgtHle5IN6caRYjf_3qLrUYqANDY/editgid=591561201)
###Code
tug_length = [
33.92424, 33.92424,
33.92424, 32.06496,
38.34384, 41.4528,
41.45
]
tug_fuel_capacity = [
101383.00, 383776.22,
378541.00, 302832.80,
545099.04, 567811.50,
300000.00
]
## plot fit
fig4 = plt.figure()
ax1 = fig4.add_subplot(111)
ax1.scatter(tug_length[:], tug_fuel_capacity[:],50)
plt.xlabel('tug length (m)',fontsize=12)
plt.ylabel('tug fuel capacity (liters)',fontsize=12)
plt.show()
###Output
_____no_output_____ |
1. Foundations of NLP.ipynb | ###Markdown
NLP with Deep Learning for EveryoneFoundations of NLP Bruno Gonçalves www.data4sci.com @bgoncalves, @data4sci
###Code
import warnings
warnings.filterwarnings('ignore')
import os
import gzip
from collections import Counter
from pprint import pprint
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import string
import nltk
from nltk.corpus import stopwords
from nltk.text import TextCollection
from nltk.collocations import BigramCollocationFinder
from nltk.metrics.association import BigramAssocMeasures
import sklearn
from sklearn.manifold import TSNE
from tqdm import tqdm
tqdm.pandas()
import watermark
%load_ext watermark
%matplotlib inline
###Output
_____no_output_____
###Markdown
We start by printing out the versions of the libraries we're using for future reference
###Code
%watermark -n -v -m -g -iv
###Output
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
Compiler : Clang 10.0.0
OS : Darwin
Release : 20.5.0
Machine : x86_64
Processor : i386
CPU cores : 16
Architecture: 64bit
Git hash: 39ce5cf3bd8432047933c28fa699f7e39cb38900
watermark : 2.1.0
pandas : 1.1.3
numpy : 1.19.2
nltk : 3.5
sklearn : 0.24.1
matplotlib: 3.3.2
json : 2.0.9
###Markdown
Load default figure style
###Code
plt.style.use('./d4sci.mplstyle')
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
One-Hot Encoding
###Code
text = """Mary had a little lamb, little lamb,
little lamb. 'Mary' had a little lamb
whose fleece was white as snow.
And everywhere that Mary went
Mary went, MARY went. Everywhere
that mary went,
The lamb was sure to go"""
###Output
_____no_output_____
###Markdown
The first step is to tokenize the text. NLTK provides us with a convenient tokenizer function we can use. It is also smart enough to be able to handle different languages
###Code
tokens = nltk.word_tokenize(text, 'english')
pprint(tokens)
###Output
['Mary',
'had',
'a',
'little',
'lamb',
',',
'little',
'lamb',
',',
'little',
'lamb',
'.',
"'Mary",
"'",
'had',
'a',
'little',
'lamb',
'whose',
'fleece',
'was',
'white',
'as',
'snow',
'.',
'And',
'everywhere',
'that',
'Mary',
'went',
'Mary',
'went',
',',
'MARY',
'went',
'.',
'Everywhere',
'that',
'mary',
'went',
',',
'The',
'lamb',
'was',
'sure',
'to',
'go']
###Markdown
You'll note that NLTK includes apostrophes at the beginning of words and returns all punctuation markings.
###Code
print(tokens[5])
print(tokens[12])
print(tokens[13])
###Output
,
'Mary
'
###Markdown
We wrap it into a utility function to handle these
###Code
def tokenize(text, preserve_case=True):
punctuation = set(string.punctuation)
text_words = []
for word in nltk.word_tokenize(text):
# Remove any punctuation characters present in the beginning of the word
while len(word) > 0 and word[0] in punctuation:
word = word[1:]
# Remove any punctuation characters present in the end of the word
while len(word) > 0 and word[-1] in punctuation:
word = word[:-1]
if len(word) > 0:
if preserve_case:
text_words.append(word)
else:
text_words.append(word.lower())
return text_words
text_words = tokenize(text, False)
text_words
###Output
_____no_output_____
###Markdown
We can get a quick one-hot encoded version using pandas:
###Code
one_hot = pd.get_dummies(text_words)
###Output
_____no_output_____
###Markdown
Which provides us with a DataFrame where each column corresponds to an individual unique word and each row to a word in our text.
###Code
temp = one_hot.astype('str')
temp[temp=='0'] = ""
temp = pd.DataFrame(temp)
temp
###Output
_____no_output_____
###Markdown
From here we can easily generate a mapping between the words and their numerical ids
###Code
word_dict = dict(zip(one_hot.columns, np.arange(one_hot.shape[1])))
word_dict
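# the reverse mapping lets us go from ids back to words
id_dict = {i: w for w, i in word_dict.items()}
id_dict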
###Output
_____no_output_____
###Markdown
Allowing us to easily transform from words to ids and back Bag of Words From the one-hot encoded representation we can easily obtain the bag of words version of our document
###Code
pd.DataFrame(one_hot.sum(), columns=['Count'])
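# The same counts as a dense dictionary (see the note on dictionary
# representations below):
Counter(text_words)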
###Output
_____no_output_____
###Markdown
A more general representation would be to use a dictionary mapping each word to the number of times it occurs. This has the added advantage of being a dense representation that doesn't waste any space Stemming
###Code
words = ['playing', 'loved', 'ran', 'river', 'friendships', 'misunderstanding', 'trouble', 'troubling']
stemmers = {
'LancasterStemmer' : nltk.stem.LancasterStemmer(),
'PorterStemmer' : nltk.stem.PorterStemmer(),
'RegexpStemmer' : nltk.stem.RegexpStemmer('ing$|s$|e$|able$'),
'SnowballStemmer' : nltk.stem.SnowballStemmer('english')
}
matrix = []
for word in words:
row = []
for stemmer in stemmers:
stem = stemmers[stemmer]
row.append(stem.stem(word))
matrix.append(row)
comparison = pd.DataFrame(matrix, index=words, columns=stemmers.keys())
comparison
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
wordnet = nltk.stem.WordNetLemmatizer()
results_n = [wordnet.lemmatize(word, 'n') for word in words]
results_v = [wordnet.lemmatize(word, 'v') for word in words]
comparison['WordNetLemmatizer Noun'] = results_n
comparison['WordNetLemmatizer Verb'] = results_v
comparison
###Output
_____no_output_____
###Markdown
Stopwords NLTK provides stopwords for 23 different languages
###Code
os.listdir('/Users/bgoncalves/nltk_data/corpora/stopwords/')
###Output
_____no_output_____
###Markdown
And we can easily use them to filter out meaningless words
###Code
stop_words = set(stopwords.words('english'))
tokens = tokenize(text)
filtered_sentence = [word if word.lower() not in stop_words else "" for word in tokens]
pd.DataFrame((zip(tokens, filtered_sentence)), columns=['Original', 'Filtered']).set_index('Original')
###Output
_____no_output_____
###Markdown
N-Grams
###Code
def get_ngrams(text, length):
from nltk.util import ngrams
n_grams = ngrams(tokenize(text), length)
return [ ' '.join(grams) for grams in n_grams]
get_ngrams(text.lower(), 2)
###Output
_____no_output_____
###Markdown
Collocations
###Code
bigrams = BigramCollocationFinder.from_words(tokenize(text, False))
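# score each bigram with the likelihood-ratio association measure
# (higher scores indicate stronger collocations)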
scored = bigrams.score_ngrams(BigramAssocMeasures.likelihood_ratio)
scored
###Output
_____no_output_____
###Markdown
TF/IDF NLTK has the TextCollection object that allows us to easily compute tf-idf scores from a given corpus. We generate a small corpus by splitting our text by sentence
###Code
corpus = text.split('.')
###Output
_____no_output_____
###Markdown
We have 4 documents in our corpus
###Code
len(corpus)
###Output
_____no_output_____
###Markdown
NLTK expects the corpus to be tokenized so we do that now
###Code
corpus = [tokenize(doc, preserve_case=False) for doc in corpus]
corpus
###Output
_____no_output_____
###Markdown
We initialize the TextCollection object with our corpus
###Code
nlp = TextCollection(corpus)
###Output
_____no_output_____
###Markdown
This object provides us with a great deal of functionality, like total frequency counts
###Code
nlp.plot()
###Output
_____no_output_____
###Markdown
Individual tf/idf scores, etc
###Code
# the corpus was lowercased during tokenization, so we query with a lowercase token
nlp.tf_idf('mary', corpus[3])
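# tf and idf are also exposed separately
nlp.idf('mary')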
###Output
_____no_output_____
###Markdown
To get the full TF/IDF scores for our corpus, we do
###Code
TFIDF = []
for doc in corpus:
current = {}
for token in doc:
current[token] = nlp.tf_idf(token, doc)
TFIDF.append(current)
TFIDF
###Output
_____no_output_____
###Markdown
NLP with Deep Learning for EveryoneFoundations of NLP Bruno Gonรงalves www.data4sci.com @bgoncalves, @data4sci
###Code
import warnings
warnings.filterwarnings('ignore')
import os
import gzip
from collections import Counter
from pprint import pprint
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import string
import nltk
from nltk.corpus import stopwords
from nltk.text import TextCollection
from nltk.collocations import BigramCollocationFinder
from nltk.metrics.association import BigramAssocMeasures
import sklearn
from sklearn.manifold import TSNE
from tqdm import tqdm
tqdm.pandas()
import watermark
%load_ext watermark
%matplotlib inline
###Output
_____no_output_____
###Markdown
We start by print out the versions of the libraries we're using for future reference
###Code
%watermark -n -v -m -g -iv
###Output
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
Compiler : Clang 10.0.0
OS : Darwin
Release : 20.6.0
Machine : x86_64
Processor : i386
CPU cores : 16
Architecture: 64bit
Git hash: b709ad9ae690c7315eea98db54a42cc9968562f2
matplotlib: 3.3.2
watermark : 2.1.0
json : 2.0.9
numpy : 1.19.2
pandas : 1.1.3
sklearn : 0.0
nltk : 3.5
###Markdown
Load default figure style
###Code
plt.style.use('./d4sci.mplstyle')
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
One-Hot Encoding
###Code
text = """Mary had a little lamb, little lamb,
little lamb. 'Mary' had a little lamb
whose fleece was white as snow.
And everywhere that Mary went
Mary went, MARY went. Everywhere
that mary went,
The lamb was sure to go"""
###Output
_____no_output_____
###Markdown
The first step is to tokenize the text. NLTK provides us with a convenient tokenizer function we can use. It is also smart enough to be able to handle different languages
###Code
tokens = nltk.word_tokenize(text, 'english')
pprint(tokens)
###Output
['Mary',
'had',
'a',
'little',
'lamb',
',',
'little',
'lamb',
',',
'little',
'lamb',
'.',
"'Mary",
"'",
'had',
'a',
'little',
'lamb',
'whose',
'fleece',
'was',
'white',
'as',
'snow',
'.',
'And',
'everywhere',
'that',
'Mary',
'went',
'Mary',
'went',
',',
'MARY',
'went',
'.',
'Everywhere',
'that',
'mary',
'went',
',',
'The',
'lamb',
'was',
'sure',
'to',
'go']
###Markdown
You'll note that NLTK includes apostrophes at the beginning of words and returns all punctuation markings.
###Code
print(tokens[5])
print(tokens[12])
print(tokens[13])
###Output
,
'Mary
'
###Markdown
We wrap it into a utility function to handle these
###Code
def tokenize(text, preserve_case=True):
punctuation = set(string.punctuation)
text_words = []
for word in nltk.word_tokenize(text, 'english'):
# Remove any punctuation characters present in the beginning of the word
while len(word) > 0 and word[0] in punctuation:
word = word[1:]
# Remove any punctuation characters present in the end of the word
while len(word) > 0 and word[-1] in punctuation:
word = word[:-1]
if len(word) > 0:
if preserve_case:
text_words.append(word)
else:
text_words.append(word.lower())
return text_words
text_words = tokenize(text, preserve_case=False)
text_words
###Output
_____no_output_____
###Markdown
We can get a quick one-hot encoded version using pandas:
###Code
one_hot = pd.get_dummies(text_words)
###Output
_____no_output_____
###Markdown
Which provides us with a DataFrame where each column corresponds to an individual unique word and each row to a word in our text.
###Code
temp = one_hot.astype('str')
temp[temp=='0'] = ""
temp = pd.DataFrame(temp)
temp
###Output
_____no_output_____
###Markdown
From here can easily generate a mapping between the words and their numerical id
###Code
word_dict = dict(zip(one_hot.columns, np.arange(one_hot.shape[1])))
word_dict
###Output
_____no_output_____
###Markdown
Allowing us to easily transform from words to ids and back Bag of Words From the one-hot encoded representation we can easily obtain the bag of words version of our document
###Code
pd.DataFrame(one_hot.sum(), columns=['Count'])
Counter(text_words)
###Output
_____no_output_____
###Markdown
A more general representation would be to use a dictionary mapping each word to the number of times it occurs. This has the added advantage of being a dense representation that doesn't waste any space Stemming
###Code
words = ['playing', 'loved', 'ran', 'river', 'friendships',
'misunderstanding', 'trouble', 'troubling']
stemmers = {
'LancasterStemmer' : nltk.stem.LancasterStemmer(),
'PorterStemmer' : nltk.stem.PorterStemmer(),
'RegexpStemmer' : nltk.stem.RegexpStemmer('ing$|s$|e$|able$'),
'SnowballStemmer' : nltk.stem.SnowballStemmer('english')
}
matrix = []
for word in words:
row = []
for stemmer in stemmers:
stem = stemmers[stemmer]
row.append(stem.stem(word))
matrix.append(row)
comparison = pd.DataFrame(matrix, index=words, columns=stemmers.keys())
comparison
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
wordnet = nltk.stem.WordNetLemmatizer()
results_n = [wordnet.lemmatize(word, 'n') for word in words]
results_v = [wordnet.lemmatize(word, 'v') for word in words]
comparison['WordNetLemmatizer Noun'] = results_n
comparison['WordNetLemmatizer Verb'] = results_v
comparison
###Output
_____no_output_____
###Markdown
Stopwords NLTK provides stopwords for 23 different languages
###Code
os.listdir('/Users/bgoncalves/nltk_data/corpora/stopwords/')
###Output
_____no_output_____
###Markdown
And we can easily use them to filter out meaningless words
###Code
stop_words = set(stopwords.words('english'))
tokens = tokenize(text)
filtered_sentence = [word if word.lower() not in stop_words else "" for word in tokens]
pd.DataFrame((zip(tokens, filtered_sentence)), columns=['Original', 'Filtered']).set_index('Original')
###Output
_____no_output_____
###Markdown
N-Grams
###Code
def get_ngrams(text, length):
from nltk.util import ngrams
n_grams = ngrams(tokenize(text), length)
return [' '.join(grams) for grams in n_grams]
get_ngrams(text.lower(), 2)
###Output
_____no_output_____
###Markdown
Collocations
###Code
bigrams = BigramCollocationFinder.from_words(tokenize(text, False))
scored = bigrams.score_ngrams(BigramAssocMeasures.likelihood_ratio)
scored
###Output
_____no_output_____
###Markdown
TF/IDF NLTK has the TextCollection object that allows us to easily compute tf-idf scores from a given corpus. We generate a small corpus by splitting our text by sentence
###Code
corpus = text.split('.')
###Output
_____no_output_____
###Markdown
We have 4 documents in our corpus
###Code
len(corpus)
###Output
_____no_output_____
###Markdown
NLTK expects the corpus to be tokenized so we do that now
###Code
corpus = [tokenize(doc, preserve_case=False) for doc in corpus]
corpus
###Output
_____no_output_____
###Markdown
We initialize the TextCollection object with our corpus
###Code
nlp = TextCollection(corpus)
###Output
_____no_output_____
###Markdown
This object provides us with a great deal of functionality, like total frequency counts
###Code
nlp.plot()
###Output
_____no_output_____
###Markdown
Individual tf/idf scores, etc
###Code
nlp.tf_idf('mary', corpus[3])
###Output
_____no_output_____
###Markdown
To get the full TF/IDF scores for our corpus, we do
###Code
TFIDF = []
for doc in corpus:
current = {}
for token in doc:
current[token] = nlp.tf_idf(token, doc)
TFIDF.append(current)
TFIDF
###Output
_____no_output_____
###Markdown
NLP with Deep Learning for EveryoneFoundations of NLP Bruno Gonรงalves www.data4sci.com @bgoncalves, @data4sci
###Code
import warnings
warnings.filterwarnings('ignore')
import os
import gzip
from collections import Counter
from pprint import pprint
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import string
import nltk
from nltk.corpus import stopwords
from nltk.text import TextCollection
from nltk.collocations import BigramCollocationFinder
from nltk.metrics.association import BigramAssocMeasures
import sklearn
from sklearn.manifold import TSNE
from tqdm import tqdm
tqdm.pandas()
import watermark
%load_ext watermark
%matplotlib inline
###Output
_____no_output_____
###Markdown
We start by print out the versions of the libraries we're using for future reference
###Code
%watermark -n -v -m -g -iv
###Output
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
Compiler : Clang 10.0.0
OS : Darwin
Release : 20.5.0
Machine : x86_64
Processor : i386
CPU cores : 16
Architecture: 64bit
Git hash: 85b3983db4844c9484625cbfb73d0803fa7a1264
json : 2.0.9
sklearn : 0.24.1
pandas : 1.1.3
matplotlib: 3.3.2
nltk : 3.5
numpy : 1.19.2
watermark : 2.1.0
###Markdown
Load default figure style
###Code
plt.style.use('./d4sci.mplstyle')
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
One-Hot Encoding
###Code
text = """Mary had a little lamb, little lamb,
little lamb. 'Mary' had a little lamb
whose fleece was white as snow.
And everywhere that Mary went
Mary went, MARY went. Everywhere
that mary went,
The lamb was sure to go"""
###Output
_____no_output_____
###Markdown
The first step is to tokenize the text. NLTK provides us with a convenient tokenizer function we can use. It is also smart enough to be able to handle different languages
###Code
tokens = nltk.word_tokenize(text, 'english')
pprint(tokens)
###Output
['Mary',
'had',
'a',
'little',
'lamb',
',',
'little',
'lamb',
',',
'little',
'lamb',
'.',
"'Mary",
"'",
'had',
'a',
'little',
'lamb',
'whose',
'fleece',
'was',
'white',
'as',
'snow',
'.',
'And',
'everywhere',
'that',
'Mary',
'went',
'Mary',
'went',
',',
'MARY',
'went',
'.',
'Everywhere',
'that',
'mary',
'went',
',',
'The',
'lamb',
'was',
'sure',
'to',
'go']
###Markdown
You'll note that NLTK includes apostrophes at the beginning of words and returns all punctuation markings.
###Code
print(tokens[5])
print(tokens[12])
print(tokens[13])
###Output
,
'Mary
'
###Markdown
We wrap it into a utility function to handle these
###Code
def tokenize(text, preserve_case=True):
punctuation = set(string.punctuation)
text_words = []
for word in nltk.word_tokenize(text):
# Remove any punctuation characters present in the beginning of the word
while len(word) > 0 and word[0] in punctuation:
word = word[1:]
# Remove any punctuation characters present in the end of the word
while len(word) > 0 and word[-1] in punctuation:
word = word[:-1]
if len(word) > 0:
if preserve_case:
text_words.append(word)
else:
text_words.append(word.lower())
return text_words
text_words = tokenize(text, False)
text_words
###Output
_____no_output_____
###Markdown
We can get a quick one-hot encoded version using pandas:
###Code
one_hot = pd.get_dummies(text_words)
###Output
_____no_output_____
###Markdown
Which provides us with a DataFrame where each column corresponds to an individual unique word and each row to a word in our text.
###Code
temp = one_hot.astype('str')
temp[temp=='0'] = ""
temp = pd.DataFrame(temp)
temp
###Output
_____no_output_____
###Markdown
From here can easily generate a mapping between the words and their numerical id
###Code
word_dict = dict(zip(one_hot.columns, np.arange(one_hot.shape[1])))
word_dict
###Output
_____no_output_____
###Markdown
Allowing us to easily transform from words to ids and back Bag of Words From the one-hot encoded representation we can easily obtain the bag of words version of our document
###Code
pd.DataFrame(one_hot.sum(), columns=['Count'])
###Output
_____no_output_____
###Markdown
A more general representation would be to use a dictionary mapping each word to the number of times it occurs. This has the added advantage of being a dense representation that doesn't waste any space Stemming
###Code
words = ['playing', 'loved', 'ran', 'river', 'friendships',
'misunderstanding', 'trouble', 'troubling']
stemmers = {
'LancasterStemmer' : nltk.stem.LancasterStemmer(),
'PorterStemmer' : nltk.stem.PorterStemmer(),
'RegexpStemmer' : nltk.stem.RegexpStemmer('ing$|s$|e$|able$'),
'SnowballStemmer' : nltk.stem.SnowballStemmer('english')
}
matrix = []
for word in words:
row = []
for stemmer in stemmers:
stem = stemmers[stemmer]
row.append(stem.stem(word))
matrix.append(row)
comparison = pd.DataFrame(matrix, index=words, columns=stemmers.keys())
comparison
###Output
_____no_output_____
###Markdown
Lemmatization
###Code
wordnet = nltk.stem.WordNetLemmatizer()
results_n = [wordnet.lemmatize(word, 'n') for word in words]
results_v = [wordnet.lemmatize(word, 'v') for word in words]
comparison['WordNetLemmatizer Noun'] = results_n
comparison['WordNetLemmatizer Verb'] = results_v
comparison
###Output
_____no_output_____
###Markdown
Stopwords NLTK provides stopwords for 23 different languages
###Code
os.listdir('/Users/bgoncalves/nltk_data/corpora/stopwords/')
###Output
_____no_output_____
###Markdown
And we can easily use them to filter out meaningless words
###Code
stop_words = set(stopwords.words('english'))
tokens = tokenize(text)
filtered_sentence = [word if word.lower() not in stop_words else "" for word in tokens]
pd.DataFrame((zip(tokens, filtered_sentence)), columns=['Original', 'Filtered']).set_index('Original')
###Output
_____no_output_____
###Markdown
N-Grams
###Code
def get_ngrams(text, length):
from nltk.util import ngrams
n_grams = ngrams(tokenize(text), length)
return [ ' '.join(grams) for grams in n_grams]
get_ngrams(text.lower(), 2)
###Output
_____no_output_____
###Markdown
Collocations
###Code
bigrams = BigramCollocationFinder.from_words(tokenize(text, False))
scored = bigrams.score_ngrams(BigramAssocMeasures.likelihood_ratio)
scored
###Output
_____no_output_____
###Markdown
TF/IDF NLTK has the TextCollection object that allows us to easily compute tf-idf scores from a given corpus. We generate a small corpus by splitting our text by sentence
###Code
corpus = text.split('.')
###Output
_____no_output_____
###Markdown
We have 4 documents in our corpus
###Code
len(corpus)
###Output
_____no_output_____
###Markdown
NLTK expects the corpus to be tokenized so we do that now
###Code
corpus = [tokenize(doc, preserve_case=False) for doc in corpus]
corpus
###Output
_____no_output_____
###Markdown
We initialize the TextCollection object with our corpus
###Code
nlp = TextCollection(corpus)
###Output
_____no_output_____
###Markdown
This object provides us with a great deal of functionality, like total frequency counts
###Code
nlp.plot()
###Output
_____no_output_____
###Markdown
Individual tf/idf scores, etc
###Code
nlp.tf_idf('Mary', corpus[3])
###Output
_____no_output_____
###Markdown
To get the full TF/IDF scores for our corpus, we do
###Code
TFIDF = []
for doc in corpus:
current = {}
for token in doc:
current[token] = nlp.tf_idf(token, doc)
TFIDF.append(current)
TFIDF
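# A hedged cross-check of a single score by hand, assuming NLTK's usual
# definitions (tf = count/len(doc), idf = natural log of N/df):
# from math import log
# df = sum(1 for d in corpus if 'mary' in d)
# (corpus[3].count('mary') / len(corpus[3])) * log(len(corpus) / df)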
###Output
_____no_output_____ |
Sentiment_Analysis_TA_session_Dec_11.ipynb | ###Markdown
We will work with the IMDB dataset, which contains movie reviews from IMDB. Each review is labeled as 1 (for positive) or 0 (for negative) from the rating provided by users together with their reviews.\This dataset is available [here](https://www.kaggle.com/columbine/imdb-dataset-sentiment-analysis-in-csv-format/download). Code referred from [here](https://www.kaggle.com/arunmohan003/sentiment-analysis-using-lstm-pytorch/notebook) Importing Libraries and Data
###Code
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim.lr_scheduler as lr_scheduler
from torch.utils.data import TensorDataset, DataLoader
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from collections import Counter
import string
import re
import seaborn as sns
from tqdm import tqdm
import matplotlib.pyplot as plt
SEED = 1234
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
## upload data files to this colab notebook and read them using the respective paths
df_train = pd.read_csv('/content/Train.csv')
df_val = pd.read_csv('/content/Valid.csv')
df_test = pd.read_csv('/content/Test.csv')
df_train.head()
x_train, y_train = df_train['text'].values, df_train['label'].values
x_val, y_val = df_val['text'].values, df_val['label'].values
x_test, y_test = df_test['text'].values, df_test['label'].values
print(f'shape of train data is {x_train.shape}')
print(f'shape of val data is {x_val.shape}')
print(f'shape of test data is {x_test.shape}')
#plot of positive and negative class count in training set
dd = pd.Series(y_train).value_counts()
sns.barplot(x=np.array(['negative','positive']),y=dd.values)
plt.show()
###Output
_____no_output_____
###Markdown
Pre-Processing Data
###Code
def preprocess_string(s):
""" preprocessing string to remove special characters, white spaces and digits """
s = re.sub(r"[^\w\s]", '', s) # Remove all non-word characters (everything except numbers and letters)
s = re.sub(r"\s+", '', s) # Replace all runs of whitespaces with no space
s = re.sub(r"\d", '', s) # replace digits with no space
return s
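# e.g. preprocess_string("Hello, world!! 123") -> "Helloworld"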
def create_corpus(x_train):
""" creates dictionary of 1000 most frequent words in the training set and assigns token number to the words, returns dictionay (named corpus)"""
word_list = []
stop_words = set(stopwords.words('english'))
for sent in x_train:
for word in sent.lower().split():
word = preprocess_string(word)
if word not in stop_words and word != '':
word_list.append(word)
word_count = Counter(word_list)
# sorting on the basis of most common words
top_words = sorted(word_count, key=word_count.get, reverse=True)[:1000]
# creating a dict
corpus = {w:i+1 for i,w in enumerate(top_words)}
return corpus
def preprocess(x, y, corpus):
""" encodes reviews according to created corpus dictionary"""
x_new = []
for sent in x:
x_new.append([corpus[preprocess_string(word)] for word in sent.lower().split() if preprocess_string(word) in corpus.keys()])
return np.array(x_new), np.array(y)
corpus = create_corpus(x_train)
print(f'Length of vocabulary is {len(corpus)}')
x_train, y_train = preprocess(x_train, y_train, corpus)
x_val, y_val = preprocess(x_val, y_val, corpus)
x_test, y_test = preprocess(x_test, y_test, corpus)
#analysis of word count in reviews
rev_len = [len(i) for i in x_train]
pd.Series(rev_len).hist()
plt.show()
pd.Series(rev_len).describe()
###Output
_____no_output_____
###Markdown
From the above data, it can be seen that the maximum review length is 653 and 75% of the reviews are shorter than 85 words. Furthermore, reviews longer than 300 words are rare (as the histogram shows), so we will cap the maximum review length at 300.
###Code
def padding_(sentences, seq_len):
""" to tackle variable length of sequences: this function prepads reviews with 0 for reviews whose length is less than seq_len, and truncates reviews with length greater than
seq_len by removing words after seq_len in review"""
features = np.zeros((len(sentences), seq_len),dtype=int)
for ii, review in enumerate(sentences):
diff = seq_len - len(review)
if diff > 0:
features[ii,diff:] = np.array(review)
else:
features[ii] = np.array(review[:seq_len])
return features
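# e.g. padding_([[5, 7]], 4) -> array([[0, 0, 5, 7]])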
# maximum review length (300)
x_train_pad = padding_(x_train,300)
x_val_pad = padding_(x_val,300)
x_test_pad = padding_(x_test, 300)
is_cuda = torch.cuda.is_available()
# use GPU if available
if is_cuda:
device = torch.device("cuda")
print("GPU is available")
else:
device = torch.device("cpu")
print("GPU not available, CPU used")
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(x_train_pad), torch.from_numpy(y_train))
valid_data = TensorDataset(torch.from_numpy(x_val_pad), torch.from_numpy(y_val))
batch_size = 50
# dataloaders
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
# one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)  # the builtin next() works across PyTorch versions
print('Sample input size: ', sample_x.size())
print('Sample input: \n', sample_x)
print('Sample output: \n', sample_y)
###Output
Sample input size: torch.Size([50, 300])
Sample input:
tensor([[ 0, 0, 0, ..., 572, 1, 1],
[ 0, 0, 0, ..., 38, 457, 87],
[ 0, 0, 0, ..., 168, 841, 253],
...,
[ 0, 0, 0, ..., 171, 5, 225],
[ 0, 0, 0, ..., 9, 446, 2],
[ 0, 0, 0, ..., 917, 179, 95]])
Sample output:
tensor([1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0,
0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,
0, 1])
###Markdown
Image Credit- [Article](http://dprogrammer.org/rnn-lstm-gru) RNN
###Code
class RNN(nn.Module):
def __init__(self,no_layers,vocab_size,hidden_dim,embedding_dim,output_dim):
super(RNN,self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.no_layers = no_layers
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.RNN(input_size=embedding_dim,hidden_size=self.hidden_dim,
num_layers=no_layers, batch_first=True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(self.hidden_dim, output_dim)
self.sig = nn.Sigmoid()
def forward(self,x,hidden):
batch_size = x.size(0)
embeds = self.embedding(x)
rnn_out, hidden = self.rnn(embeds, hidden)
rnn_out = rnn_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(rnn_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
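# keep only the prediction at the last time step of each sequence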
sig_out = sig_out[:, -1]
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
return h0
no_layers = 2
vocab_size = len(corpus) + 1 #extra 1 for padding
embedding_dim = 64
output_dim = 1
hidden_dim = 256
model = RNN(no_layers,vocab_size,hidden_dim,embedding_dim,output_dim)
model.to(device)
print(model)
lr=0.001 #learning rate
criterion = nn.BCELoss() #Binary Cross Entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
def acc(pred,label):
""" function to calculate accuracy """
pred = torch.round(pred.squeeze())
return torch.sum(pred == label.squeeze()).item()
patience_early_stopping = 3 #training will stop if model performance does not improve for this many consecutive epochs
cnt = 0 #counter for checking the patience level
prev_epoch_acc = 0.0 #initializing the previous validation accuracy for the early stopping condition
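# ReduceLROnPlateau shrinks the learning rate by `factor` when the monitored
# quantity (validation accuracy here, hence mode='max') stops improving for `patience` epochs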
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode = 'max', factor = 0.2, patience = 1)
epochs = 10
for epoch in range(epochs):
train_acc = 0.0
model.train()
for inputs, labels in train_loader: #training in batches
inputs, labels = inputs.to(device), labels.to(device)
h = model.init_hidden(batch_size)
model.zero_grad()
output,h = model(inputs,h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
accuracy = acc(output,labels)
train_acc += accuracy
optimizer.step()
val_acc = 0.0
model.eval()
for inputs, labels in valid_loader:
inputs, labels = inputs.to(device), labels.to(device)
val_h = model.init_hidden(batch_size)
output, val_h = model(inputs, val_h)
accuracy = acc(output,labels)
val_acc += accuracy
epoch_train_acc = train_acc/len(train_loader.dataset)
epoch_val_acc = val_acc/len(valid_loader.dataset)
scheduler.step(epoch_val_acc)
if epoch_val_acc > prev_epoch_acc: #check if val accuracy for current epoch has improved compared to previous epoch
cnt = 0 #if accuracy improves, reset the counter to 0
else: #otherwise increment current counter
cnt += 1
prev_epoch_acc = epoch_val_acc
print(f'Epoch {epoch+1}')
print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}')
if cnt == patience_early_stopping:
print(f"early stopping as test accuracy did not improve for {patience_early_stopping} consecutive epochs")
break
def predict_text(text):
word_seq = np.array([corpus[preprocess_string(word)] for word in text.split()
if preprocess_string(word) in corpus.keys()])
word_seq = np.expand_dims(word_seq,axis=0)
pad = torch.from_numpy(padding_(word_seq,300))
input = pad.to(device)
batch_size = 1
h = model.init_hidden(batch_size)
#h = tuple([each.data for each in h])
output, h = model(input, h)
return(output.item())
index = 30
print(df_test['text'][index])
print('='*70)
print(f'Actual sentiment is : {df_test["label"][index]}')
print('='*70)
prob = predict_text(df_test['text'][index])
status = "positive" if prob > 0.5 else "negative"
prob = (1 - prob) if status == "negative" else prob
print(f'Predicted sentiment is {status} with a probability of {prob}')
###Output
This movie is good for entertainment purposes, but it is not historically reliable. If you are looking for a movie and thinking to yourself `Oh I want to learn more about Custer's life and his last stand', do not rent `They Died with Their Boots On'. But, if you would like to watch a movie for the enjoyment of an older western film, with a little bit of romance and just for a good story, this is a fun movie to watch.<br /><br />The story starts out with Custer's (Errol Flynn) first day at West Point. Everyone loves his charming personality which allows him to get away with most everything. The movie follows his career from West Point and his many battles, including his battle in the Civil War. The movie ends with his last stand at Little Big Horn. In between the battle scenes, he finds love and marriage with Libby (Olivia De Havilland).<br /><br />Errol Flynn portrays the arrogant, but suave George Armstrong Custer well. Olivia De Havilland plays the cute, sweet Libby very well, especially in the flirting scene that Custer and Libby first meet. Their chemistry on screen made you believe in their romance. The acting in general was impressive, especially the comedic role ( although stereotypical) of Callie played by Hattie McDaniel. Her character will definitely make you laugh.<br /><br />The heroic war music brought out the excitement of the battle scenes. The beautiful costumes set the tone of the era. The script, at times, was corny, although the movie was still enjoyable to watch. The director's portrayal of Custer was as a hero and history shows this is debatable. Some will watch this movie and see Custer as a hero. Others will watch this movie and learn hate him.<br /><br />I give it a thumbs up for this 1942 western film.
======================================================================
Actual sentiment is : 1
======================================================================
Predicted sentiment is negative with a probability of 0.5339558720588684
###Markdown
GRU
###Code
class GRU(nn.Module):
def __init__(self,no_layers,vocab_size,hidden_dim,embedding_dim):
super(GRU,self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.no_layers = no_layers
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.gru = nn.GRU(input_size=embedding_dim,hidden_size=self.hidden_dim,
num_layers=no_layers, batch_first=True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(self.hidden_dim, output_dim)
self.sig = nn.Sigmoid()
def forward(self,x,hidden):
batch_size = x.size(0)
embeds = self.embedding(x)
gru_out, hidden = self.gru(embeds, hidden)
gru_out = gru_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(gru_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1]
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
return h0
no_layers = 2
vocab_size = len(corpus) + 1 #extra 1 for padding
embedding_dim = 64
output_dim = 1
hidden_dim = 256
model = GRU(no_layers,vocab_size,hidden_dim,embedding_dim)
#moving to gpu
model.to(device)
print(model)
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# function to calculate accuracy
def acc(pred,label):
pred = torch.round(pred.squeeze())
return torch.sum(pred == label.squeeze()).item()
patience_early_stopping = 3 #training will stop if model performance does not improve for this many consecutive epochs
cnt = 0 #counter for checking the patience level
prev_epoch_acc = 0.0 #initializing the previous validation accuracy for the early stopping condition
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode = 'max', factor = 0.2, patience = 1)
epochs = 10
for epoch in range(epochs):
train_acc = 0.0
model.train()
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
h = model.init_hidden(batch_size)
model.zero_grad()
output,h = model(inputs,h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
accuracy = acc(output,labels)
train_acc += accuracy
optimizer.step()
val_acc = 0.0
model.eval()
for inputs, labels in valid_loader:
inputs, labels = inputs.to(device), labels.to(device)
val_h = model.init_hidden(batch_size)
output, val_h = model(inputs, val_h)
accuracy = acc(output,labels)
val_acc += accuracy
epoch_train_acc = train_acc/len(train_loader.dataset)
epoch_val_acc = val_acc/len(valid_loader.dataset)
scheduler.step(epoch_val_acc)
if epoch_val_acc > prev_epoch_acc: #check if val accuracy for current epoch has improved compared to previous epoch
cnt = 0 #if accuracy improves, reset the counter to 0
else: #otherwise increment current counter
cnt += 1
prev_epoch_acc = epoch_val_acc
print(f'Epoch {epoch+1}')
print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}')
if cnt == patience_early_stopping:
print(f"early stopping as test accuracy did not improve for {patience_early_stopping} consecutive epochs")
break
index = 30
print(df_test['text'][index])
print('='*70)
print(f'Actual sentiment is : {df_test["label"][index]}')
print('='*70)
prob = predict_text(df_test['text'][index])
status = "positive" if prob > 0.5 else "negative"
prob = (1 - prob) if status == "negative" else prob
print(f'Predicted sentiment is {status} with a probability of {prob}')
###Output
This movie is good for entertainment purposes, but it is not historically reliable. If you are looking for a movie and thinking to yourself `Oh I want to learn more about Custer's life and his last stand', do not rent `They Died with Their Boots On'. But, if you would like to watch a movie for the enjoyment of an older western film, with a little bit of romance and just for a good story, this is a fun movie to watch.<br /><br />The story starts out with Custer's (Errol Flynn) first day at West Point. Everyone loves his charming personality which allows him to get away with most everything. The movie follows his career from West Point and his many battles, including his battle in the Civil War. The movie ends with his last stand at Little Big Horn. In between the battle scenes, he finds love and marriage with Libby (Olivia De Havilland).<br /><br />Errol Flynn portrays the arrogant, but suave George Armstrong Custer well. Olivia De Havilland plays the cute, sweet Libby very well, especially in the flirting scene that Custer and Libby first meet. Their chemistry on screen made you believe in their romance. The acting in general was impressive, especially the comedic role ( although stereotypical) of Callie played by Hattie McDaniel. Her character will definitely make you laugh.<br /><br />The heroic war music brought out the excitement of the battle scenes. The beautiful costumes set the tone of the era. The script, at times, was corny, although the movie was still enjoyable to watch. The director's portrayal of Custer was as a hero and history shows this is debatable. Some will watch this movie and see Custer as a hero. Others will watch this movie and learn hate him.<br /><br />I give it a thumbs up for this 1942 western film.
======================================================================
Actual sentiment is : 1
======================================================================
Predicted sentiment is positive with a probability of 0.9998726844787598
###Markdown
LSTM
###Code
class LSTM(nn.Module):
def __init__(self,no_layers,vocab_size,hidden_dim,embedding_dim):
super(LSTM,self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.no_layers = no_layers
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_dim) #embedding layer
self.lstm = nn.LSTM(input_size=embedding_dim,hidden_size=self.hidden_dim,
num_layers=no_layers, batch_first=True) #lstm layer
self.dropout = nn.Dropout(0.3) # dropout layer
self.fc = nn.Linear(self.hidden_dim, output_dim) #fully connected layer
self.sig = nn.Sigmoid() #sigmoid activation
def forward(self,x,hidden):
batch_size = x.size(0)
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1]
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state and cell state for LSTM '''
h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
c0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
hidden = (h0,c0)
return hidden
no_layers = 2
vocab_size = len(corpus) + 1 #extra 1 for padding
embedding_dim = 64
output_dim = 1
hidden_dim = 256
model = LSTM(no_layers,vocab_size,hidden_dim,embedding_dim)
model.to(device)
print(model)
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# function to calculate accuracy
def acc(pred,label):
pred = torch.round(pred.squeeze())
return torch.sum(pred == label.squeeze()).item()
patience_early_stopping = 3 #training will stop if model performance does not improve for this many consecutive epochs
cnt = 0 #counter for checking the patience level
prev_epoch_acc = 0.0 #initializing the previous validation accuracy for the early stopping condition
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode = 'max', factor = 0.2, patience = 1)
epochs = 10
for epoch in range(epochs):
train_acc = 0.0
model.train()
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
h = model.init_hidden(batch_size)
model.zero_grad()
output,h = model(inputs,h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
accuracy = acc(output,labels)
train_acc += accuracy
optimizer.step()
val_acc = 0.0
model.eval()
for inputs, labels in valid_loader:
val_h = model.init_hidden(batch_size)
inputs, labels = inputs.to(device), labels.to(device)
output, val_h = model(inputs, val_h)
accuracy = acc(output,labels)
val_acc += accuracy
epoch_train_acc = train_acc/len(train_loader.dataset)
epoch_val_acc = val_acc/len(valid_loader.dataset)
scheduler.step(epoch_val_acc)
if epoch_val_acc > prev_epoch_acc: #check if val accuracy for current epoch has improved compared to previous epoch
cnt = 0 #if accuracy improves, reset the counter to 0
else: #otherwise increment current counter
cnt += 1
prev_epoch_acc = epoch_val_acc
print(f'Epoch {epoch+1}')
print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}')
if cnt == patience_early_stopping:
print(f"early stopping as test accuracy did not improve for {patience_early_stopping} consecutive epochs")
break
index = 30
print(df_test['text'][index])
print('='*70)
print(f'Actual sentiment is : {df_test["label"][index]}')
print('='*70)
prob = predict_text(df_test['text'][index])
status = "positive" if prob > 0.5 else "negative"
prob = (1 - prob) if status == "negative" else prob
print(f'Predicted sentiment is {status} with a probability of {prob}')
###Output
_____no_output_____ |
IntroODEs/intro_to_ODEs2.ipynb | ###Markdown
Exercise 3 - Gravity! Continuing from the previous notebook, now we're going to try a more difficult problem: gravity! We need to do this in two dimensions, so now we've got more variables. It's still ordinary differential equations though. The only derivative is a time derivative. Now we want to solve a vector equation:$$\vec{F~} = - \frac{G~M~m}{r^2} \hat{r~}$$We'll take this to be the force on $m$, so $F = m a$. In terms of the unnormalized vector $\vec{r~}$, we have$$\vec{a~} = - \frac{G~M}{r^2} \frac{\vec{r~}}{r}$$where $r$ is the length of $\vec{r~}$. So how do we put this into the form scipy expects? We define the position of the little object by$$\vec{r~} = (x, y)$$Then the length is$$r = \sqrt{x^2 + y^2}$$We have second-order differential equations for both $x$ and $y$. We need four variables $x$, $y$, $v_x$, $v_y$. We also need to rescale our variables. Kilograms, meters, and seconds aren't great for describing orbits. We'll get a lot of huge numbers. Let's define a rescaling:$$t = T~\tau$$$$r = R~\rho$$So the differential equation looks something like$$\frac{d^2 r}{d t^2} = \frac{R}{T^2} \frac{d^2 \rho}{d \tau^2} = - \frac{G~M}{(R~\rho)^2}$$or$$\frac{d^2 \rho}{d \tau^2} = - \left( \frac{G~M~T^2}{R^3}\right) ~ \frac{1}{\rho^2}$$All the units have been collected into one single factor. If we choose $R = 1~\mathrm{AU}$ and $T = 1~\mathrm{yr}$, this factor becomes a nice, manageable number: by Kepler's third law it is exactly $4\pi^2 \approx 39.5$.
###Code
# Calculate the factor above
gee_msol = gravitational_constant*mass_sun  # constants carried over from the previous notebook
scale_factor = (gee_msol/au/au/au) * year * year
print(scale_factor)
###Output
_____no_output_____
###Markdown
Now we're ready to define the gravitational acceleration and start some calculations.
###Code
# Gravitational acceleration in 2D
def fgrav(vec, t):
x, y, vx, vy = vec
r = # FIXME: Calculate the distance from x and y
acc = # FIXME: Calculate the magnitude of the acceleration
return (vx, vy, -acc*x/r, -acc*y/r) # Turn the calculations above into the acceleration vector
r_init = (1., 0., 0., 1.) # Starting values at t = 0
times = np.linspace(0., 4., 10000)
rarr = odeint(fgrav, r_init, times)
plt.figure(figsize=(8,8))
plt.scatter(rarr[:,0], rarr[:,1], s=5)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
###Output
_____no_output_____
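###Markdown
For reference, one possible completion of the FIXME lines above, written as a separate function so the exercise stays intact (it assumes the `scale_factor` computed earlier, with distances in AU and times in years):
###Code
def fgrav_solution(vec, t):
    x, y, vx, vy = vec
    r = np.sqrt(x**2 + y**2)     # distance from the sun
    acc = scale_factor / r**2    # magnitude of the gravitational acceleration
    return (vx, vy, -acc*x/r, -acc*y/r)
###Output
_____no_output_____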
###Markdown
We just guessed at the initial conditions, and we get a very elliptical orbit. Using the formula for acceleration on a circle$$v^2/r = G~M/r^2$$So the velocity on a circular orbit should be$$v = \sqrt{G~M/r}$$We can use that to get the initial conditions correct. **Exercise 3.1**: Fill in the initial condition below to get a circular orbit at $r = 1$.
###Code
r_init1 = (1., 0., 0., 1.) # FIXME: Change the last value
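# Hint (one possible answer): for a circular orbit at r = 1 the speed is
# v = sqrt(scale_factor), i.e. r_init1 = (1., 0., 0., np.sqrt(scale_factor))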
times = np.linspace(0., 4., 10000)
rarr1 = odeint(fgrav, r_init1, times)
plt.figure(figsize=(8,8))
plt.scatter(rarr1[:,0], rarr1[:,1], s=5)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
###Output
_____no_output_____
###Markdown
**Exercise 3.2**: How long does a single orbit take? Does this make sense? **Exercise 3.3**: Play with the conditions below, shooting the planet toward the sun but offset a bit in $y$ so it doesn't go straight through the center. What kind of shapes do you get? Note that we use a different `times` array than the others, so orbits that go way off can be stopped early if you want.
###Code
r_init2 = (4., 0.5, -10., 0.) # FIXME: Try different values
times2 = np.linspace(0., 2, 1000)
rarr2 = odeint(fgrav, r_init2, times2)
plt.figure(figsize=(8,8))
plt.scatter(rarr2[:,0], rarr2[:,1], s=5)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
###Output
_____no_output_____
###Markdown
**Exercise 3.4**: I've defined the distance from Mars to the Sun in kilometers as `mars_distance`. Define `r_mars` in our units (the ones where the Earth is at $r = 1$), and change the initial conditions below to add Mars to the plot.
###Code
r_init3 = (1, 0., 0., 1.) # FIXME: Set correct x and vy for Mars
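# One possible approach, assuming mars_distance is given in kilometers as stated above:
# r_mars = mars_distance * 1e3 / au           # km -> m -> our AU-based units
# v_mars = np.sqrt(scale_factor / r_mars)     # circular-orbit speed at that radius
# r_init3 = (r_mars, 0., 0., v_mars)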
rarr3 = odeint(fgrav, r_init3, times)
plt.figure(figsize=(8,8))
plt.scatter(rarr1[:,0], rarr1[:,1], s=5)
plt.scatter(rarr3[:,0], rarr3[:,1], c='r', s=4)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
###Output
_____no_output_____ |
site/en/tutorials/distribute/dtensor_ml_tutorial.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Distributed Training with DTensors Overview DTensor provides a way for you to distribute the training of your model across devices to improve efficiency, reliability and scalability. For more details on DTensor concepts, see [The DTensor Programming Guide](https://www.tensorflow.org/guide/dtensor_overview). In this tutorial, you will train a Sentiment Analysis model with DTensor. Three distributed training schemes are demonstrated with this example: - Data Parallel training, where the training samples are sharded (partitioned) to devices. - Model Parallel training, where the model variables are sharded to devices. - Spatial Parallel training, where the features of input data are sharded to devices. (Also known as [Spatial Partitioning](https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-on-large-images-and-3d-volumes-with-spatial-partitioning-on-cloud-tpus)) The training portion of this tutorial is inspired by [A Kaggle guide on Sentiment Analysis](https://www.kaggle.com/code/anasofiauzsoy/yelp-review-sentiment-analysis-tensorflow-tfds/notebook). To learn about the complete training and evaluation workflow (without DTensor), refer to that notebook. This tutorial will walk through the following steps: - First start with some data cleaning to obtain a `tf.data.Dataset` of tokenized sentences and their polarity. - Next build an MLP model with custom Dense and BatchNorm layers. Use a `tf.Module` to track the inference variables. The model constructor takes additional `Layout` arguments to control the sharding of variables. - For training, you will first use data parallel training together with `tf.experimental.dtensor`'s checkpoint feature. Then continue with Model Parallel Training and Spatial Parallel Training. - The final section briefly describes the interaction between `tf.saved_model` and `tf.experimental.dtensor` as of TensorFlow 2.9. Setup DTensor is part of the TensorFlow 2.9.0 release.
###Code
!pip install --quiet --upgrade --pre tensorflow tensorflow-datasets
###Output
_____no_output_____
###Markdown
Next, import `tensorflow` and `tensorflow.experimental.dtensor`. Then configure TensorFlow to use 8 virtual CPUs. Even though this example uses CPUs, DTensor works the same way on CPU, GPU or TPU devices.
###Code
import tempfile
import numpy as np
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.experimental import dtensor
print('TensorFlow version:', tf.__version__)
def configure_virtual_cpus(ncpu):
phy_devices = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(phy_devices[0], [
tf.config.LogicalDeviceConfiguration(),
] * ncpu)
configure_virtual_cpus(8)
DEVICES = [f'CPU:{i}' for i in range(8)]
tf.config.list_logical_devices('CPU')
###Output
_____no_output_____
###Markdown
Download the dataset Download the IMDB reviews data set to train the sentiment analysis model.
###Code
train_data = tfds.load('imdb_reviews', split='train', shuffle_files=True, batch_size=64)
train_data
###Output
_____no_output_____
###Markdown
Prepare the data First, tokenize the text. Here, use an extension of one-hot encoding, the `'tf_idf'` mode of `tf.keras.layers.TextVectorization`. - For the sake of speed, limit the number of tokens to 1200. - To keep the `tf.Module` simple, run `TextVectorization` as a preprocessing step before the training. The final result of the data cleaning section is a `Dataset` with the tokenized text as `x` and label as `y`. **Note**: Running `TextVectorization` as a preprocessing step is **neither a usual practice nor a recommended one** as doing so assumes the training data fits into the client memory, which is not always the case.
###Code
text_vectorization = tf.keras.layers.TextVectorization(output_mode='tf_idf', max_tokens=1200, output_sequence_length=None)
text_vectorization.adapt(data=train_data.map(lambda x: x['text']))
def vectorize(features):
return text_vectorization(features['text']), features['label']
train_data_vec = train_data.map(vectorize)
train_data_vec
###Output
_____no_output_____
###Markdown
Build a neural network with DTensor Now build a Multi-Layer Perceptron (MLP) network with `DTensor`. The network will use fully connected Dense and BatchNorm layers. `DTensor` expands TensorFlow through single-program multi-data (SPMD) expansion of regular TensorFlow Ops according to the `dtensor.Layout` attributes of their input `Tensor` and variables. Variables of `DTensor`-aware layers are `dtensor.DVariable`, and the constructors of `DTensor`-aware layer objects take additional `Layout` inputs in addition to the usual layer parameters. Note: As of TensorFlow 2.9, Keras layers such as `tf.keras.layer.Dense` and `tf.keras.layer.BatchNormalization` accept `dtensor.Layout` arguments. Refer to the [DTensor Keras Integration Tutorial](/tutorials/distribute/dtensor_keras_tutorial) for more information on using Keras with DTensor. Dense Layer The following custom Dense layer defines 2 layer variables: $W_{ij}$ is the variable for weights, and $b_i$ is the variable for the biases.$$y_j = \sigma(\sum_i x_i W_{ij} + b_j)$$ Layout deduction The preferred layouts for these variables can be deduced from the following observations:
###Code
class Dense(tf.Module):
def __init__(self, input_size, output_size,
init_seed, weight_layout, activation=None):
super().__init__()
random_normal_initializer = tf.function(tf.random.stateless_normal)
self.weight = dtensor.DVariable(
dtensor.call_with_layout(
random_normal_initializer, weight_layout,
shape=[input_size, output_size],
seed=init_seed
))
if activation is None:
activation = lambda x:x
self.activation = activation
# bias is sharded the same way as the last axis of weight.
bias_layout = weight_layout.delete([0])
self.bias = dtensor.DVariable(
dtensor.call_with_layout(tf.zeros, bias_layout, [output_size]))
def __call__(self, x):
y = tf.matmul(x, self.weight) + self.bias
y = self.activation(y)
return y
###Output
_____no_output_____
###Markdown
BatchNorm A batch normalization layer helps avoid collapsing modes while training. In this case, adding batch normalization layers helps model training avoid producing a model that only produces zeros. The constructor of the custom `BatchNorm` layer below does not take a `Layout` argument. This is because `BatchNorm` has no layer variables. This still works with DTensor because 'x', the only input to the layer, is already a DTensor that represents the global batch. Note: With DTensor, the input Tensor 'x' always represents the global batch. Therefore `tf.nn.batch_normalization` is applied to the global batch. This differs from training with `tf.distribute.MirroredStrategy`, where Tensor 'x' only represents the per-replica shard of the batch (the local batch).
###Code
class BatchNorm(tf.Module):
def __init__(self):
super().__init__()
def __call__(self, x, training=True):
if not training:
# This branch is not used in the Tutorial.
pass
mean, variance = tf.nn.moments(x, axes=[0])
return tf.nn.batch_normalization(x, mean, variance, 0.0, 1.0, 1e-5)
###Output
_____no_output_____
###Markdown
A full featured batch normalization layer (such as `tf.keras.layers.BatchNormalization`) will need Layout arguments for its variables.
###Code
def make_keras_bn(bn_layout):
return tf.keras.layers.BatchNormalization(gamma_layout=bn_layout,
beta_layout=bn_layout,
moving_mean_layout=bn_layout,
moving_variance_layout=bn_layout,
fused=False)
###Output
_____no_output_____
###Markdown
Putting Layers Together Next, build a Multi-layer perceptron (MLP) network with the building blocks above. The diagram below shows the axis relationships between the input `x` and the weight matrices for the two `Dense` layers without any DTensor sharding or replication applied. The output of the first `Dense` layer is passed into the input of the second `Dense` layer (after the `BatchNorm`). Therefore, the preferred DTensor sharding for the output of the first `Dense` layer ($\mathbf{W_1}$) and the input of the second `Dense` layer ($\mathbf{W_2}$) is to shard $\mathbf{W_1}$ and $\mathbf{W_2}$ the same way along the common axis $\hat{j}$,$$\mathsf{Layout}[{W_{1,ij}}; i, j] = \left[\hat{i}, \hat{j}\right] \\\mathsf{Layout}[{W_{2,jk}}; j, k] = \left[\hat{j}, \hat{k} \right]$$Even though the layout deduction shows that the 2 layouts are not independent, for the sake of simplicity of the model interface, `MLP` will take 2 `Layout` arguments, one per Dense layer.
###Code
from typing import Tuple
class MLP(tf.Module):
def __init__(self, dense_layouts: Tuple[dtensor.Layout, dtensor.Layout]):
super().__init__()
self.dense1 = Dense(
1200, 48, (1, 2), dense_layouts[0], activation=tf.nn.relu)
self.bn = BatchNorm()
self.dense2 = Dense(48, 2, (3, 4), dense_layouts[1])
def __call__(self, x):
y = x
y = self.dense1(y)
y = self.bn(y)
y = self.dense2(y)
return y
###Output
_____no_output_____
###Markdown
The trade-off between correctness in layout deduction constraints and simplicity of API is a common design point of APIs that use DTensor. It is also possible to capture the dependency between `Layout`s with a different API. For example, the `MLPStricter` class creates the `Layout` objects in the constructor.
###Code
class MLPStricter(tf.Module):
def __init__(self, mesh, input_mesh_dim, inner_mesh_dim1, output_mesh_dim):
super().__init__()
self.dense1 = Dense(
1200, 48, (1, 2), dtensor.Layout([input_mesh_dim, inner_mesh_dim1], mesh),
activation=tf.nn.relu)
self.bn = BatchNorm()
self.dense2 = Dense(48, 2, (3, 4), dtensor.Layout([inner_mesh_dim1, output_mesh_dim], mesh))
def __call__(self, x):
y = x
y = self.dense1(y)
y = self.bn(y)
y = self.dense2(y)
return y
###Output
_____no_output_____
###Markdown
To make sure the model runs, probe your model with fully replicated layouts and a fully replicated batch of `'x'` input.
###Code
WORLD = dtensor.create_mesh([("world", 8)], devices=DEVICES)
model = MLP([dtensor.Layout.replicated(WORLD, rank=2),
dtensor.Layout.replicated(WORLD, rank=2)])
sample_x, sample_y = train_data_vec.take(1).get_single_element()
sample_x = dtensor.copy_to_mesh(sample_x, dtensor.Layout.replicated(WORLD, rank=2))
print(model(sample_x))
###Output
_____no_output_____
###Markdown
Moving data to the deviceUsually, `tf.data` iterators (and other data fetching methods) yield tensor objects backed by the local host device memory. This data must be transferred to the accelerator device memory that backs DTensor's component tensors.`dtensor.copy_to_mesh` is unsuitable for this situation because it replicates input tensors to all devices due to DTensor's global perspective. So in this tutorial, you will use a helper function, `repack_local_tensor`, to facilitate the transfer of data. This helper function uses `dtensor.pack` to send (and only send) the shard of the global batch that is intended for a replica to the device backing the replica.This simplified function assumes a single-client application. Determining the correct way to split the local tensor, and the mapping between the pieces of the split and the local devices, can be laborious in a multi-client application.Additional DTensor APIs to simplify `tf.data` integration are planned, supporting both single-client and multi-client applications. Please stay tuned.
###Code
def repack_local_tensor(x, layout):
"""Repacks a local Tensor-like to a DTensor with layout.
This function assumes a single-client application.
"""
x = tf.convert_to_tensor(x)
sharded_dims = []
  # For every sharded dimension, use tf.split to split the tensor along that dimension.
# The result is a nested list of split-tensors in queue[0].
queue = [x]
for axis, dim in enumerate(layout.sharding_specs):
if dim == dtensor.UNSHARDED:
continue
num_splits = layout.shape[axis]
queue = tf.nest.map_structure(lambda x: tf.split(x, num_splits, axis=axis), queue)
sharded_dims.append(dim)
# Now we can build the list of component tensors by looking up the location in
# the nested list of split-tensors created in queue[0].
components = []
for locations in layout.mesh.local_device_locations():
t = queue[0]
for dim in sharded_dims:
split_index = locations[dim] # Only valid on single-client mesh.
t = t[split_index]
components.append(t)
return dtensor.pack(components, layout)
###Output
_____no_output_____
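###Markdown
As a quick sanity check (an optional illustration added here, not part of the original flow; `demo_mesh` is created only for this cell), repack a small local tensor and confirm the resulting layout with `dtensor.fetch_layout`.
###Code
demo_mesh = dtensor.create_mesh([("batch", 8)], devices=DEVICES)
demo_layout = dtensor.Layout(["batch", dtensor.UNSHARDED], demo_mesh)
# An 8x4 local tensor splits evenly into 8 per-device slices of shape [1, 4].
demo_dtensor = repack_local_tensor(tf.ones([8, 4]), layout=demo_layout)
print(dtensor.fetch_layout(demo_dtensor))
###Output
_____no_output_____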
###Markdown
Data parallel trainingIn this section, you will train your MLP model with data parallel training. The following sections will demonstrate model parallel training and spatial parallel training.Data parallel training is a commonly used scheme for distributed machine learning: - Model variables are replicated on N devices each. - A global batch is split into N per-replica batches. - Each per-replica batch is trained on the replica device. - The gradient is all-reduced before the weight update is performed collectively on all replicas.Data parallel training provides nearly linear speedup with the number of devices. Creating a data parallel meshA typical data parallelism training loop uses a DTensor `Mesh` that consists of a single `batch` dimension, where each device becomes a replica that receives a shard of the global batch.The replicated model runs on the replica, therefore the model variables are fully replicated (unsharded).
###Code
mesh = dtensor.create_mesh([("batch", 8)], devices=DEVICES)
model = MLP([dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh),
dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh),])
###Output
_____no_output_____
###Markdown
Packing training data to DTensorsThe training data batch should be packed into DTensors sharded along the `'batch'` (first) axis, such that DTensor will evenly distribute the training data to the `'batch'` mesh dimension.**Note**: In DTensor, the `batch size` always refers to the global batch size. The batch size should be chosen such that it can be divided evenly by the size of the `batch` mesh dimension.
###Code
def repack_batch(x, y, mesh):
x = repack_local_tensor(x, layout=dtensor.Layout(['batch', dtensor.UNSHARDED], mesh))
y = repack_local_tensor(y, layout=dtensor.Layout(['batch'], mesh))
return x, y
sample_x, sample_y = train_data_vec.take(1).get_single_element()
sample_x, sample_y = repack_batch(sample_x, sample_y, mesh)
print('x', sample_x[:, 0])
print('y', sample_y)
###Output
_____no_output_____
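###Markdown
To see the effect of the `'batch'` sharding (an optional check added here), unpack the DTensor back into its per-device components: each of the 8 devices holds an even slice of the global batch.
###Code
components = dtensor.unpack(sample_x)
print(len(components), 'components, each of shape', components[0].shape)
###Output
_____no_output_____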
###Markdown
Training stepThis example uses a Stochastic Gradient Descent optimizer with the Custom Training Loop (CTL). Consult the [Custom Training Loop guide](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch) and [Walk through](https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough) for more information on those topics.The `train_step` is encapsulated as a `tf.function` to indicate this body is to be traced as a TensorFlow Graph. The body of `train_step` consists of a forward inference pass, a backward gradient pass, and the variable update.Note that the body of `train_step` does not contain any special DTensor annotations. Instead, `train_step` only contains high-level TensorFlow operations that process the input `x` and `y` from the global view of the input batch and the model. All of the DTensor annotations (`Mesh`, `Layout`) are factored out of the train step.
###Code
# Refer to the CTL (custom training loop guide)
@tf.function
def train_step(model, x, y, learning_rate=tf.constant(1e-4)):
with tf.GradientTape() as tape:
logits = model(x)
# tf.reduce_sum sums the batch sharded per-example loss to a replicated
# global loss (scalar).
loss = tf.reduce_sum(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=y))
parameters = model.trainable_variables
gradients = tape.gradient(loss, parameters)
for parameter, parameter_gradient in zip(parameters, gradients):
parameter.assign_sub(learning_rate * parameter_gradient)
# Define some metrics
accuracy = 1.0 - tf.reduce_sum(tf.cast(tf.argmax(logits, axis=-1, output_type=tf.int64) != y, tf.float32)) / x.shape[0]
loss_per_sample = loss / len(x)
return {'loss': loss_per_sample, 'accuracy': accuracy}
###Output
_____no_output_____
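###Markdown
(An optional smoke test, added here.) A single step on the sample batch packed earlier returns the per-sample loss and accuracy; note this performs one real parameter update.
###Code
print(train_step(model, sample_x, sample_y))
###Output
_____no_output_____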
###Markdown
CheckpointingYou can checkpoint a DTensor model using `dtensor.DTensorCheckpoint`. The format of a DTensor checkpoint is fully compatible with a Standard TensorFlow Checkpoint. There is ongoing work to consolidate `dtensor.DTensorCheckpoint` into `tf.train.Checkpoint`.When a DTensor checkpoint is restored, `Layout`s of variables can be different from when the checkpoint is saved. This tutorial makes use of this feature to continue the training in the Model Parallel training and Spatial Parallel training sections.
###Code
CHECKPOINT_DIR = tempfile.mkdtemp()
def start_checkpoint_manager(mesh, model):
ckpt = dtensor.DTensorCheckpoint(mesh, root=model)
manager = tf.train.CheckpointManager(ckpt, CHECKPOINT_DIR, max_to_keep=3)
if manager.latest_checkpoint:
print("Restoring a checkpoint")
ckpt.restore(manager.latest_checkpoint).assert_consumed()
else:
print("new training")
return manager
###Output
_____no_output_____
###Markdown
Training loopFor the data parallel training scheme, train for a few epochs and report the progress. A few epochs is insufficient to fully train the model -- an accuracy of around 50% is as good as random guessing.Enable checkpointing so that you can pick up the training later. In the following section, you will load the checkpoint and train with a different parallel scheme.
###Code
num_epochs = 2
manager = start_checkpoint_manager(mesh, model)
for epoch in range(num_epochs):
step = 0
pbar = tf.keras.utils.Progbar(target=int(train_data_vec.cardinality()), stateful_metrics=[])
metrics = {'epoch': epoch}
for x,y in train_data_vec:
x, y = repack_batch(x, y, mesh)
metrics.update(train_step(model, x, y, 1e-2))
pbar.update(step, values=metrics.items(), finalize=False)
step += 1
manager.save()
pbar.update(step, values=metrics.items(), finalize=True)
###Output
_____no_output_____
###Markdown
Model Parallel TrainingIf you switch to a 2-dimensional `Mesh`, and shard the model variables along the second mesh dimension, then the training becomes Model Parallel.In Model Parallel training, each model replica spans multiple devices (2 in this case):- There are 4 model replicas, and the training data batch is distributed to the 4 replicas.- The 2 devices within a single model replica receive replicated training data.
###Code
mesh = dtensor.create_mesh([("batch", 4), ("model", 2)], devices=DEVICES)
model = MLP([dtensor.Layout([dtensor.UNSHARDED, "model"], mesh),
dtensor.Layout(["model", dtensor.UNSHARDED], mesh)])
###Output
_____no_output_____
###Markdown
As the training data is still sharded along the batch dimension, you can reuse the same `repack_batch` function as the Data Parallel training case. DTensor will automatically replicate the per-replica batch to all devices inside the replica along the `"model"` mesh dimension.
###Code
def repack_batch(x, y, mesh):
x = repack_local_tensor(x, layout=dtensor.Layout(['batch', dtensor.UNSHARDED], mesh))
y = repack_local_tensor(y, layout=dtensor.Layout(['batch'], mesh))
return x, y
###Output
_____no_output_____
###Markdown
Next run the training loop. The training loop reuses the same checkpoint manager as the Data Parallel training example, and the code looks identical.You can continue training the data parallel trained model under model parallel training.
###Code
num_epochs = 2
manager = start_checkpoint_manager(mesh, model)
for epoch in range(num_epochs):
step = 0
pbar = tf.keras.utils.Progbar(target=int(train_data_vec.cardinality()))
metrics = {'epoch': epoch}
for x,y in train_data_vec:
x, y = repack_batch(x, y, mesh)
metrics.update(train_step(model, x, y, 1e-2))
pbar.update(step, values=metrics.items(), finalize=False)
step += 1
manager.save()
pbar.update(step, values=metrics.items(), finalize=True)
###Output
_____no_output_____
###Markdown
Spatial Parallel Training When training on data of very high dimensionality (e.g. a very large image or a video), it may be desirable to shard along the feature dimension. This is called [Spatial Partitioning](https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-on-large-images-and-3d-volumes-with-spatial-partitioning-on-cloud-tpus), which was first introduced into TensorFlow for training models with large 3-d input samples.DTensor also supports this case. The only change you need to make is to create a Mesh that includes a `feature` dimension, and apply the corresponding `Layout`.
###Code
mesh = dtensor.create_mesh([("batch", 2), ("feature", 2), ("model", 2)], devices=DEVICES)
model = MLP([dtensor.Layout(["feature", "model"], mesh),
dtensor.Layout(["model", dtensor.UNSHARDED], mesh)])
###Output
_____no_output_____
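###Markdown
(An optional check, added for illustration.) You can confirm how the first layer's weight is now sharded by fetching its layout:
###Code
print(dtensor.fetch_layout(model.dense1.weight))
###Output
_____no_output_____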
###Markdown
Shard the input data along the `feature` dimension when packing the input tensors to DTensors. You do this with a slightly different repack function, `repack_batch_for_spt`, where `spt` stands for Spatial Parallel Training.
###Code
def repack_batch_for_spt(x, y, mesh):
# Shard data on feature dimension, too
x = repack_local_tensor(x, layout=dtensor.Layout(["batch", 'feature'], mesh))
y = repack_local_tensor(y, layout=dtensor.Layout(["batch"], mesh))
return x, y
###Output
_____no_output_____
###Markdown
Spatial parallel training can also continue from a checkpoint created with other parallel training schemes.
###Code
num_epochs = 2
manager = start_checkpoint_manager(mesh, model)
for epoch in range(num_epochs):
step = 0
metrics = {'epoch': epoch}
pbar = tf.keras.utils.Progbar(target=int(train_data_vec.cardinality()))
for x, y in train_data_vec:
x, y = repack_batch_for_spt(x, y, mesh)
metrics.update(train_step(model, x, y, 1e-2))
pbar.update(step, values=metrics.items(), finalize=False)
step += 1
manager.save()
pbar.update(step, values=metrics.items(), finalize=True)
###Output
_____no_output_____
###Markdown
SavedModel and DTensorThe integration of DTensor and SavedModel is still under development. This section only describes the current status for TensorFlow 2.9.0.As of TensorFlow 2.9.0, `tf.saved_model` only accepts DTensor models with fully replicated variables.As a workaround, you can convert a DTensor model to a fully replicated one by reloading a checkpoint. However, after a model is saved, all DTensor annotations are lost and the saved signatures can only be used with regular Tensors, not DTensors.
###Code
mesh = dtensor.create_mesh([("world", 1)], devices=DEVICES[:1])
mlp = MLP([dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh),
dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh)])
manager = start_checkpoint_manager(mesh, mlp)
model_for_saving = tf.keras.Sequential([
text_vectorization,
mlp
])
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def run(inputs):
return {'result': model_for_saving(inputs)}
tf.saved_model.save(
model_for_saving, "/tmp/saved_model",
signatures=run)
###Output
_____no_output_____
###Markdown
As of TensorFlow 2.9.0, you can only call a loaded signature with a regular Tensor, or a fully replicated DTensor (which will be converted to a regular Tensor).
###Code
sample_batch = train_data.take(1).get_single_element()
sample_batch
loaded = tf.saved_model.load("/tmp/saved_model")
run_sig = loaded.signatures["serving_default"]
result = run_sig(sample_batch['text'])['result']
np.mean(tf.argmax(result, axis=-1) == sample_batch['label'])
###Output
_____no_output_____
examples/add_border_fig.ipynb | ###Markdown
Testing add_border()`add_border()` shows where the boundaries of a figure are. This is useful when it is unclear where the boundaries lie and you are trying to optimize the placement of different elements in the figure, or to make sure that nothing gets cut off.
###Code
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
from nice_figures import *
load_style()
fig = plt.figure()
ax = plt.axes([0.1, 0.07, 0.8, 0.8])
ax.set_xlabel('This label overlaps border')
ax.set_ylabel('This label sits inside border')
plt.savefig(os.path.join('figs', 'add_border_fig.pdf'))
add_border()  # called after savefig, so the border shows on screen but is not baked into the saved PDF
plt.show()
###Output
C:\Users\rbettles\PythonEnvironments\packaging_tutorial_env\lib\site-packages\ipykernel_launcher.py:14: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
C:\Users\rbettles\PythonEnvironments\packaging_tutorial_env\lib\site-packages\IPython\core\pylabtools.py:132: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
fig.canvas.print_figure(bytes_io, **kw)
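###Markdown
For reference, a helper like this can be implemented by drawing a rectangle in figure coordinates. The sketch below is a minimal illustration of the idea, not the actual nice_figures implementation:
###Code
def sketch_add_border(fig=None):
    # Outline the full figure canvas (figure coordinates run from 0 to 1)
    fig = fig or plt.gcf()
    rect = plt.Rectangle((0, 0), 1, 1, transform=fig.transFigure,
                         fill=False, linewidth=1, edgecolor='k')
    fig.add_artist(rect)
###Output
_____no_output_____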
IBM_AI_Engineering/Course-4-deep-neural-networks-with-pytorch/Week-6-CNN/9.4.2CNN_Small_Image.ipynb | ###Markdown
Convolutional Neural Network with Small Images In this lab, we will use a Convolutional Neural Network to classify handwritten digits from the MNIST database. We will reshape the images to make them faster to process. Table of Contents: Get Some Data; Convolutional Neural Network; Define Softmax, Criterion function, Optimizer and Train the Model; Analyze Results. Estimated Time Needed: 25 min (14 min to train the model). Preparation
###Code
# Import the libraries we need to use in this lab
# Using the following line code to install the torchvision library
# !conda install -y torchvision
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import matplotlib.pylab as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Define the function plot_channels to plot out the kernel parameters of each channel
###Code
# Define the function for plotting the channels
def plot_channels(W):
n_out = W.shape[0]
n_in = W.shape[1]
w_min = W.min().item()
w_max = W.max().item()
fig, axes = plt.subplots(n_out, n_in)
fig.subplots_adjust(hspace=0.1)
out_index = 0
in_index = 0
#plot outputs as rows inputs as columns
for ax in axes.flat:
if in_index > n_in-1:
out_index = out_index + 1
in_index = 0
ax.imshow(W[out_index, in_index, :, :], vmin=w_min, vmax=w_max, cmap='seismic')
ax.set_yticklabels([])
ax.set_xticklabels([])
in_index = in_index + 1
plt.show()
###Output
_____no_output_____
###Markdown
Define the function plot_parameters to plot out the kernel parameters of each channel with multiple outputs.
###Code
# Define the function for plotting the parameters
def plot_parameters(W, number_rows=1, name="", i=0):
W = W.data[:, i, :, :]
n_filters = W.shape[0]
w_min = W.min().item()
w_max = W.max().item()
fig, axes = plt.subplots(number_rows, n_filters // number_rows)
fig.subplots_adjust(hspace=0.4)
for i, ax in enumerate(axes.flat):
if i < n_filters:
# Set the label for the sub-plot.
ax.set_xlabel("kernel:{0}".format(i + 1))
# Plot the image.
ax.imshow(W[i, :], vmin=w_min, vmax=w_max, cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
plt.suptitle(name, fontsize=10)
plt.show()
###Output
_____no_output_____
###Markdown
Define the function plot_activations to plot out the activations of the Convolutional layers
###Code
# Define the function for plotting the activations
def plot_activations(A, number_rows=1, name="", i=0):
A = A[0, :, :, :].detach().numpy()
n_activations = A.shape[0]
A_min = A.min().item()
A_max = A.max().item()
fig, axes = plt.subplots(number_rows, n_activations // number_rows)
fig.subplots_adjust(hspace = 0.4)
for i, ax in enumerate(axes.flat):
if i < n_activations:
# Set the label for the sub-plot.
ax.set_xlabel("activation:{0}".format(i + 1))
# Plot the image.
ax.imshow(A[i, :], vmin=A_min, vmax=A_max, cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Define the function show_data to plot out data samples as images.
###Code
def show_data(data_sample):
plt.imshow(data_sample[0].numpy().reshape(IMAGE_SIZE, IMAGE_SIZE), cmap='gray')
# plt.title('y = '+ str(data_sample[1].item()))
plt.title('y = '+ str(data_sample[1]))
###Output
_____no_output_____
###Markdown
Get the Data We create a transform to resize the image and convert it to a tensor.
###Code
IMAGE_SIZE = 16
composed = transforms.Compose([transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)), transforms.ToTensor()])
###Output
_____no_output_____
###Markdown
Load the training dataset by setting the parameter `train` to `True`. We use the transform defined above.
###Code
train_dataset = dsets.MNIST(root='../data', train=True, download=True, transform=composed)
###Output
_____no_output_____
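###Markdown
(An optional check, added for clarity.) After `Resize` and `ToTensor`, each sample image is a 1x16x16 float tensor:
###Code
print(train_dataset[0][0].shape)
###Output
_____no_output_____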
###Markdown
Load the testing dataset by setting the parameter `train` to `False`.
###Code
# Make the validation dataset
validation_dataset = dsets.MNIST(root='../data', train=False, download=True, transform=composed)
###Output
_____no_output_____
###Markdown
We can see that each image is stored as a `torch.FloatTensor`, while its label is a plain integer.
###Code
# Show the data type for each element in dataset
print(train_dataset[1][0].type())
train_dataset[1][1]
###Output
torch.FloatTensor
###Markdown
Each element in the rectangular tensor corresponds to a number representing a pixel intensity. Print out the fourth label
###Code
# The label for the fourth data element
train_dataset[3][1]
###Output
_____no_output_____
###Markdown
Plot the fourth sample
###Code
# The image for the fourth data element
show_data(train_dataset[3])
###Output
_____no_output_____
###Markdown
The fourth sample is a "1". Build a Convolutional Neural Network Class Build a Convolutional Network class with two Convolutional layers and one fully connected layer. Pre-determine the size of the final output matrix. The parameters in the constructor are the number of output channels for the first and second layer.
###Code
class CNN(nn.Module):
    # Constructor
def __init__(self, out_1=16, out_2=32):
super(CNN, self).__init__()
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=out_1, kernel_size=5, padding=2)
self.maxpool1=nn.MaxPool2d(kernel_size=2)
self.cnn2 = nn.Conv2d(in_channels=out_1, out_channels=out_2, kernel_size=5, stride=1, padding=2)
self.maxpool2=nn.MaxPool2d(kernel_size=2)
self.fc1 = nn.Linear(out_2 * 4 * 4, 10)
# Prediction
def forward(self, x):
x = self.cnn1(x)
x = torch.relu(x)
x = self.maxpool1(x)
x = self.cnn2(x)
x = torch.relu(x)
x = self.maxpool2(x)
x = x.view(x.size(0), -1)
x = self.fc1(x)
return x
# Outputs in each steps
def activations(self, x):
#outputs activation this is not necessary
z1 = self.cnn1(x)
a1 = torch.relu(z1)
out = self.maxpool1(a1)
z2 = self.cnn2(out)
a2 = torch.relu(z2)
out1 = self.maxpool2(a2)
out = out.view(out.size(0),-1)
return z1, a1, z2, a2, out1,out
###Output
_____no_output_____
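###Markdown
A quick check of the fully connected layer's input size (added for clarity, not part of the original lab): with kernel_size=5, padding=2, stride=1 each convolution preserves the spatial size, and each 2x2 max-pool halves it, so a 16x16 input shrinks to 8x8 and then 4x4 -- hence `out_2 * 4 * 4` input features for `fc1`.
###Code
def conv_out_size(size, kernel_size=5, padding=2, stride=1):
    # Standard convolution output-size formula
    return (size + 2 * padding - kernel_size) // stride + 1
size = IMAGE_SIZE
for _ in range(2):
    size = conv_out_size(size) // 2  # conv keeps the size, max-pool halves it
print(size)  # 4
###Output
_____no_output_____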
###Markdown
Define the Convolutional Neural Network Classifier, Criterion function, Optimizer and Train the Model There are 16 output channels for the first layer, and 32 output channels for the second layer
###Code
# Create the model object using CNN class
model = CNN(out_1=16, out_2=32)
###Output
_____no_output_____
###Markdown
Plot the model parameters for the kernels before training the kernels. The kernels are initialized randomly.
###Code
# Plot the parameters
plot_parameters(model.state_dict()['cnn1.weight'], number_rows=4, name="1st layer kernels before training ")
plot_parameters(model.state_dict()['cnn2.weight'], number_rows=4, name='2nd layer kernels before training' )
###Output
_____no_output_____
###Markdown
Define the loss function, the optimizer and the dataset loader
###Code
criterion = nn.CrossEntropyLoss()
learning_rate = 0.1
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=100)
validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=5000)
###Output
_____no_output_____
###Markdown
Train the model and determine the validation accuracy (technically, test accuracy) **(This may take a long time)**
###Code
# Train the model
n_epochs=3
cost_list=[]
accuracy_list=[]
N_test=len(validation_dataset)
COST=0
def train_model(n_epochs):
for epoch in range(n_epochs):
COST=0
for x, y in train_loader:
optimizer.zero_grad()
z = model(x)
loss = criterion(z, y)
loss.backward()
optimizer.step()
COST+=loss.data
cost_list.append(COST)
correct=0
#perform a prediction on the validation data
for x_test, y_test in validation_loader:
z = model(x_test)
_, yhat = torch.max(z.data, 1)
correct += (yhat == y_test).sum().item()
accuracy = correct / N_test
accuracy_list.append(accuracy)
print('epoch:'+str(epoch)+'/'+str(n_epochs)+' cost: '+str(COST)+' acc: '+str(accuracy))
train_model(n_epochs)
###Output
epoch:0/3 cost: tensor(60.0698) acc: 0.9726
epoch:1/3 cost: tensor(47.4953) acc: 0.9776
epoch:2/3 cost: tensor(40.1689) acc: 0.9793
###Markdown
Analyze Results Plot the loss and accuracy on the validation data:
###Code
# Plot the loss and accuracy
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.plot(cost_list, color=color)
ax1.set_xlabel('epoch', color=color)
ax1.set_ylabel('Cost', color=color)
ax1.tick_params(axis='y', color=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('accuracy', color=color)
ax2.set_xlabel('epoch', color=color)
ax2.plot( accuracy_list, color=color)
ax2.tick_params(axis='y', color=color)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
View the results of the parameters for the Convolutional layers
###Code
# Plot the channels
plot_channels(model.state_dict()['cnn1.weight'])
plot_channels(model.state_dict()['cnn2.weight'])
train_dataset[1]
###Output
_____no_output_____
###Markdown
Consider the following sample
###Code
# Show the second image
show_data(train_dataset[1])
###Output
_____no_output_____
###Markdown
Determine the activations
###Code
# Use the CNN activations class to see the steps
out = model.activations(train_dataset[1][0].view(1, 1, IMAGE_SIZE, IMAGE_SIZE))
###Output
_____no_output_____
###Markdown
Plot out the first set of activations
###Code
# Plot the outputs after the first CNN
plot_activations(out[0], number_rows=4, name="Output after the 1st CNN")
###Output
_____no_output_____
###Markdown
The image below is the result after applying the relu activation function
###Code
# Plot the outputs after the first Relu
plot_activations(out[1], number_rows=4, name="Output after the 1st Relu")
###Output
_____no_output_____
###Markdown
The image below is the activation map after the second convolutional layer.
###Code
# Plot the outputs after the second CNN
plot_activations(out[2], number_rows=32 // 4, name="Output after the 2nd CNN")
###Output
_____no_output_____
###Markdown
The image below is the result of the activation map after applying the second relu
###Code
# Plot the outputs after the second Relu
plot_activations(out[3], number_rows=4, name="Output after the 2nd Relu")
###Output
_____no_output_____
###Markdown
We can see the result for the third sample
###Code
# Show the third image
show_data(train_dataset[2])
# Use the CNN activations class to see the steps
out = model.activations(train_dataset[2][0].view(1, 1, IMAGE_SIZE, IMAGE_SIZE))
# Plot the outputs after the first CNN
plot_activations(out[0], number_rows=4, name="Output after the 1st CNN")
# Plot the outputs after the first Relu
plot_activations(out[1], number_rows=4, name="Output after the 1st Relu")
# Plot the outputs after the second CNN
plot_activations(out[2], number_rows=32 // 4, name="Output after the 2nd CNN")
# Plot the outputs after the second Relu
plot_activations(out[3], number_rows=4, name="Output after the 2nd Relu")
###Output
_____no_output_____
###Markdown
Plot the first five mis-classified samples:
###Code
# Plot the mis-classified samples
count = 0
for x, y in torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=1):
z = model(x)
_, yhat = torch.max(z, 1)
if yhat != y:
show_data((x, y))
plt.show()
print("yhat: ",yhat)
count += 1
if count >= 5:
break
###Output
_____no_output_____
nbs/011_callback.noisy_student.ipynb | ###Markdown
Noisy student> Callback to apply noisy student self-training (a semi-supervised learning approach) based on: Xie, Q., Luong, M. T., Hovy, E., & Le, Q. V. (2020). Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10687-10698).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.preprocessing import *
from tsai.data.transforms import *
from tsai.models.layers import *
from fastai.callback.all import *
#export
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
#export
# This is an unofficial implementation of noisy student based on:
# Xie, Q., Luong, M. T., Hovy, E., & Le, Q. V. (2020). Self-training with noisy student improves imagenet classification.
# In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10687-10698).
# Official tensorflow implementation available in https://github.com/google-research/noisystudent
class NoisyStudent(Callback):
"""A callback to implement the Noisy Student approach. In the original paper this was used in combination with noise:
- stochastic depth: .8
- RandAugment: N=2, M=27
- dropout: .5
Steps:
1. Build the dl you will use as a teacher
2. Create dl2 with the pseudolabels (either soft or hard preds)
3. Pass any required batch_tfms to the callback
"""
def __init__(self, dl2:DataLoader, bs:Optional[int]=None, l2pl_ratio:int=1, batch_tfms:Optional[list]=None, do_setup:bool=True,
pseudolabel_sample_weight:float=1., verbose=False):
r'''
Args:
dl2: dataloader with the pseudolabels
bs: batch size of the new, combined dataloader. If None, it will pick the bs from the labeled dataloader.
l2pl_ratio: ratio between labels and pseudolabels in the combined batch
batch_tfms: transforms applied to the combined batch. If None, it will pick the batch_tfms from the labeled dataloader (if any)
do_setup: perform a transform setup on the labeled dataset.
pseudolabel_sample_weight: weight of each pseudolabel sample relative to the labeled one of the loss.
'''
self.dl2, self.bs, self.l2pl_ratio, self.batch_tfms, self.do_setup, self.verbose = dl2, bs, l2pl_ratio, batch_tfms, do_setup, verbose
self.pl_sw = pseudolabel_sample_weight
def before_fit(self):
if self.batch_tfms is None: self.batch_tfms = self.dls.train.after_batch
self.old_bt = self.dls.train.after_batch # Remove and store dl.train.batch_tfms
self.old_bs = self.dls.train.bs
self.dls.train.after_batch = noop
if self.do_setup and self.batch_tfms:
for bt in self.batch_tfms:
bt.setup(self.dls.train)
if self.bs is None: self.bs = self.dls.train.bs
self.dl2.bs = min(len(self.dl2.dataset), int(self.bs / (1 + self.l2pl_ratio)))
self.dls.train.bs = self.bs - self.dl2.bs
pv(f'labels / pseudolabels per training batch : {self.dls.train.bs} / {self.dl2.bs}', self.verbose)
rel_weight = (self.dls.train.bs/self.dl2.bs) * (len(self.dl2.dataset)/len(self.dls.train.dataset))
pv(f'relative labeled/ pseudolabel sample weight in dataset: {rel_weight:.1f}', self.verbose)
self.dl2iter = iter(self.dl2)
self.old_loss_func = self.learn.loss_func
self.learn.loss_func = self.loss
def before_batch(self):
if self.training:
X, y = self.x, self.y
try: X2, y2 = next(self.dl2iter)
except StopIteration:
self.dl2iter = iter(self.dl2)
X2, y2 = next(self.dl2iter)
            if y.ndim == 1 and y2.ndim == 2: y = torch.eye(self.learn.dls.c)[y].to(device) # ensure y and y2 have matching dims before concatenation
X_comb, y_comb = concat(X, X2), concat(y, y2)
if self.batch_tfms is not None:
X_comb = compose_tfms(X_comb, self.batch_tfms, split_idx=0)
y_comb = compose_tfms(y_comb, self.batch_tfms, split_idx=0)
self.learn.xb = (X_comb,)
self.learn.yb = (y_comb,)
pv(f'\nX: {X.shape} X2: {X2.shape} X_comb: {X_comb.shape}', self.verbose)
pv(f'y: {y.shape} y2: {y2.shape} y_comb: {y_comb.shape}', self.verbose)
def loss(self, output, target):
if target.ndim == 2: _, target = target.max(dim=1)
if self.training and self.pl_sw != 1:
loss = (1 - self.pl_sw) * self.old_loss_func(output[:self.dls.train.bs], target[:self.dls.train.bs])
loss += self.pl_sw * self.old_loss_func(output[self.dls.train.bs:], target[self.dls.train.bs:])
return loss
else:
return self.old_loss_func(output, target)
def after_fit(self):
self.dls.train.after_batch = self.old_bt
self.learn.loss_func = self.old_loss_func
self.dls.train.bs = self.old_bs
self.dls.bs = self.old_bs
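# Illustrative batch arithmetic (added note, not part of the original source): with
# bs=256 and l2pl_ratio=2 as in the examples below, the pseudolabel loader gets
# min(len(dl2.dataset), int(256 / (1 + 2))) = 85 samples per combined batch, and the
# labeled loader supplies the remaining 256 - 85 = 171.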
from tsai.data.all import *
from tsai.models.all import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=[TSStandardize(), TSRandomSize(.5)])
pseudolabeled_data = X
soft_preds = True
pseudolabels = OneHot()(y) if soft_preds else ToNumpyCategory()(y)
dsets2 = TSDatasets(pseudolabeled_data, pseudolabels)
dl2 = TSDataLoader(dsets2)
model = create_model(InceptionTime, dls=dls)
noisy_student_cb = NoisyStudent(dl2, bs=256, l2pl_ratio=2, verbose=True)
learn = Learner(dls, model, cbs=noisy_student_cb, metrics=accuracy)
learn.fit_one_cycle(1)
pseudolabeled_data = X
soft_preds = False
pseudolabels = OneHot()(y) if soft_preds else ToNumpyCategory()(y)
dsets2 = TSDatasets(pseudolabeled_data, pseudolabels)
dl2 = TSDataLoader(dsets2)
model = create_model(InceptionTime, dls=dls)
noisy_student_cb = NoisyStudent(dl2, bs=256, l2pl_ratio=2, verbose=True)
learn = Learner(dls, model, cbs=noisy_student_cb, metrics=accuracy)
learn.fit_one_cycle(1)
#hide
out = create_scripts(); beep(out)
###Output
_____no_output_____
02-python-201/labs/APIs/01_JSON.ipynb | ###Markdown
Data acquisition `DIRECT`- [X] direct download- GET requests through third-party APIs (e.g. AEMET, Barcelona City Council...)- web crawling (which is an illegal practice... but very much in fashion among hackers!?) *** First step: work with the data in `JSON` format
###Code
# First, let's understand how JSON works through dictionaries (dict)
# Build an example dictionary and show the data type and the content of the variable.
diccionario_ejemplo = {"nombre": "Yann", "apellidos": {"apellido1": "LeCun", "apellido2": "-"}, "edad": 56}
print(type(diccionario_ejemplo))
print(diccionario_ejemplo)
# Build an example list and show the data type and the content of the variable.
lista_ejemplo = [1, 2, 3]
print(type(lista_ejemplo))
print(lista_ejemplo)
nested_dict = [diccionario_ejemplo]
nested_dict
print(type(nested_dict))
nested_dict
type(nested_dict[0])
# Now we handle JSON itself
import json
# Show the JSON representation of the dictionary
json_dict = json.dumps(diccionario_ejemplo)
# Show its structure
print(type(json_dict))
print(json_dict)
# Show the JSON representation of the list
json_list = json.dumps(lista_ejemplo)
print(type(json_list))
print(json_list)
###Output
<class 'str'>
[1, 2, 3]
###Markdown
This process, performed with the `json.dumps` function, **serializes** the object; the result is always a string. ***The inverse process, known as **deserializing**, builds Python `list` and `dict` objects with the `json.loads` function.
###Code
# As in the previous case, convert the JSON strings back into dict and list
json2dict = json.loads(json_dict)
print(json2dict)
print(type(json2dict))
# We cannot deserialize a list or dict directly: json.loads requires a str, bytes, or bytearray (the next line raises a TypeError)
json2dict_2 = json.loads(nested_dict)
# Convert the object (previously a list) from JSON back to a list
json2list = json.loads(json_list)
print(json2list)
print(type(json2list))
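# Illustrative example (standard library only; the file name is a placeholder):
# json also serializes straight to and from files with json.dump / json.load.
with open('example.json', 'w') as f:
    json.dump(diccionario_ejemplo, f)
with open('example.json') as f:
    recuperado = json.load(f)
print(recuperado == diccionario_ejemplo)  # True: the round trip preserves the data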
###Output
[1, 2, 3]
<class 'list'>
###Markdown
***To improve the readability of the data we will obtain from the APIs, we define a function that prints JSON strings formatted for easier reading. The function accepts both character strings with JSON content and Python objects, and prints the content to the screen. In addition, the function takes an optional parameter that lets us set the maximum number of lines to display. This way, we can use the function to preview the first lines of a long JSON without printing the entire JSON.
###Code
# Define a function `json_print` with one required parameter (json_data) and one optional parameter (limit)
# The sort_keys parameter: False keeps the keys in insertion order (True sorts them alphabetically)
# The indent parameter controls the indentation of nested levels
def json_print(json_data, limit=None):
if isinstance(json_data, str):
json_data = json.loads(json_data)
nice = json.dumps(json_data, sort_keys=False, indent=3, separators=(',',':'))
print("\n".join(nice.split("\n")[0:limit]))
if limit is not None:
print("[....]")
# Apply the function to a tweet
tweet = {
"created_at": "Thu Apr 06 15:24:15 +0000 2017",
"id_str": "850006245121695744",
"text": "1\/ Today we\u2019re sharing our vision for the future of the Twitter API platform!\nhttps:\/\/t.co\/XweGngmxlP",
"user": {
"id": 2244994945,
"name": "Twitter Dev",
"screen_name": "TwitterDev",
"location": "Internet",
"url": "https:\/\/dev.twitter.com\/",
"description": "Your official source for Twitter Platform news, updates & events. Need technical help? Visit https:\/\/twittercommunity.com\/ \u2328\ufe0f #TapIntoTwitter"
},
"place": {
},
"entities": {
"hashtags": [
],
"urls": [
{
"url": "https:\/\/t.co\/XweGngmxlP",
"unwound": {
"url": "https:\/\/cards.twitter.com\/cards\/18ce53wgo4h\/3xo1c",
"title": "Building the Future of the Twitter API Platform"
}
}
],
"user_mentions": [
]
}
}
tweet
type(tweet)
# Pretty-print this tweet as JSON
json_print(tweet)
print(json_dict)
print(type(json_dict))
print(diccionario_ejemplo)
print(type(diccionario_ejemplo))
json_print(diccionario_ejemplo)
json_print(lista_ejemplo)
diccionario_ejemplo
print(type(json_print(diccionario_ejemplo, 3)))
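# Illustrative sketch (not part of the original lab): a GET request against a
# third-party JSON API, as mentioned in the introduction. The URL is a placeholder
# and the `requests` package is assumed to be installed.
import requests
respuesta = requests.get("https://api.example.com/data")  # placeholder endpoint
if respuesta.status_code == 200:
    json_print(respuesta.json(), 10)  # preview the first lines of the JSON payload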
###Output
{
"nombre":"Yann",
"apellidos":{
[....]
<class 'NoneType'>
|
OPC_Sensor/Models with Min Max Normalization/LSTM/.ipynb_checkpoints/LSTM_tanh_binary-checkpoint.ipynb | ###Markdown
Importing Libraries
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os.path as op
import pickle
import tensorflow as tf
from tensorflow import keras
from keras.models import Model,Sequential,load_model
from keras.layers import Input, Embedding
from keras.layers import Dense, Bidirectional
from keras.layers.recurrent import LSTM
import keras.metrics as metrics
import itertools
from tensorflow.python.keras.utils.data_utils import Sequence
from decimal import Decimal
from keras import backend as K
from keras.layers import Conv1D,MaxPooling1D,Flatten,Dense
###Output
_____no_output_____
###Markdown
Data Fetching
###Code
A1=np.empty((0,5),dtype='float32')
U1=np.empty((0,7),dtype='float32')
node=['150','149','147','144','142','140','136','61']
mon=['Apr','Mar','Aug','Jun','Jul','Sep','May','Oct']
for j in node:
for i in mon:
inp= pd.read_csv('data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[1,2,3,15,16])
out= pd.read_csv('data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[5,6,7,8,17,18,19])
inp=np.array(inp,dtype='float32')
out=np.array(out,dtype='float32')
A1=np.append(A1, inp, axis=0)
U1=np.append(U1, out, axis=0)
print(A1)
print(U1)
###Output
[[1.50000e+02 1.90401e+05 7.25000e+02 2.75500e+01 8.03900e+01]
[1.50000e+02 1.90401e+05 8.25000e+02 2.75600e+01 8.03300e+01]
[1.50000e+02 1.90401e+05 9.25000e+02 2.75800e+01 8.02400e+01]
...
[6.10000e+01 1.91020e+05 1.94532e+05 2.93700e+01 7.52100e+01]
[6.10000e+01 1.91020e+05 1.94632e+05 2.93500e+01 7.52700e+01]
[6.10000e+01 1.91020e+05 1.94732e+05 2.93400e+01 7.53000e+01]]
[[ 28. 3. -52. ... 16.97 19.63 20.06]
[ 28. 15. -53. ... 16.63 19.57 23.06]
[ 31. 16. -55. ... 17.24 19.98 20.24]
...
[ 76. 12. -76. ... 3.47 3.95 4.35]
[ 75. 13. -76. ... 3.88 4.33 4.42]
[ 76. 12. -75. ... 3.46 4.07 4.28]]
###Markdown
Min Max Scaler
###Code
from sklearn.preprocessing import MinMaxScaler
import warnings
scaler_obj=MinMaxScaler()
X1=scaler_obj.fit_transform(A1)
Y1=scaler_obj.fit_transform(U1)
warnings.filterwarnings(action='ignore', category=UserWarning)
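# Min-max scaling maps each feature to [0, 1]: x' = (x - min(x)) / (max(x) - min(x)).
# Caveat: calling fit_transform twice on the same scaler overwrites the parameters
# learned for A1, so separate scaler objects are needed if both transforms must be
# inverted later. The np.newaxis reshape below adds a timestep axis, (N, 5) -> (N, 1, 5),
# which is the input layout the LSTM expects.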
X1=X1[:,np.newaxis,:]
Y1=Y1[:,np.newaxis,:]
def rmse(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
def coeff_determination(y_true, y_pred):
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
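# rmse implements sqrt(mean((y_pred - y_true)^2)) over the last axis; coeff_determination
# implements R^2 = 1 - SS_res / SS_tot, with K.epsilon() guarding against division by zero.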
###Output
_____no_output_____
###Markdown
Model
###Code
model1 = Sequential()
model1.add(keras.Input(shape=(1,5)))
model1.add(tf.keras.layers.LSTM(7,activation="tanh",use_bias=True,kernel_initializer="glorot_uniform",bias_initializer="zeros"))
model1.add(Dense(7))
model1.add(keras.layers.BatchNormalization(axis=-1,momentum=0.99,epsilon=0.001,center=True,scale=True,
beta_initializer="zeros",gamma_initializer="ones",
moving_mean_initializer="zeros",moving_variance_initializer="ones",trainable=True))
model1.add(keras.layers.ReLU())
model1.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='binary_crossentropy',metrics=['accuracy','mse','mae',rmse])
model1.summary()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
model_fit8 = model1.fit(x_train,y_train,batch_size=256,epochs=50, validation_split=0.1)
model1.evaluate(x_test,y_test)
model1.evaluate(x_train,y_train)
###Output
40554/40554 [==============================] - 206s 5ms/step - loss: 0.0860 - accuracy: 0.9438 - mse: 1.8242e-04 - mae: 0.0047 - rmse: 0.0087
###Markdown
Saving Model as File
###Code
model_json = model1.to_json()
with open("Model_File/lstm_tanh.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model1.save_weights("Model_File/lstm_tanh.h5")
print("Saved model to disk")
from keras.models import model_from_json
json_file = open('Model_File/lstm_tanh.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("Model_File/lstm_tanh.h5")
print("Loaded model from disk")
loaded_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss='binary_crossentropy',metrics=['accuracy','mse','mae',rmse])
###Output
Loaded model from disk
###Markdown
Error Analysis
###Code
# summarize history for loss
plt.plot(model_fit8.history['loss'])
plt.plot(model_fit8.history['val_loss'])
plt.title('Model Loss',fontweight ='bold',fontsize = 15)
plt.ylabel('Loss',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# summarize history for accuracy
plt.plot(model_fit8.history['accuracy'])
plt.plot(model_fit8.history['val_accuracy'])
plt.title('Model accuracy',fontweight ='bold',fontsize = 15)
plt.ylabel('Accuracy',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
#Creating csv file of prediction
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
y_test_pred=loaded_model.predict(x_test)
y_test_pred
y_test
y_test=y_test[:,0]
from numpy import savetxt
savetxt('ARRAY_DATA/lstm_y_test_pred.csv', y_test_pred[:1001], delimiter=',')
from numpy import savetxt
savetxt('ARRAY_DATA/lstm_y_test.csv', y_test[:1001], delimiter=',')
#completed
###Output
_____no_output_____ |
.ipynb_checkpoints/prepareData_V2-checkpoint.ipynb | ###Markdown
Note* Aggregate data in 180-day bins
###Code
df_creatinine = pd.read_csv('CSV/T_creatinine.csv'); df_creatinine.rename(columns = {'value': 'creatinine'}, inplace=True)
df_dbp = pd.read_csv('CSV/T_DBP.csv'); df_dbp.rename(columns = {'value': 'dbp'}, inplace=True)
df_glucose = pd.read_csv('CSV/T_glucose.csv'); df_glucose.rename(columns = {'value': 'glucose'}, inplace=True)
df_hgb = pd.read_csv('CSV/T_HGB.csv'); df_hgb.rename(columns = {'value': 'hgb'}, inplace=True)
df_ldl = pd.read_csv('CSV/T_ldl.csv'); df_ldl.rename(columns = {'value': 'ldl'}, inplace=True)
df_meds = pd.read_csv('CSV/T_meds.csv')
df_sbp = pd.read_csv('CSV/T_sbp.csv'); df_sbp.rename(columns = {'value': 'sbp'}, inplace=True)
###Output
_____no_output_____
###Markdown
Compute maximum time point (day) for each subject
###Code
df_creatinine_d = df_creatinine.groupby(['id'])['time'].max()
df_dbp_d = df_dbp.groupby(['id'])['time'].max()
df_glucose_d = df_glucose.groupby(['id'])['time'].max()
df_hgb_d = df_hgb.groupby(['id'])['time'].max()
df_ldl_d = df_ldl.groupby(['id'])['time'].max()
df_sbp_d = df_sbp.groupby(['id'])['time'].max()
df_meds_d = df_meds.groupby(['id'])['end_day'].max()
df_meds_d = df_meds_d.rename('time')
df_d_merge = pd.DataFrame(pd.concat([df_creatinine_d, df_dbp_d, df_glucose_d, df_hgb_d, df_ldl_d, df_sbp_d, df_meds_d])).reset_index()
df_d_merge = df_d_merge.groupby(['id']).max().reset_index()
df_d_merge = df_d_merge.sort_values('time')
print('Minimum = ' + str(df_d_merge['time'].min()) + ', Maximum = ' + str(df_d_merge['time'].max()))
print('Mean = ' + str(df_d_merge['time'].mean()) + ', Median = ' + str(df_d_merge['time'].median()))
plt.plot(list(range(df_d_merge.shape[0])), df_d_merge['time'], '-p', markersize=1)
plt.xlabel("Subject")
plt.ylabel("Days")
plt.title("Days of record")
df_d_merge.to_csv('CSV/days_of_record.csv', index=False)
###Output
Minimum = 708, Maximum = 1429
Mean = 1131.21, Median = 1160.0
###Markdown
Process med data
###Code
# Ignore medications that ended before day 0
df_meds = df_meds[df_meds['end_day'] >= 0]
df_meds.head(10)
period_bin = 180
def generate_bin(n_start, n_end):
    global period_bin
    start_count = period_bin
    # walk through successive periods of length period_bin until a code is assigned
    while True:
        if n_end <= start_count:
            # the end falls within the current period
            if n_start <= (start_count + 1):
                # the start does too: the whole interval is inside this period
                return int(start_count / period_bin)
        else:
            # the "end of period" boundary lies between start and end (e.g. 90 < 180 < 280)
            if n_start <= start_count:
                # return a sentinel code; these rows are split across periods later
                return 99
            # start and end are both beyond this period: try the next one
            start_count += period_bin
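# Sanity checks (with period_bin = 180):
# generate_bin(10, 100)  -> 1   (interval fully inside the first period)
# generate_bin(90, 280)  -> 99  (spans the day-180 boundary; split below)
# generate_bin(200, 350) -> 2   (interval fully inside the second period)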
df_meds['days_bin'] = df_meds.apply(lambda x: generate_bin(x['start_day'], x['end_day']), axis=1)
# Fix the in-between
MID = df_meds['days_bin'] == 99
# Duplicate the boundary-spanning rows (sentinel code 99); they are split and concatenated back below
df_temp = df_meds[MID]
# Bin by 180-day period based on end_day
df_temp['days_bin'] = (df_temp['end_day'] / period_bin).astype(int) + 1
# Value to be used to replace start (+1) or end
v = (np.floor(df_meds.loc[MID, 'end_day'] / period_bin) * period_bin).astype(int)
df_meds.loc[MID, 'end_day'] = v
# Bin by 180-day period based on end_day
df_meds['days_bin'] = (df_meds['end_day'] / period_bin).astype(int) + 1
df_temp['start_day'] = (v + 1).astype(int)
df_meds = pd.concat([df_meds, df_temp], axis=0)
df_meds['days_bin'].value_counts().sort_index()
df_meds['end_day'].max()
# Get the total dosage during the period
df_meds['total_day'] = df_meds['end_day'] - df_meds['start_day'] + 1
df_meds['total_dosage'] = df_meds['total_day'] * df_meds['daily_dosage']
# Bin the data by days_bin
df_med_binned = df_meds.groupby(['id', 'days_bin', 'drug'])['total_dosage'].sum().reset_index()
df_med_binned.head()
# Convert df to wide format, with each column = dosage of one med
# If drug not taken, assumed it's 0
df_med_wide = df_med_binned.pivot(index=['id', 'days_bin'],columns='drug',values='total_dosage').reset_index().fillna(0)
df_med_wide.head()
###Output
_____no_output_____
###Markdown
Merge the raw measurements
###Code
# Check how many readings fall between day 699 and day 720
df_hgb[(df_hgb['time']> 699) & (df_hgb['time'] <= 720)].shape[0]
# Sort columns to id, time, value first
# First values are blood pressure, and systolic comes before diastolic
df_sbp = df_sbp[['id', 'time', 'sbp']]
df_merged = df_sbp.merge(df_dbp, on = ['id','time'], how='outer')
df_merged = df_merged.merge(df_creatinine, on = ['id','time'], how='outer')
df_merged = df_merged.merge(df_glucose, on = ['id','time'], how='outer')
df_merged = df_merged.merge(df_ldl, on = ['id','time'], how='outer')
df_merged = df_merged.merge(df_hgb, on = ['id','time'], how='outer')
df_merged = df_merged.sort_values(['id','time'])
df_merged.head()
# bin time
df_merged['days_bin'] = (df_merged['time'] / period_bin).astype(int) + 1
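# Day d maps to bin floor(d / period_bin) + 1; with period_bin = 180,
# days 0-179 -> bin 1 and days 180-359 -> bin 2.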
df_merged = df_merged.drop('time', axis=1)
df_merged['days_bin'].value_counts().sort_index()
# Aggregate data by days_bin and take the median
df_merged = df_merged.groupby(['id', 'days_bin']).median().reset_index()
df_merged.head()
# Merge with med
df_merged = df_merged.merge(df_med_wide, on = ['id','days_bin'], how='outer')
df_merged.head()
# Save output for modelling
df_merged.to_csv('CSV/df_daybin.csv', index=False)
# Only first 4 bins (720 days)
df_merged_4 = df_merged[df_merged['days_bin'] <= 4]
# Change NA to 0 for drugs
df_merged_4.iloc[:, 8:29] = df_merged_4.iloc[:, 8:29].fillna(0)
# Use KNNImputer to fill continuous missing values
imputer = KNNImputer(n_neighbors=3)
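# KNNImputer fills each NaN with the uniformly weighted mean of that feature over the
# 3 nearest rows, where distance is the NaN-aware Euclidean distance on observed features.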
for day in range(1,5):
DID = df_merged_4['days_bin'] == day
df_day = df_merged_4[DID]
# Remove id from imputation
df_day.iloc[:,2:8] = pd.DataFrame(imputer.fit_transform(df_day.iloc[:,2:8]), index = df_day.index, columns = df_day.columns[2:8])
df_merged_4[DID] = df_day
# Merge with demographic
df_demo = pd.read_csv('CSV/T_demo.csv')
# Change the unknown in df_demo race to the mode (White)
df_demo.loc[df_demo['race'] == 'Unknown','race'] = 'White'
df_merged_4 = df_merged_4.merge(df_demo, on='id')
# Merge with output
df_stage = pd.read_csv('CSV/T_stage.csv')
# Change state to 0, 1
df_stage['Stage_Progress'] = np.where(df_stage['Stage_Progress'] == True, 1, 0)
df_merged_4 = df_merged_4.merge(df_stage, on='id')
# Save output for modelling
df_merged_4.to_csv('CSV/df_daybin_4.csv', index=False)
df_merged_4.head()
###Output
_____no_output_____
###Markdown
Aggregated data
###Code
df_agg = df_merged_4.copy()
# Take out demographic and outcome
df_agg.drop( ['race', 'gender', 'age', 'Stage_Progress'], axis=1, inplace=True)
df_agg_mean = df_agg.groupby('id').mean().reset_index()
df_agg_mean.head()
# Mean sbp, dbp, creatinine, glucose, ldl, hgb
df_agg_mean = df_agg.groupby('id').mean().reset_index()
df_agg_mean = df_agg_mean.iloc[:, np.r_[0, 2:8]]
df_agg_mean.head()
df_agg_mean.shape
# Sum drugs
df_agg_sum = df_agg.groupby('id').sum().reset_index()
df_agg_sum = df_agg_sum.iloc[:, 8:]
df_agg_sum.head()
df_agg_sum.shape
df_agg_fixed = pd.concat([df_agg_mean, df_agg_sum], axis=1)
df_agg_fixed.shape
# Put back demo
df_agg_fixed = df_agg_fixed.merge(df_demo, on = 'id')
# Put back outcome
df_agg_fixed = df_agg_fixed.merge(df_stage, on = 'id')
df_agg_fixed.head()
df_agg_fixed.shape
df_agg_fixed.to_csv('CSV/df_agg.csv', index=False)
###Output
_____no_output_____
###Markdown
Temporal data* Only use the first 2 years of data (most measurements stop at day 699)
###Code
df_temporal = df_merged_4.copy()
df_temporal.head()
# Take out demographic and outcome
df_temporal.drop( ['race', 'gender', 'age', 'Stage_Progress'], axis=1, inplace=True)
# Convert to wide format
df_temporal = df_temporal.set_index(['id','days_bin']).unstack()
df_temporal.columns = df_temporal.columns.map(lambda x: '{}_{}'.format(x[0], x[1]))
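# e.g. the multi-index column ('sbp', 1) becomes the flat column name 'sbp_1'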
# Some subjects don't have data in a time_bin, KNNImpute again
df_temporal = pd.DataFrame(imputer.fit_transform(df_temporal), index = df_temporal.index, columns = df_temporal.columns)
df_temporal = df_temporal.reset_index()
# Put back demo
df_temporal = df_temporal.merge(df_demo, on = 'id')
# Put back outcome
df_temporal = df_temporal.merge(df_stage, on = 'id')
df_temporal.head()
# Save output for modelling
df_temporal.to_csv('CSV/df_temporal.csv', index=False)
###Output
_____no_output_____
###Markdown
Categorize measurements* Set continuous readings to 1=low, 2=normal, 3=high* Categorize medicine by tertile split total dosage to categorize severity (1=low, 2=normal, 3=high)* Categorize medicine by the treatment target, sum binary code
###Code
# Remove 0, get 75th percentile as threshold for high dosage
# Set normal as 1, high as 2
def categorize_drug(df):
NID = df > 0
if sum(NID) > 0:
threshold = np.percentile(df[NID], 75)
df[NID] = np.where(df[NID] > threshold, 2, 1)
return df
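# Example: categorize_drug(pd.Series([0, 10, 50, 100])) -> [0, 1, 1, 2]
# (the 75th percentile of the nonzero dosages {10, 50, 100} is 75, so only 100 is "high")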
###Output
_____no_output_____
###Markdown
Day_bin
###Code
df_merged_4_cat = df_merged_4.copy()
df_merged_4_cat.head()
names = ['1', '2', '3']
bins = [0, 90, 120, np.inf]
df_merged_4_cat['sbp'] = pd.cut(df_merged_4['sbp'], bins, labels=names)
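# pd.cut is right-inclusive: sbp in (0, 90] -> '1' (low), (90, 120] -> '2' (normal), (120, inf) -> '3' (high)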
bins = [0, 60, 80, np.inf]
df_merged_4_cat['dbp'] = pd.cut(df_merged_4['dbp'], bins, labels=names)
bins = [0, 3.9, 7.8, np.inf]
df_merged_4_cat['glucose'] = pd.cut(df_merged_4['glucose'], bins, labels=names)
bins = [0, 100, 129, np.inf]
df_merged_4_cat['ldl'] = pd.cut(df_merged_4['ldl'], bins, labels=names)
MID = df_merged_4['gender'] == 'Male'
bins = [0, 0.74, 1.35, np.inf]
df_merged_4_cat.loc[MID, 'creatinine'] = pd.cut(df_merged_4.loc[MID, 'creatinine'], bins, labels=names)
bins = [0, 0.59, 1.04, np.inf]
df_merged_4_cat.loc[~MID, 'creatinine'] = pd.cut(df_merged_4.loc[~MID, 'creatinine'], bins, labels=names)
bins = [0, 14, 17.5, np.inf]
df_merged_4_cat.loc[MID, 'hgb'] = pd.cut(df_merged_4.loc[MID, 'hgb'], bins, labels=names)
bins = [0, 12.3, 15.3, np.inf]
df_merged_4_cat.loc[~MID, 'hgb'] = pd.cut(df_merged_4.loc[~MID, 'hgb'], bins, labels=names)
df_merged_4_cat.head()
# Remove 0, get 75th percentile as threshold for high dosage, set normal as 1, high as 2
# Need to compute separately for different days_bin
for day in range(1, 5):
DID = df_merged_4_cat['days_bin'] == day
df_day = df_merged_4_cat[DID]
df_merged_4_cat = df_merged_4_cat[~DID]
df_day.iloc[:, 8:29] = df_day.iloc[:, 8:29].apply(lambda x: categorize_drug(x)).astype(int)
df_merged_4_cat = pd.concat([df_merged_4_cat, df_day])
# Label encode race and gender
le = LabelEncoder()
df_merged_4_cat['race'] = le.fit_transform(df_merged_4_cat['race'])
df_merged_4_cat['gender'] = le.fit_transform(df_merged_4_cat['gender'])
# Group age to young-old (≤74 y.o.) as 1, middle-old (75 to 84 y.o.) as 2, and old-old (≥85 y.o.) as 3
df_merged_4_cat['age'] = pd.qcut(df_merged_4['age'], 3, labels=[1,2,3])
df_merged_4_cat['age'].value_counts()
df_merged_4_cat.to_csv('CSV/df_merged_4_cat.csv', index=False)
# Group drug by treatment (sum the binary code)
df_merged_4_cat_drug = df_merged_4_cat.copy()
glucose_col = ['canagliflozin', 'dapagliflozin', 'metformin']
df_merged_4_cat_drug['glucose_treatment'] = df_merged_4_cat_drug[glucose_col].sum(axis=1).astype(int)
df_merged_4_cat_drug.drop(glucose_col, axis=1, inplace=True)
bp_col = ['atenolol','bisoprolol','carvedilol','irbesartan','labetalol','losartan','metoprolol','nebivolol','olmesartan','propranolol','telmisartan','valsartan']
df_merged_4_cat_drug['bp_treatment'] = df_merged_4_cat_drug[bp_col].sum(axis=1).astype(int)
df_merged_4_cat_drug.drop(bp_col, axis=1, inplace=True)
cholesterol_col = ['atorvastatin','lovastatin','pitavastatin','pravastatin','rosuvastatin','simvastatin']
df_merged_4_cat_drug['cholesterol_treatment'] = df_merged_4_cat_drug[cholesterol_col].sum(axis=1).astype(int)
df_merged_4_cat_drug.drop(cholesterol_col, axis=1, inplace=True)
df_merged_4_cat_drug.head()
df_merged_4_cat_drug.to_csv('CSV/df_merged_4_cat_drug.csv', index=False)
###Output
_____no_output_____
###Markdown
Aggregated
###Code
df_agg_cat = df_agg_fixed
names = ['1', '2', '3']
bins = [0, 90, 120, np.inf]
df_agg_cat['sbp'] = pd.cut(df_agg_fixed['sbp'], bins, labels=names)
bins = [0, 60, 80, np.inf]
df_agg_cat['dbp'] = pd.cut(df_agg_fixed['dbp'], bins, labels=names)
bins = [0, 3.9, 7.8, np.inf]
df_agg_cat['glucose'] = pd.cut(df_agg_fixed['glucose'], bins, labels=names)
bins = [0, 100, 129, np.inf]
df_agg_cat['ldl'] = pd.cut(df_agg_fixed['ldl'], bins, labels=names)
MID = df_agg_fixed['gender'] == 'Male'
bins = [0, 0.74, 1.35, np.inf]
df_agg_cat.loc[MID, 'creatinine'] = pd.cut(df_agg_fixed.loc[MID, 'creatinine'], bins, labels=names)
bins = [0, 0.59, 1.04, np.inf]
df_agg_cat.loc[~MID, 'creatinine'] = pd.cut(df_agg_fixed.loc[~MID, 'creatinine'], bins, labels=names)
bins = [0, 14, 17.5, np.inf]
df_agg_cat.loc[MID, 'hgb'] = pd.cut(df_agg_fixed.loc[MID, 'hgb'], bins, labels=names)
bins = [0, 12.3, 15.3, np.inf]
df_agg_cat.loc[~MID, 'hgb'] = pd.cut(df_agg_fixed.loc[~MID, 'hgb'], bins, labels=names)
df_agg_cat.head()
# Remove 0, get 75th percentile as threshold for high dosage, set normal as 1, high as 2
df_agg_cat.iloc[:,7:28] = df_agg_fixed.iloc[:,7:28].apply(lambda x: categorize_drug(x)).astype(int)
# Label encode race and gender
le = LabelEncoder()
df_agg_cat['race'] = le.fit_transform(df_agg_cat['race'])
df_agg_cat['gender'] = le.fit_transform(df_agg_cat['gender'])
# Group age to young-old (≤74 y.o.) as 1, middle-old (75 to 84 y.o.) as 2, and old-old (≥85 y.o.) as 3
df_agg_cat['age'] = pd.qcut(df_agg_cat['age'], 3, labels=[1,2,3])
df_agg_cat['age'].value_counts()
df_agg_cat.to_csv('CSV/df_agg_cat.csv', index=False)
# Group drug by treatment (sum the binary code)
df_agg_cat_drug = df_agg_cat.copy()
glucose_col = ['canagliflozin', 'dapagliflozin', 'metformin']
df_agg_cat_drug['glucose_treatment'] = df_agg_cat_drug[glucose_col].sum(axis=1).astype(int)
df_agg_cat_drug.drop(glucose_col, axis=1, inplace=True)
bp_col = ['atenolol','bisoprolol','carvedilol','irbesartan','labetalol','losartan','metoprolol','nebivolol','olmesartan','propranolol','telmisartan','valsartan']
df_agg_cat_drug['bp_treatment'] = df_agg_cat_drug[bp_col].sum(axis=1).astype(int)
df_agg_cat_drug.drop(bp_col, axis=1, inplace=True)
cholesterol_col = ['atorvastatin','lovastatin','pitavastatin','pravastatin','rosuvastatin','simvastatin']
df_agg_cat_drug['cholesterol_treatment'] = df_agg_cat_drug[cholesterol_col].sum(axis=1).astype(int)
df_agg_cat_drug.drop(cholesterol_col, axis=1, inplace=True)
df_agg_cat_drug.head()
df_agg_cat_drug.to_csv('CSV/df_agg_cat_drug.csv', index=False)
###Output
_____no_output_____
###Markdown
Temporal
###Code
df_temporal_cat = df_temporal.copy()
names = ['1', '2', '3']
bins = [0, 90, 120, np.inf]
for colname in ['sbp_1', 'sbp_2', 'sbp_3', 'sbp_4']:
df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)
bins = [0, 60, 80, np.inf]
for colname in ['dbp_1', 'dbp_2', 'dbp_3', 'dbp_4']:
df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)
bins = [0, 3.9, 7.8, np.inf]
for colname in ['glucose_1', 'glucose_2', 'glucose_3', 'glucose_4']:
df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)
bins = [0, 100, 129, np.inf]
for colname in ['ldl_1', 'ldl_2', 'ldl_3', 'ldl_4']:
df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)
MID = df_temporal_cat['gender'] == 'Male'
bins = [0, 0.74, 1.35, np.inf]
for colname in ['creatinine_1', 'creatinine_2', 'creatinine_3', 'creatinine_4']:
df_temporal_cat.loc[MID, colname] = pd.cut(df_temporal_cat.loc[MID, colname], bins, labels=names)
bins = [0, 0.59, 1.04, np.inf]
for colname in ['creatinine_1', 'creatinine_2', 'creatinine_3', 'creatinine_4']:
df_temporal_cat.loc[~MID, colname] = pd.cut(df_temporal_cat.loc[~MID, colname], bins, labels=names)
bins = [0, 14, 17.5, np.inf]
for colname in ['hgb_1', 'hgb_2', 'hgb_3', 'hgb_4']:
df_temporal_cat.loc[MID, colname] = pd.cut(df_temporal_cat.loc[MID, colname], bins, labels=names)
bins = [0, 12.3, 15.3, np.inf]
for colname in ['hgb_1', 'hgb_2', 'hgb_3', 'hgb_4']:
df_temporal_cat.loc[~MID, colname] = pd.cut(df_temporal_cat.loc[~MID, colname], bins, labels=names)
df_temporal_cat.head()
# Remove 0, get 75th percentile as threshold for high dosage, set normal as 1, high as 2
df_temporal_cat.iloc[:,25:109] = df_temporal_cat.iloc[:,25:109].apply(lambda x: categorize_drug(x)).astype(int)
# Label encode race and gender
le = LabelEncoder()
df_temporal_cat['race'] = le.fit_transform(df_temporal_cat['race'])
df_temporal_cat['gender'] = le.fit_transform(df_temporal_cat['gender'])
# Group age to young-old (≤74 y.o.) as 1, middle-old (75 to 84 y.o.) as 2, and old-old (≥85 y.o.) as 3
df_temporal_cat['age'] = pd.qcut(df_temporal_cat['age'], 3, labels=[1,2,3])
df_temporal_cat['age'].value_counts()
df_temporal_cat.to_csv('CSV/df_temporal_cat.csv', index=False)
# Group drug by treatment (sum the binary code)
df_temporal_cat_drug = df_temporal_cat.copy()
for i in range(1,5):
glucose_col = ['canagliflozin_' + str(i), 'dapagliflozin_' + str(i), 'metformin_' + str(i)]
df_temporal_cat_drug['glucose_treatment_'+ str(i)] = df_temporal_cat_drug[glucose_col].sum(axis=1).astype(int)
df_temporal_cat_drug.drop(glucose_col, axis=1, inplace=True)
bp_col = ['atenolol_' + str(i),'bisoprolol_' + str(i),'carvedilol_' + str(i),'irbesartan_' + str(i),'labetalol_' + str(i),'losartan_' + str(i),'metoprolol_' + str(i),'nebivolol_' + str(i),'olmesartan_' + str(i),'propranolol_' + str(i),'telmisartan_' + str(i),'valsartan_' + str(i)]
df_temporal_cat_drug['bp_treatment_'+ str(i)] = df_temporal_cat_drug[bp_col].sum(axis=1).astype(int)
df_temporal_cat_drug.drop(bp_col, axis=1, inplace=True)
cholesterol_col = ['atorvastatin_' + str(i),'lovastatin_' + str(i),'pitavastatin_' + str(i),'pravastatin_' + str(i),'rosuvastatin_' + str(i),'simvastatin_' + str(i)]
df_temporal_cat_drug['cholesterol_treatment_'+ str(i)] = df_temporal_cat_drug[cholesterol_col].sum(axis=1).astype(int)
df_temporal_cat_drug.drop(cholesterol_col, axis=1, inplace=True)
df_temporal_cat_drug.head()
df_temporal_cat_drug.to_csv('CSV/df_temporal_cat_drug.csv', index=False)
###Output
_____no_output_____
###Markdown
Compute GFR* CKD-EPI equations
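The 2009 CKD-EPI creatinine equation implemented below is $\mathrm{eGFR} = 141 \times \min(\mathrm{Scr}/\kappa, 1)^{\alpha} \times \max(\mathrm{Scr}/\kappa, 1)^{-1.209} \times 0.993^{\mathrm{Age}} \times 1.018\ [\text{if female}] \times 1.159\ [\text{if Black}]$, where $\kappa$ is 0.7 for females and 0.9 for males, and $\alpha$ is $-0.329$ for females and $-0.411$ for males.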
###Code
def computeGFR(df):
gender = df['gender']
f_constant = 1
if gender == 'Male':
k = 0.9
a = -0.411
else:
k = 0.7
a = -0.329
f_constant = 1.018
race = df['race']
b_constant = 1
if race == 'Black':
b_constant = 1.159
gfr = 141 * min(df['creatinine'] / k, 1)**a * (max(df['creatinine'] / k, 1)**(-1.209)) * (0.993**df['age']) * f_constant * b_constant
return gfr
###Output
_____no_output_____
###Markdown
180-day bin
###Code
col_gfr = ['id', 'days_bin', 'creatinine', 'race', 'gender', 'age', 'Stage_Progress']
df_merged_4_gfr = df_merged_4[col_gfr].copy()
df_merged_4_gfr['gfr'] = df_merged_4_gfr.apply(lambda x: computeGFR(x), axis=1)
df_merged_4_gfr.drop(['creatinine', 'race', 'gender', 'age'], axis=1, inplace=True)
# Categorize GFR
df_merged_4_gfr['gfr_cat'] = np.where(df_merged_4_gfr['gfr'] < 60, 1, 2)
df_merged_4_gfr['gfr_cat'].value_counts()
df_merged_4_gfr.to_csv('CSV/df_merged_4_gfr.csv', index=False)
df_merged_4.head()
df_merged_4_gfr.head()
###Output
_____no_output_____
###Markdown
Aggregated
###Code
col_gfr = ['id', 'creatinine', 'race', 'gender', 'age', 'Stage_Progress']
df_agg_gfr = df_agg_fixed[col_gfr].copy()
df_agg_gfr['gfr'] = df_agg_gfr.apply(lambda x: computeGFR(x), axis=1)
df_agg_gfr.drop(['creatinine', 'race', 'gender', 'age'], axis=1, inplace=True)
# Categorize GFR
df_agg_gfr['gfr_cat'] = np.where(df_agg_gfr['gfr'] < 60, 1, 2)
df_agg_gfr['gfr_cat'].value_counts()
df_agg_gfr.to_csv('CSV/df_agg_gfr.csv', index=False)
###Output
_____no_output_____
###Markdown
Temporal
###Code
def computeGFR_temporal(df, i):
gender = df['gender']
f_constant = 1
if gender == 'Male':
k = 0.9
a = -0.411
else:
k = 0.7
a = -0.329
f_constant = 1.018
race = df['race']
b_constant = 1
if race == 'Black':
b_constant = 1.159
gfr = 141 * min(df['creatinine_' + str(i)] / k, 1)**a * (max(df['creatinine_' + str(i)] / k, 1)**(-1.209)) * (0.993**df['age']) * f_constant * b_constant
return gfr
col_gfr = ['id', 'creatinine_1', 'creatinine_2', 'creatinine_3', 'creatinine_4', 'race', 'gender', 'age', 'Stage_Progress']
df_temporal_gfr = df_temporal[col_gfr].copy()
for i in range(1, 5):
df_temporal_gfr['gfr_' + str(i)] = df_temporal_gfr.apply(lambda x: computeGFR_temporal(x, i), axis=1)
df_temporal_gfr.drop('creatinine_' + str(i), axis=1, inplace=True)
df_temporal_gfr.drop(['race', 'gender', 'age'], axis=1, inplace=True)
# Categorize GFR
for i in range(1, 5):
df_temporal_gfr['gfr_cat_' + str(i)] = np.where(df_temporal_gfr['gfr_' + str(i)] < 60, 1, 2)
df_temporal_gfr.to_csv('CSV/df_temporal_gfr.csv', index=False)
###Output
_____no_output_____ |
00_quickstart/generated_profiler_report/profiler-report.ipynb | ###Markdown
SageMaker Debugger Profiling ReportSageMaker Debugger auto-generated this report. You can generate similar reports on all supported training jobs. The report provides a summary of the training job, system resource usage statistics, framework metrics, a rules summary, and a detailed analysis from each rule. The graphs and tables are interactive. **Legal disclaimer:** This report and any recommendations are provided for informational purposes only and are not definitive. You are responsible for making your own independent assessment of the information.
###Code
import json
import pandas as pd
import glob
import matplotlib.pyplot as plt
import numpy as np
import datetime
from smdebug.profiler.utils import us_since_epoch_to_human_readable_time, ns_since_epoch_to_human_readable_time
import bokeh
from bokeh.io import output_notebook, show
from bokeh.layouts import column, row
from bokeh.plotting import figure
from bokeh.models.widgets import DataTable, DateFormatter, TableColumn
from bokeh.models import ColumnDataSource, PreText
from math import pi
from bokeh.transform import cumsum
import warnings
from bokeh.models.widgets import Paragraph
from bokeh.models import Legend
from bokeh.util.warnings import BokehDeprecationWarning, BokehUserWarning
warnings.simplefilter('ignore', BokehDeprecationWarning)
warnings.simplefilter('ignore', BokehUserWarning)
output_notebook(hide_banner=True)
def create_piechart(data_dict, title=None, height=400, width=400, x1=0, x2=0.1, radius=0.4, toolbar_location='right'):
plot = figure(plot_height=height,
plot_width=width,
toolbar_location=toolbar_location,
tools="hover,wheel_zoom,reset,pan",
tooltips="@phase:@value",
title=title,
x_range=(-radius-x1, radius+x2))
data = pd.Series(data_dict).reset_index(name='value').rename(columns={'index':'phase'})
data['angle'] = data['value']/data['value'].sum() * 2*pi
data['color'] = bokeh.palettes.viridis(len(data_dict))
plot.wedge(x=0, y=0., radius=radius,
start_angle=cumsum('angle', include_zero=True),
end_angle=cumsum('angle'),
line_color="white",
source=data,
fill_color='color',
legend='phase'
)
plot.legend.label_text_font_size = "8pt"
plot.legend.location = 'center_right'
plot.axis.axis_label=None
plot.axis.visible=False
plot.grid.grid_line_color = None
plot.outline_line_color = "white"
return plot
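# Example usage (illustrative):
# show(create_piechart({"TRAIN": 70, "EVAL": 20, "others": 10}, title="Phase breakdown"))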
from IPython.display import display, HTML, Markdown, Image
def pretty_print(df):
raw_html = df.to_html().replace("\\n","<br>").replace('<tr>','<tr style="text-align: left;">')
return display(HTML(raw_html))
###Output
_____no_output_____
###Markdown
Training job summary
###Code
def load_report(rule_name):
try:
report = json.load(open('/opt/ml/processing/output/rule/profiler-output/profiler-reports/'+rule_name+'.json'))
return report
except FileNotFoundError:
print (rule_name + ' not triggered')
job_statistics = {}
report = load_report('MaxInitializationTime')
if report:
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
first_step = report['Details']["step_num"]["first"]
last_step = report['Details']["step_num"]["last"]
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Start time"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_end'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["End time"] = f"{hour} {day}"
job_duration_in_seconds = int(report['Details']['job_end'] - report['Details']['job_start'])
job_statistics["Job duration"] = f"{job_duration_in_seconds} seconds"
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
tmp = us_since_epoch_to_human_readable_time(first_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop start"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(last_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop end"] = f"{hour} {day}"
training_loop_duration_in_seconds = int((last_step - first_step) / 1000000)
job_statistics["Training loop duration"] = f"{training_loop_duration_in_seconds} seconds"
initialization_in_seconds = int(first_step/1000000 - report['Details']['job_start'])
job_statistics["Initialization time"] = f"{initialization_in_seconds} seconds"
finalization_in_seconds = int(np.abs(report['Details']['job_end'] - last_step/1000000))
job_statistics["Finalization time"] = f"{finalization_in_seconds} seconds"
initialization_perc = int(initialization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Initialization"] = f"{initialization_perc} %"
training_loop_perc = int(training_loop_duration_in_seconds / job_duration_in_seconds * 100)
job_statistics["Training loop"] = f"{training_loop_perc} %"
finalization_perc = int(finalization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Finalization"] = f"{finalization_perc} %"
if report:
text = """The following table gives a summary about the training job. The table includes information about when the training job started and ended, how much time initialization, training loop and finalization took."""
if len(job_statistics) > 0:
df = pd.DataFrame.from_dict(job_statistics, orient='index')
start_time = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(start_time, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
duration = job_duration_in_seconds
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
#pretty_print(df)
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
if finalization_perc < 0:
job_statistics["Finalization%"] = 0
if training_loop_perc < 0:
job_statistics["Training loop"] = 0
if initialization_perc < 0:
job_statistics["Initialization"] = 0
else:
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
if len(job_statistics) > 0:
df2 = df.reset_index()
df2.columns = ["0", "1"]
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='0', title=""),
TableColumn(field='1', title="Job Statistics"),]
table = DataTable(source=source, columns=columns, width=450, height=380)
plot = None
if "Initialization" in job_statistics:
piechart_data = {}
piechart_data["Initialization"] = initialization_perc
piechart_data["Training loop"] = training_loop_perc
piechart_data["Finalization"] = finalization_perc
plot = create_piechart(piechart_data,
height=350,
width=500,
x1=0.15,
x2=0.15,
radius=0.15,
toolbar_location=None)
if plot != None:
paragraph = Paragraph(text=f"""{text}""", width = 800)
show(column(paragraph, row(table, plot)))
else:
paragraph = Paragraph(text=f"""{text} No step information was profiled from your training job. The time spent on initialization and finalization cannot be computed.""", width=800)
show(column(paragraph, row(table)))
###Output
_____no_output_____
###Markdown
System usage statistics
###Code
report = load_report('OverallSystemUsage')
text1 = ''
if report:
if "GPU" in report["Details"]:
for node_id in report["Details"]["GPU"]:
gpu_p95 = report["Details"]["GPU"][node_id]["p95"]
gpu_p50 = report["Details"]["GPU"][node_id]["p50"]
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
cpu_p50 = report["Details"]["CPU"][node_id]["p50"]
if gpu_p95 < 70 and cpu_p95 < 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
The 95th percentile of the total CPU utilization is only {int(cpu_p95)}%. Node {node_id} is underutilized.
You may want to consider switching to a smaller instance type."""
elif gpu_p95 < 70 and cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
However, the 95th percentile of the total CPU utilization is {int(cpu_p95)}%. GPUs on node {node_id} are underutilized
likely because of CPU bottlenecks."""
elif gpu_p50 > 70:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
GPUs on node {node_id} are well utilized."""
else:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
The median total CPU utilization is {int(cpu_p50)}%."""
else:
for node_id in report["Details"]["CPU"]:
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
if cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total CPU utilization on node {node_id} is {int(cpu_p95)}%. CPUs on node {node_id} are well utilized."""
text1 = Paragraph(text=f"""{text1}""", width=1100)
text2 = Paragraph(text=f"""The following table shows statistics of resource utilization per worker (node),
such as the total CPU and GPU utilization, and the memory utilization on CPU and GPU.
The table also includes the total I/O wait time and the total amount of data sent or received in bytes.
The table shows min and max values as well as p99, p90 and p50 percentiles.""", width=900)
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
units = {"CPU": "percentage", "CPU memory": "percentage", "GPU": "percentage", "Network": "bytes", "GPU memory": "percentage", "I/O": "percentage"}
if report:
for metric in report['Details']:
for node_id in report['Details'][metric]:
values = report['Details'][metric][node_id]
rows.append([node_id, metric, units[metric], values['max'], values['p99'], values['p95'], values['p50'], values['min']])
df = pd.DataFrame(rows)
df.columns = ['Node', 'metric', 'unit', 'max', 'p99', 'p95', 'p50', 'min']
df2 = df.reset_index()
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='Node', title="node"),
TableColumn(field='metric', title="metric"),
TableColumn(field='unit', title="unit"),
TableColumn(field='max', title="max"),
TableColumn(field='p99', title="p99"),
TableColumn(field='p95', title="p95"),
TableColumn(field='p50', title="p50"),
TableColumn(field='min', title="min"),]
table = DataTable(source=source, columns=columns, width=800, height=df2.shape[0]*30)
show(column( text1, text2, row(table)))
report = load_report('OverallFrameworkMetrics')
if report:
if 'Details' in report:
display(Markdown(f"""## Framework metrics summary"""))
plots = []
text = ''
if 'phase' in report['Details']:
text = f"""The following two pie charts show the time spent on the TRAIN phase, the EVAL phase,
and others. The 'others' includes the time spent between steps (after one step has finished and before
the next step has started). Ideally, most of the training time should be spent on the
TRAIN and EVAL phases. If TRAIN/EVAL were not specified in the training script, steps will be recorded as
GLOBAL."""
if 'others' in report['Details']['phase']:
others = float(report['Details']['phase']['others'])
if others > 25:
text = f"""{text} Your training job spent quite a significant amount of time ({round(others,2)}%) in phase "others".
You should check what is happening in between the steps."""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase and others")
plots.append(plot)
if 'forward_backward' in report['Details']:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the time was spent in event "{event}"."""
if perc > 70:
text = f"""There is quite a significant difference between the time spent on forward and backward
pass."""
else:
text = f"""{text} It shows that {int(perc)}% of the training time
was spent on "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text=''
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following piechart shows a breakdown of the CPU/GPU operators.
It shows that {int(ratio)}% of training time was spent on executing the "{key}" operator."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details']:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General framework operations")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text = ''
if 'horovod' in report['Details']:
display(Markdown(f"""#### Overview: Horovod metrics"""))
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""{text} The following pie chart shows a detailed breakdown of the Horovod metrics profiled
from your training job. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Horovod metrics ")
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'CPU_total' in report['Details']:
display(Markdown(f"""#### Overview: CPU operators"""))
event = max(report['Details']['CPU'], key=report['Details']['CPU'].get)
perc = report['Details']['CPU'][event]
for function in report['Details']['CPU']:
percentage = round(report['Details']['CPU'][function],2)
time = report['Details']['CPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="CPU operator"),]
table = DataTable(source=source, columns=columns, width=550, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that ran on the CPUs.
The most expensive operator on the CPUs was "{event}" with {int(perc)} %.""")
plot = create_piechart(report['Details']['CPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'GPU_total' in report['Details']:
display(Markdown(f"""#### Overview: GPU operators"""))
event = max(report['Details']['GPU'], key=report['Details']['GPU'].get)
perc = report['Details']['GPU'][event]
for function in report['Details']['GPU']:
percentage = round(report['Details']['GPU'][function],2)
time = report['Details']['GPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="GPU operator"),]
table = DataTable(source=source, columns=columns, width=450, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that your training job ran on GPU.
The most expensive operator on GPU was "{event}" with {int(perc)}%.""")
plot = create_piechart(report['Details']['GPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
###Output
_____no_output_____
###Markdown
Rules summary
###Code
description = {}
description['CPUBottleneck'] = 'Checks if the CPU utilization is high and the GPU utilization is low. \
It might indicate CPU bottlenecks, where the GPUs are waiting for data to arrive \
from the CPUs. The rule evaluates the CPU and GPU utilization rates, and triggers the issue \
if the time spent on the CPU bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['IOBottleneck'] = 'Checks if the data I/O wait time is high and the GPU utilization is low. \
It might indicate IO bottlenecks where GPU is waiting for data to arrive from storage. \
The rule evaluates the I/O and GPU utilization rates and triggers the issue \
if the time spent on the IO bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['Dataloader'] = 'Checks how many data loaders are running in parallel and whether the total number is equal to the number \
of available CPU cores. The rule triggers if the number is much smaller or larger than the number of available cores. \
If too small, it might lead to low GPU utilization. If too large, it might impact other compute intensive operations on CPU.'
description['GPUMemoryIncrease'] = 'Measures the average GPU memory footprint and triggers if there is a large increase.'
description['BatchSize'] = 'Checks if GPUs are underutilized because the batch size is too small. \
To detect this problem, the rule analyzes the average GPU memory footprint, \
the CPU and the GPU utilization. '
description['LowGPUUtilization'] = 'Checks if the GPU utilization is low or fluctuating. \
This can happen due to bottlenecks, blocking calls for synchronizations, \
or a small batch size.'
description['MaxInitializationTime'] = 'Checks if the time spent on initialization exceeds a threshold percent of the total training time. \
The rule waits until the first step of training loop starts. The initialization can take longer \
if downloading the entire dataset from Amazon S3 in File mode. The default threshold is 20 minutes.'
description['LoadBalancing'] = 'Detects workload balancing issues across GPUs. \
Workload imbalance can occur in training jobs with data parallelism. \
The gradients are accumulated on a primary GPU, and this GPU might be overused \
relative to other GPUs, reducing the efficiency of data parallelization.'
description['StepOutlier'] = 'Detects outliers in step duration. The step duration for forward and backward pass should be \
roughly the same throughout the training. If there are significant outliers, \
it may indicate a system stall or bottleneck issues.'
recommendation = {}
recommendation['CPUBottleneck'] = 'Consider increasing the number of data loaders \
or applying data pre-fetching.'
recommendation['IOBottleneck'] = 'Pre-fetch data or choose different file formats, such as binary formats that \
improve I/O performance.'
recommendation['Dataloader'] = 'Change the number of data loader processes.'
recommendation['GPUMemoryIncrease'] = 'Choose a larger instance type with more memory if footprint is close to maximum available memory.'
recommendation['BatchSize'] = 'The batch size is too small, and GPUs are underutilized. Consider running on a smaller instance type or increasing the batch size.'
recommendation['LowGPUUtilization'] = 'Check if there are bottlenecks, minimize blocking calls, \
change distributed training strategy, or increase the batch size.'
recommendation['MaxInitializationTime'] = 'Initialization takes too long. \
If using File mode, consider switching to Pipe mode in case you are using TensorFlow framework.'
recommendation['LoadBalancing'] = 'Choose a different distributed training strategy or \
a different distributed training framework.'
recommendation['StepOutlier'] = 'Check if there are any bottlenecks (CPU, I/O) correlated to the step outliers.'
files = glob.glob('/opt/ml/processing/output/rule/profiler-output/profiler-reports/*json')
summary = {}
for i in files:
rule_name = i.split('/')[-1].replace('.json','')
if rule_name == "OverallSystemUsage" or rule_name == "OverallFrameworkMetrics":
continue
rule_report = json.load(open(i))
summary[rule_name] = {}
summary[rule_name]['Description'] = description[rule_name]
summary[rule_name]['Recommendation'] = recommendation[rule_name]
summary[rule_name]['Number of times rule triggered'] = rule_report['RuleTriggered']
#summary[rule_name]['Number of violations'] = rule_report['Violations']
summary[rule_name]['Number of datapoints'] = rule_report['Datapoints']
summary[rule_name]['Rule parameters'] = rule_report['RuleParameters']
df = pd.DataFrame.from_dict(summary, orient='index')
df = df.sort_values(by=['Number of times rule triggered'], ascending=False)
display(Markdown(f"""The following table shows a profiling summary of the Debugger built-in rules.
The table is sorted by the rules that triggered the most frequently. During your training job, the {df.index[0]} rule
was the most frequently triggered. It processed {df.values[0,3]} datapoints and was triggered {df.values[0,2]} times."""))
with pd.option_context('display.colheader_justify','left'):
pretty_print(df)
analyse_phase = "training"
if job_statistics and "initialization_in_seconds" in job_statistics:
if job_statistics["initialization_in_seconds"] > job_statistics["training_loop_duration_in_seconds"]:
analyse_phase = "initialization"
time = job_statistics["initialization_in_seconds"]
perc = job_statistics["initialization_%"]
display(Markdown(f"""The initialization phase took {int(time)} seconds, which is {int(perc)}%*
of the total training time. Since the training loop has taken the most time,
we dive deep into the events occurring during this phase"""))
display(Markdown("""## Analyzing initialization\n\n"""))
time = job_statistics["training_loop_duration_in_seconds"]
perc = job_statistics["training_loop_%"]
display(Markdown(f"""The training loop lasted for {int(time)} seconds which is {int(perc)}% of the training job time.
Since the training loop has taken the most time, we dive deep into the events occured during this phase."""))
if analyse_phase == 'training':
display(Markdown("""## Analyzing the training loop\n\n"""))
if analyse_phase == "initialization":
display(Markdown("""### MaxInitializationTime\n\nThis rule helps to detect if the training initialization is taking too much time. \nThe rule waits until first step is available. The rule takes the parameter `threshold` that defines how many minutes to wait for the first step to become available. Default is 20 minutes.\nYou can run the rule locally in the following way:
"""))
_ = load_report("MaxInitializationTime")
if analyse_phase == "training":
display(Markdown("""### Step duration analysis"""))
report = load_report('StepOutlier')
if report:
parameters = report['RuleParameters']
params = report['RuleParameters'].split('\n')
stddev = params[3].split(':')[1]
mode = params[1].split(':')[1]
n_outlier = params[2].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = f"""The StepOutlier rule measures step durations and checks for outliers. The rule
returns True if duration is larger than {stddev} times the standard deviation. The rule
also takes the parameter mode, that specifies whether steps from training or validation phase
should be checked. In your processing job mode was specified as {mode}.
Typically the first step is taking significantly more time and to avoid the
rule triggering immediately, one can use n_outliers to specify the number of outliers to ignore.
n_outliers was set to {n_outlier}.
The rule analysed {datapoints} datapoints and triggered {triggered} times.
"""
paragraph = Paragraph(text=text, width=900)
show(column(paragraph))
if report and len(report['Details']['step_details']) > 0:
for node_id in report['Details']['step_details']:
tmp = report['RuleParameters'].split('threshold:')
threshold = tmp[1].split('\n')[0]
n_outliers = report['Details']['step_details'][node_id]['number_of_outliers']
mean = report['Details']['step_details'][node_id]['step_stats']['mean']
stddev = report['Details']['step_details'][node_id]['stddev']
phase = report['Details']['step_details'][node_id]['phase']
display(Markdown(f"""**Step durations on node {node_id}:**"""))
display(Markdown(f"""The following table is a summary of the statistics of step durations measured on node {node_id}.
The rule has analyzed the step duration from {phase} phase.
The average step duration on node {node_id} was {round(mean, 2)}s.
The rule detected {n_outliers} outliers, where step duration was larger than {threshold} times the standard deviation of {stddev}s
\n"""))
step_stats_df = pd.DataFrame.from_dict(report['Details']['step_details'][node_id]['step_stats'], orient='index').T
step_stats_df.index = ['Step Durations in [s]']
pretty_print(step_stats_df)
display(Markdown(f"""The following histogram shows the step durations measured on the different nodes.
You can turn on or turn off the visualization of histograms by selecting or unselecting the labels in the legend."""))
plot = figure(plot_height=450,
plot_width=850,
title=f"""Step durations""")
colors = bokeh.palettes.viridis(len(report['Details']['step_details']))
for index, node_id in enumerate(report['Details']['step_details']):
probs = report['Details']['step_details'][node_id]['probs']
binedges = report['Details']['step_details'][node_id]['binedges']
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[index],
fill_alpha=0.7,
legend=node_id)
plot.add_layout(Legend(), 'right')
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Step durations in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
plot.legend.location = 'center_right'
show(plot)
if report['RuleTriggered'] > 0:
text=f"""To get a better understanding of what may have caused those outliers,
we correlate the timestamps of step outliers with other framework metrics that happened at the same time.
The left chart shows how much time was spent in the different framework
metrics aggregated by event phase. The chart on the right shows the histogram of normal step durations (without
outliers). The following chart shows how much time was spent in the different
framework metrics when step outliers occurred. In this chart, framework metrics are not aggregated by phase."""
plots = []
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether step outliers mainly happened during TRAIN or EVAL phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The Ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators executed during the step outliers.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when step outliers happened. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU utilization analysis\n\n"""))
display(Markdown("""**Usage per GPU** \n\n"""))
report = load_report('LowGPUUtilization')
if report:
params = report['RuleParameters'].split('\n')
threshold_p95 = params[0].split(':')[1]
threshold_p5 = params[1].split(':')[1]
window = params[2].split(':')[1]
patience = params[3].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The LowGPUUtilization rule checks for a low and fluctuating GPU usage. If the GPU usage is
consistently low, it might be caused by bottlenecks or a small batch size. If usage is heavily
fluctuating, it can be due to bottlenecks or blocking calls. The rule computed the 95th and 5th
percentile of GPU utilization on {window} continuous datapoints and found {violations} cases where
p95 was above {threshold_p95}% and p5 was below {threshold_p5}%. If p95 is high and p5 is low,
it might indicate that the GPU usage is highly fluctuating. If both values are very low,
it would mean that the machine is underutilized. During initialization, the GPU usage is likely zero,
so the rule skipped the first {patience} data points.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""", width=800)
show(text)
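# Hedged sketch of the windowed percentile check described above (illustration
# only, not the rule's source); the window and threshold defaults are assumptions.
def _count_fluctuation_cases(gpu_util, window=500, threshold_p95=70, threshold_p5=10):
    """Count windows where p95 is above threshold_p95 and p5 is below threshold_p5."""
    cases = 0
    for start in range(0, len(gpu_util) - window + 1, window):
        chunk = np.asarray(gpu_util[start:start + window], dtype=float)
        p95, p5 = np.percentile(chunk, 95), np.percentile(chunk, 5)
        if p95 > threshold_p95 and p5 < threshold_p5:  # highly fluctuating usage
            cases += 1
    return cases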
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
to either switch to a smaller instance type or to increase the batch size.
The last time that the LowGPUUtilization rule was triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps.
They show the utilization per GPU (without outliers).
To get a better understanding of the workloads throughout the whole training,
you can check the workload histogram in the next section.""", width=800)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**GPU utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
p_95 = report['Details'][node_id][key]['gpu_95']
p_5 = report['Details'][node_id][key]['gpu_5']
text = f"""{text} The max utilization of {key} on node {node_id} was {gpu_max}%"""
if p_95 < int(threshold_p95):
text = f"""{text} and the 95th percentile was only {p_95}%.
{key} on node {node_id} is underutilized"""
if p_5 < int(threshold_p5):
text = f"""{text} and the 5th percentile was only {p_5}%"""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that utilization on {key} is fluctuating quite a lot.\n"""
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
text=Paragraph(text=f"""{text}""", width=900)
show(text)
plot.yaxis.axis_label = "Utilization in %"
plot.xaxis.ticker = np.arange(index+2)
show(plot)
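# The boxplots above are drawn from precomputed quantiles shipped in the rule
# report. As an assumption-labeled sketch, the whisker values could be derived
# from raw utilization samples like this (standard 1.5*IQR convention):
def _box_stats(values):
    values = np.asarray(values, dtype=float)
    p25, p50, p75 = np.percentile(values, [25, 50, 75])
    iqr = p75 - p25
    lower = max(values.min(), p25 - 1.5 * iqr)
    upper = min(values.max(), p75 + 1.5 * iqr)
    return {"lower": lower, "p25": p25, "p50": p50, "p75": p75, "upper": upper}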
if analyse_phase == "training":
display(Markdown("""**Workload balancing**\n\n"""))
report = load_report('LoadBalancing')
if report:
params = report['RuleParameters'].split('\n')
threshold = params[0].split(':')[1]
patience = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
paragraph = Paragraph(text=f"""The LoadBalancing rule helps to detect issues in workload balancing
between multiple GPUs.
It computes a histogram of GPU utilization values for each GPU and then compares the
similarity between histograms. The rule checked if the distance of histograms is larger than the
threshold of {threshold}.
During initialization utilization is likely zero, so the rule skipped the first {patience} data points.
""", width=900)
show(paragraph)
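# Hedged sketch of the histogram comparison described above: a simple L1
# distance between normalized per-GPU utilization histograms. The binning and
# distance metric are assumptions, not necessarily what the rule uses.
def _workload_distance(util_gpu_a, util_gpu_b, bins=np.arange(0, 102, 2)):
    hist_a, _ = np.histogram(util_gpu_a, bins=bins, density=True)
    hist_b, _ = np.histogram(util_gpu_b, bins=bins, density=True)
    return float(np.abs(hist_a - hist_b).sum())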
if len(report['Details']) > 0:
for node_id in report['Details']:
text = f"""The following histogram shows the workload per GPU on node {node_id}.
You can enable/disable the visualization of a workload by clicking on the label in the legend.
"""
if len(report['Details']) == 1 and len(report['Details'][node_id]['workloads']) == 1:
text = f"""{text} Your training job only used one GPU so there is no workload balancing issue."""
plot = figure(plot_height=450,
plot_width=850,
x_range=(-1,100),
title=f"""Workloads on node {node_id}""")
colors = bokeh.palettes.viridis(len(report['Details'][node_id]['workloads']))
for index, gpu_id2 in enumerate(report['Details'][node_id]['workloads']):
probs = report['Details'][node_id]['workloads'][gpu_id2]
plot.quad( top=probs,
bottom=0,
left=np.arange(0,98,2),
right=np.arange(2,100,2),
line_color="white",
fill_color=colors[index],
fill_alpha=0.8,
legend=gpu_id2 )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Utilization"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=text)
show(column(paragraph, plot))
if "distances" in report['Details'][node_id]:
text = f"""The rule identified workload balancing issues on node {node_id}
where workloads differed by more than the threshold of {threshold}.
"""
for index, gpu_id2 in enumerate(report['Details'][node_id]['distances']):
for gpu_id1 in report['Details'][node_id]['distances'][gpu_id2]:
distance = round(report['Details'][node_id]['distances'][gpu_id2][gpu_id1], 2)
text = f"""{text} The difference of workload between {gpu_id2} and {gpu_id1} is: {distance}."""
paragraph = Paragraph(text=f"""{text}""", width=900)
show(column(paragraph))
if analyse_phase == "training":
display(Markdown("""### Dataloading analysis\n\n"""))
report = load_report('Dataloader')
if report:
params = report['RuleParameters'].split("\n")
min_threshold = params[0].split(':')[1]
max_threshold = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=f"""The number of dataloader workers can greatly affect the overall performance
of your training job. The rule analyzed the number of dataloading processes that have been running in
parallel on the training instance and compares it against the total number of cores.
The rule checked if the number of processes is smaller than {min_threshold}% or larger than
{max_threshold}% the total number of cores. Having too few dataloader workers can slowdown data preprocessing and lead to GPU
underutilization. Having too many dataloader workers may hurt the
overall performance if you are running other compute intensive tasks on the CPU.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
text = ""
if 'cores' in report['Details']:
cores = int(report['Details']['cores'])
dataloaders = report['Details']['dataloaders']
if dataloaders < cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job only
ran on average {dataloaders} dataloader workers in parallel. We recommend you to increase the number of
dataloader workers."""
if dataloaders > cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job ran
on average {dataloaders} dataloader workers. We recommed you to decrease the number of dataloader
workers."""
if 'pin_memory' in report['Details'] and report['Details']['pin_memory'] == False:
text=f"""{text} Using pinned memory also improves performance because it enables fast data transfer to CUDA-enabled GPUs.
The rule detected that your training job was not using pinned memory.
If you are using the PyTorch DataLoader, you can enable this by setting pin_memory=True."""
if 'prefetch' in report['Details'] and report['Details']['prefetch'] == False:
text=f"""{text} It appears that your training job did not perform any data pre-fetching. Pre-fetching can improve your
data input pipeline as it produces the data ahead of time."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
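# Example of setting the dataloader options mentioned above on a PyTorch
# DataLoader (sketch; `dataset` and the worker count are placeholders). The
# import is deferred so this cell does not require torch at report time.
def _make_loader(dataset, batch_size=64, num_workers=4):
    from torch.utils.data import DataLoader
    # pin_memory speeds up host-to-GPU copies; prefetch_factor controls how many
    # batches each worker prepares ahead of time (requires num_workers > 0).
    return DataLoader(dataset, batch_size=batch_size, num_workers=num_workers,
                      pin_memory=True, prefetch_factor=2)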
colors=bokeh.palettes.viridis(10)
if "dataloading_time" in report['Details']:
median = round(report['Details']["dataloading_time"]['p50'],4)
p95 = round(report['Details']["dataloading_time"]['p95'],4)
p25 = round(report['Details']["dataloading_time"]['p25'],4)
binedges = report['Details']["dataloading_time"]['binedges']
probs = report['Details']["dataloading_time"]['probs']
text=f"""The following histogram shows the distribution of dataloading times that have been measured throughout your training job. The median dataloading time was {median}s.
The 95th percentile was {p95}s and the 25th percentile was {p25}s."""
plot = figure(plot_height=450,
plot_width=850,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
x_range=(binedges[0], binedges[-1])
)
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[0],
fill_alpha=0.8,
legend="Dataloading events" )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Dataloading in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=f"{text}", width=900)
show(column(paragraph, plot))
if analyse_phase == "training":
display(Markdown(""" ### Batch size"""))
report = load_report('BatchSize')
if report:
params = report['RuleParameters'].split('\n')
cpu_threshold_p95 = int(params[0].split(':')[1])
gpu_threshold_p95 = int(params[1].split(':')[1])
gpu_memory_threshold_p95 = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
window = int(params[4].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = Paragraph(text=f"""The BatchSize rule helps to detect if GPU is underutilized because of the batch size being
too small. To detect this the rule analyzes the GPU memory footprint, CPU and GPU utilization. The rule checked if the 95th percentile of CPU utilization is below cpu_threshold_p95 of
{cpu_threshold_p95}%, the 95th percentile of GPU utilization is below gpu_threshold_p95 of {gpu_threshold_p95}% and the 95th percentile of memory footprint \
below gpu_memory_threshold_p95 of {gpu_memory_threshold_p95}%. In your training job this happened {violations} times. \
The rule skipped the first {patience} datapoints. The rule computed the percentiles over window size of {window} continuous datapoints.\n
The rule analysed {datapoints} datapoints and triggered {triggered} times.
""", width=800)
show(text)
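# Hedged sketch of the combined check described above: the rule fires for a
# window when all three 95th percentiles are below their thresholds
# (illustrative structure; not the BatchSize rule's source).
def _batch_size_window_fires(cpu_util, gpu_util, gpu_mem, cpu_t=70, gpu_t=70, mem_t=70):
    return (np.percentile(cpu_util, 95) < cpu_t
            and np.percentile(gpu_util, 95) < gpu_t
            and np.percentile(gpu_mem, 95) < mem_t)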
if len(report['Details']) >0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
del report['Details']['last_timestamp']
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switch to a smaller instance type or to increase the batch size.
The last time the BatchSize rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They the total
CPU utilization, the GPU utilization, and the GPU memory usage per GPU (without outliers).""",
width=800)
show(text)
for node_id in report['Details']:
xmax = max(20, len(report['Details'][node_id]))
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,xmax)
)
for index, key in enumerate(report['Details'][node_id]):
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
if analyse_phase == "training":
display(Markdown("""### CPU bottlenecks\n\n"""))
report = load_report('CPUBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
cpu_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The CPUBottleneck rule checked when the CPU utilization was above cpu_threshold of {cpu_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%.
During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} CPU bottlenecks, which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} data points and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
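# Sketch of the datapoint-level check described above (assumed structure): a
# bottleneck is a datapoint where the busy metric (here CPU utilization) exceeds
# its threshold while GPU utilization is below gpu_threshold, after skipping
# the first `patience` datapoints.
def _count_bottlenecks(busy_metric, gpu_util, busy_threshold=90, gpu_threshold=10, patience=1000):
    busy = np.asarray(busy_metric[patience:], dtype=float)
    gpu = np.asarray(gpu_util[patience:], dtype=float)
    return int(np.sum((busy > busy_threshold) & (gpu < gpu_threshold)))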
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"])
cpu_bottleneck["Low GPU usage due to CPU bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by a CPU bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by CPU bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by CPU bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether CPU bottlenecks mainly
happened during train/validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between time spent on TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie charts on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event {event}"""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened during CPU bottlenecks.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics
that have been recorded when the CPU bottleneck happened. The most expensive function was
"{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### I/O bottlenecks\n\n"""))
report = load_report('IOBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
io_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The IOBottleneck rule checked when I/O wait time was above io_threshold of {io_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%. During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} I/O bottlenecks, which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
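# The I/O check mirrors the CPU check sketched earlier: the hypothetical
# _count_bottlenecks helper can be reused with I/O wait time as the busy metric.
# io_bottlenecks = _count_bottlenecks(io_wait, gpu_util,
#                                     busy_threshold=io_threshold,
#                                     gpu_threshold=gpu_threshold,
#                                     patience=patience)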
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"])
cpu_bottleneck["Low GPU usage due to I/O bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by an I/O bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by I/O bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by I/O bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether I/O bottlenecks mainly happened during trianing or validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie charts on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened
during I/O bottlenecks. It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when I/O bottleneck happened. The most expensive function was {event} with {int(perc)}%"""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU memory\n\n"""))
report = load_report('GPUMemoryIncrease')
if report:
params = report['RuleParameters'].split('\n')
increase = float(params[0].split(':')[1])
patience = params[1].split(':')[1]
window = params[2].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The GPUMemoryIncrease rule helps to detect large increase in memory usage on GPUs.
The rule checked if the moving average of memory increased by more than {increase}%.
So if the moving average increased for instance from 10% to {11+increase}%,
the rule would have triggered. During initialization utilization is likely 0, so the rule skipped the first {patience} datapoints.
The moving average was computed on a window size of {window} continuous datapoints. The rule detected {violations} violations
where the moving average between previous and current time window increased by more than {increase}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""",
width=900)
show(text)
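# Hedged sketch of the moving-average comparison described above (assumed
# structure, not the rule's source): compare each window's mean against the
# previous window's mean and count increases larger than `increase` percent.
def _memory_increase_violations(mem_util, window=10, increase=5.0, patience=100):
    mem = np.asarray(mem_util[patience:], dtype=float)
    means = np.convolve(mem, np.ones(window) / window, mode='valid')
    deltas = means[window:] - means[:-window]
    return int(np.sum(deltas > increase))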
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job triggered memory spikes.
The last time the GPUMemoryIncrease rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show for each node and GPU the corresponding
memory utilization (without outliers).""", width=900)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**Memory utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
text = f"""{text} The max memory utilization of {key} on node {node_id} was {gpu_max}%."""
p_95 = int(report['Details'][node_id][key]['p95'])
p_5 = report['Details'][node_id][key]['p05']
if p_95 < 50:
text = f"""{text} The 95th percentile was only {p_95}%."""
if p_5 < 5:
text = f"""{text} The 5th percentile was only {p_5}%."""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that memory utilization on {key} is fluctuating quite a lot."""
text = Paragraph(text=f"""{text}""", width=900)
show(text)
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
###Output
_____no_output_____
###Markdown
SageMaker Debugger Profiling Report SageMaker Debugger auto-generated this report. You can generate similar reports on all supported training jobs. The report provides a summary of the training job, system resource usage statistics, framework metrics, a rules summary, and a detailed analysis from each rule. The graphs and tables are interactive. **Legal disclaimer:** This report and any recommendations are provided for informational purposes only and are not definitive. You are responsible for making your own independent assessment of the information.
###Code
import json
import pandas as pd
import glob
import matplotlib.pyplot as plt
import numpy as np
import datetime
from smdebug.profiler.utils import us_since_epoch_to_human_readable_time, ns_since_epoch_to_human_readable_time
import bokeh
from bokeh.io import output_notebook, show
from bokeh.layouts import column, row
from bokeh.plotting import figure
from bokeh.models.widgets import DataTable, DateFormatter, TableColumn
from bokeh.models import ColumnDataSource, PreText
from math import pi
from bokeh.transform import cumsum
import warnings
from bokeh.models.widgets import Paragraph
from bokeh.models import Legend
from bokeh.util.warnings import BokehDeprecationWarning, BokehUserWarning
warnings.simplefilter('ignore', BokehDeprecationWarning)
warnings.simplefilter('ignore', BokehUserWarning)
output_notebook(hide_banner=True)
def create_piechart(data_dict, title=None, height=400, width=400, x1=0, x2=0.1, radius=0.4, toolbar_location='right'):
plot = figure(plot_height=height,
plot_width=width,
toolbar_location=toolbar_location,
tools="hover,wheel_zoom,reset,pan",
tooltips="@phase:@value",
title=title,
x_range=(-radius-x1, radius+x2))
data = pd.Series(data_dict).reset_index(name='value').rename(columns={'index':'phase'})
data['angle'] = data['value']/data['value'].sum() * 2*pi
data['color'] = bokeh.palettes.viridis(len(data_dict))
plot.wedge(x=0, y=0., radius=radius,
start_angle=cumsum('angle', include_zero=True),
end_angle=cumsum('angle'),
line_color="white",
source=data,
fill_color='color',
legend='phase'
)
plot.legend.label_text_font_size = "8pt"
plot.legend.location = 'center_right'
plot.axis.axis_label=None
plot.axis.visible=False
plot.grid.grid_line_color = None
plot.outline_line_color = "white"
return plot
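# Example usage of create_piechart defined above (the values are placeholders):
# show(create_piechart({"TRAIN": 70, "EVAL": 20, "others": 10},
#                      title="The ratio between the TRAIN/EVAL phase and others"))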
from IPython.display import display, HTML, Markdown, Image
def pretty_print(df):
raw_html = df.to_html().replace("\\n","<br>").replace('<tr>','<tr style="text-align: left;">')
return display(HTML(raw_html))
###Output
_____no_output_____
###Markdown
Training job summary
###Code
def load_report(rule_name):
try:
report = json.load(open('/opt/ml/processing/output/rule/profiler-output/profiler-reports/'+rule_name+'.json'))
return report
except FileNotFoundError:
print(rule_name + ' not triggered')
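# Example: load_report('StepOutlier') returns the parsed JSON dict for that
# rule, or None (after printing a notice) when the report file is missing.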
job_statistics = {}
report = load_report('MaxInitializationTime')
if report:
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
first_step = report['Details']["step_num"]["first"]
last_step = report['Details']["step_num"]["last"]
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Start time"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_end'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["End time"] = f"{hour} {day}"
job_duration_in_seconds = int(report['Details']['job_end'] - report['Details']['job_start'])
job_statistics["Job duration"] = f"{job_duration_in_seconds} seconds"
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
tmp = us_since_epoch_to_human_readable_time(first_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop start"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(last_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop end"] = f"{hour} {day}"
training_loop_duration_in_seconds = int((last_step - first_step) / 1000000)
job_statistics["Training loop duration"] = f"{training_loop_duration_in_seconds} seconds"
initialization_in_seconds = int(first_step/1000000 - report['Details']['job_start'])
job_statistics["Initialization time"] = f"{initialization_in_seconds} seconds"
finalization_in_seconds = int(np.abs(report['Details']['job_end'] - last_step/1000000))
job_statistics["Finalization time"] = f"{finalization_in_seconds} seconds"
initialization_perc = int(initialization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Initialization"] = f"{initialization_perc} %"
training_loop_perc = int(training_loop_duration_in_seconds / job_duration_in_seconds * 100)
job_statistics["Training loop"] = f"{training_loop_perc} %"
finalization_perc = int(finalization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Finalization"] = f"{finalization_perc} %"
if report:
text = """The following table gives a summary about the training job. The table includes information about when the training job started and ended, how much time initialization, training loop and finalization took."""
if len(job_statistics) > 0:
df = pd.DataFrame.from_dict(job_statistics, orient='index')
start_time = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(start_time, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
duration = job_duration_in_seconds
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
#pretty_print(df)
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
if finalization_perc < 0:
job_statistics["Finalization"] = 0
if training_loop_perc < 0:
job_statistics["Training loop"] = 0
if initialization_perc < 0:
job_statistics["Initialization"] = 0
else:
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
if len(job_statistics) > 0:
df2 = df.reset_index()
df2.columns = ["0", "1"]
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='0', title=""),
TableColumn(field='1', title="Job Statistics"),]
table = DataTable(source=source, columns=columns, width=450, height=380)
plot = None
if "Initialization" in job_statistics:
piechart_data = {}
piechart_data["Initialization"] = initialization_perc
piechart_data["Training loop"] = training_loop_perc
piechart_data["Finalization"] = finalization_perc
plot = create_piechart(piechart_data,
height=350,
width=500,
x1=0.15,
x2=0.15,
radius=0.15,
toolbar_location=None)
if plot != None:
paragraph = Paragraph(text=f"""{text}""", width = 800)
show(column(paragraph, row(table, plot)))
else:
paragraph = Paragraph(text=f"""{text}. No step information was profiled from your training job. The time spent on initialization and finalization cannot be computed.""" , width = 800)
show(column(paragraph, row(table)))
###Output
_____no_output_____
###Markdown
System usage statistics
###Code
report = load_report('OverallSystemUsage')
text1 = ''
if report:
if "GPU" in report["Details"]:
for node_id in report["Details"]["GPU"]:
gpu_p95 = report["Details"]["GPU"][node_id]["p95"]
gpu_p50 = report["Details"]["GPU"][node_id]["p50"]
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
cpu_p50 = report["Details"]["CPU"][node_id]["p50"]
if gpu_p95 < 70 and cpu_p95 < 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
The 95th percentile of the total CPU utilization is only {int(cpu_p95)}%. Node {node_id} is underutilized.
You may want to consider switching to a smaller instance type."""
elif gpu_p95 < 70 and cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
However, the 95th percentile of the total CPU utilization is {int(cpu_p95)}%. GPUs on node {node_id} are underutilized,
likely because of CPU bottlenecks."""
elif gpu_p50 > 70:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
GPUs on node {node_id} are well utilized."""
else:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
The median total CPU utilization is {int(cpu_p50)}%."""
else:
for node_id in report["Details"]["CPU"]:
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
if cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total CPU utilization on node {node_id} is {int**(cpu_p95)}%. GPUs on node {node_id} are well utilized"""
text1 = Paragraph(text=f"""{text1}""", width=1100)
text2 = Paragraph(text=f"""The following table shows statistics of resource utilization per worker (node),
such as the total CPU and GPU utilization, and the memory utilization on CPU and GPU.
The table also includes the total I/O wait time and the total amount of data sent or received in bytes.
The table shows min and max values as well as p99, p90 and p50 percentiles.""", width=900)
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
units = {"CPU": "percentage", "CPU memory": "percentage", "GPU": "percentage", "Network": "bytes", "GPU memory": "percentage", "I/O": "percentage"}
if report:
for metric in report['Details']:
for node_id in report['Details'][metric]:
values = report['Details'][metric][node_id]
rows.append([node_id, metric, units[metric], values['max'], values['p99'], values['p95'], values['p50'], values['min']])
df = pd.DataFrame(rows)
df.columns = ['Node', 'metric', 'unit', 'max', 'p99', 'p95', 'p50', 'min']
df2 = df.reset_index()
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='Node', title="node"),
TableColumn(field='metric', title="metric"),
TableColumn(field='unit', title="unit"),
TableColumn(field='max', title="max"),
TableColumn(field='p99', title="p99"),
TableColumn(field='p95', title="p95"),
TableColumn(field='p50', title="p50"),
TableColumn(field='min', title="min"),]
table = DataTable(source=source, columns=columns, width=800, height=df2.shape[0]*30)
show(column( text1, text2, row(table)))
report = load_report('OverallFrameworkMetrics')
if report:
if 'Details' in report:
display(Markdown(f"""## Framework metrics summary"""))
plots = []
text = ''
if 'phase' in report['Details']:
text = f"""The following two pie charts show the time spent on the TRAIN phase, the EVAL phase,
and others. The 'others' includes the time spent between steps (after one step has finished and before
the next step has started). Ideally, most of the training time should be spent on the
TRAIN and EVAL phases. If TRAIN/EVAL were not specified in the training script, steps will be recorded as
GLOBAL."""
if 'others' in report['Details']['phase']:
others = float(report['Details']['phase']['others'])
if others > 25:
text = f"""{text} Your training job spent quite a significant amount of time ({round(others,2)}%) in phase "others".
You should check what is happening in between the steps."""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase and others")
plots.append(plot)
if 'forward_backward' in report['Details']:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the time was spent in event "{event}"."""
if perc > 70:
text = f"""There is quite a significant difference between the time spent on forward and backward
pass."""
else:
text = f"""{text} It shows that {int(perc)}% of the training time
was spent on "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text=''
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following piechart shows a breakdown of the CPU/GPU operators.
It shows that {int(ratio)}% of training time was spent on executing the "{key}" operator."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details']:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General framework operations")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text = ''
if 'horovod' in report['Details']:
display(Markdown(f"""#### Overview: Horovod metrics"""))
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""{text} The following pie chart shows a detailed breakdown of the Horovod metrics profiled
from your training job. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Horovod metrics ")
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'CPU_total' in report['Details']:
display(Markdown(f"""#### Overview: CPU operators"""))
event = max(report['Details']['CPU'], key=report['Details']['CPU'].get)
perc = report['Details']['CPU'][event]
for function in report['Details']['CPU']:
percentage = round(report['Details']['CPU'][function],2)
time = report['Details']['CPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="CPU operator"),]
table = DataTable(source=source, columns=columns, width=550, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that ran on the CPUs.
The most expensive operator on the CPUs was "{event}" with {int(perc)}%.""")
plot = create_piechart(report['Details']['CPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'GPU_total' in report['Details']:
display(Markdown(f"""#### Overview: GPU operators"""))
event = max(report['Details']['GPU'], key=report['Details']['GPU'].get)
perc = report['Details']['GPU'][event]
for function in report['Details']['GPU']:
percentage = round(report['Details']['GPU'][function],2)
time = report['Details']['GPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="GPU operator"),]
table = DataTable(source=source, columns=columns, width=450, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that your training job ran on GPU.
The most expensive operator on GPU was "{event}" with {int(perc)} %""")
plot = create_piechart(report['Details']['GPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
###Output
_____no_output_____
###Markdown
Rules summary
###Code
description = {}
description['CPUBottleneck'] = 'Checks if the CPU utilization is high and the GPU utilization is low. \
It might indicate CPU bottlenecks, where the GPUs are waiting for data to arrive \
from the CPUs. The rule evaluates the CPU and GPU utilization rates, and triggers the issue \
if the time spent on the CPU bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['IOBottleneck'] = 'Checks if the data I/O wait time is high and the GPU utilization is low. \
It might indicate IO bottlenecks where GPU is waiting for data to arrive from storage. \
The rule evaluates the I/O and GPU utilization rates and triggers the issue \
if the time spent on the IO bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['Dataloader'] = 'Checks how many data loaders are running in parallel and whether the total number is equal to the number \
of available CPU cores. The rule triggers if the number is much smaller or larger than the number of available cores. \
If too small, it might lead to low GPU utilization. If too large, it might impact other compute intensive operations on CPU.'
description['GPUMemoryIncrease'] = 'Measures the average GPU memory footprint and triggers if there is a large increase.'
description['BatchSize'] = 'Checks if GPUs are underutilized because the batch size is too small. \
To detect this problem, the rule analyzes the average GPU memory footprint, \
the CPU and the GPU utilization. '
description['LowGPUUtilization'] = 'Checks if the GPU utilization is low or fluctuating. \
This can happen due to bottlenecks, blocking calls for synchronizations, \
or a small batch size.'
description['MaxInitializationTime'] = 'Checks if the time spent on initialization exceeds a threshold percent of the total training time. \
The rule waits until the first step of training loop starts. The initialization can take longer \
if downloading the entire dataset from Amazon S3 in File mode. The default threshold is 20 minutes.'
description['LoadBalancing'] = 'Detects workload balancing issues across GPUs. \
Workload imbalance can occur in training jobs with data parallelism. \
The gradients are accumulated on a primary GPU, and this GPU might be overused \
relative to other GPUs, reducing the efficiency of data parallelization.'
description['StepOutlier'] = 'Detects outliers in step duration. The step duration for forward and backward pass should be \
roughly the same throughout the training. If there are significant outliers, \
it may indicate a system stall or bottleneck issues.'
recommendation = {}
recommendation['CPUBottleneck'] = 'Consider increasing the number of data loaders \
or applying data pre-fetching.'
recommendation['IOBottleneck'] = 'Pre-fetch data or choose different file formats, such as binary formats that \
improve I/O performance.'
recommendation['Dataloader'] = 'Change the number of data loader processes.'
recommendation['GPUMemoryIncrease'] = 'Choose a larger instance type with more memory if footprint is close to maximum available memory.'
recommendation['BatchSize'] = 'The batch size is too small, and GPUs are underutilized. Consider running on a smaller instance type or increasing the batch size.'
recommendation['LowGPUUtilization'] = 'Check if there are bottlenecks, minimize blocking calls, \
change distributed training strategy, or increase the batch size.'
recommendation['MaxInitializationTime'] = 'Initialization takes too long. \
If using File mode, consider switching to Pipe mode in case you are using TensorFlow framework.'
recommendation['LoadBalancing'] = 'Choose a different distributed training strategy or \
a different distributed training framework.'
recommendation['StepOutlier'] = 'Check if there are any bottlenecks (CPU, I/O) correlated to the step outliers.'
files = glob.glob('/opt/ml/processing/output/rule/profiler-output/profiler-reports/*json')
summary = {}
for i in files:
rule_name = i.split('/')[-1].replace('.json','')
if rule_name == "OverallSystemUsage" or rule_name == "OverallFrameworkMetrics":
continue
rule_report = json.load(open(i))
summary[rule_name] = {}
summary[rule_name]['Description'] = description[rule_name]
summary[rule_name]['Recommendation'] = recommendation[rule_name]
summary[rule_name]['Number of times rule triggered'] = rule_report['RuleTriggered']
#summary[rule_name]['Number of violations'] = rule_report['Violations']
summary[rule_name]['Number of datapoints'] = rule_report['Datapoints']
summary[rule_name]['Rule parameters'] = rule_report['RuleParameters']
df = pd.DataFrame.from_dict(summary, orient='index')
df = df.sort_values(by=['Number of times rule triggered'], ascending=False)
display(Markdown(f"""The following table shows a profiling summary of the Debugger built-in rules.
The table is sorted by the rules that triggered the most frequently. During your training job, the {df.index[0]} rule
was the most frequently triggered. It processed {df.values[0,3]} datapoints and was triggered {df.values[0,2]} times."""))
with pd.option_context('display.colheader_justify','left'):
pretty_print(df)
analyse_phase = "training"
if job_statistics and "initialization_in_seconds" in job_statistics:
if job_statistics["initialization_in_seconds"] > job_statistics["training_loop_duration_in_seconds"]:
analyse_phase = "initialization"
time = job_statistics["initialization_in_seconds"]
perc = job_statistics["initialization_%"]
display(Markdown(f"""The initialization phase took {int(time)} seconds, which is {int(perc)}%*
of the total training time. Since the training loop has taken the most time,
we dive deep into the events occurring during this phase"""))
display(Markdown("""## Analyzing initialization\n\n"""))
time = job_statistics["training_loop_duration_in_seconds"]
perc = job_statistics["training_loop_%"]
display(Markdown(f"""The training loop lasted for {int(time)} seconds which is {int(perc)}% of the training job time.
Since the training loop has taken the most time, we dive deep into the events occured during this phase."""))
if analyse_phase == 'training':
display(Markdown("""## Analyzing the training loop\n\n"""))
if analyse_phase == "initialization":
display(Markdown("""### MaxInitializationTime\n\nThis rule helps to detect if the training initialization is taking too much time. \nThe rule waits until first step is available. The rule takes the parameter `threshold` that defines how many minutes to wait for the first step to become available. Default is 20 minutes.\nYou can run the rule locally in the following way:
"""))
_ = load_report("MaxInitializationTime")
if analyse_phase == "training":
display(Markdown("""### Step duration analysis"""))
report = load_report('StepOutlier')
if report:
parameters = report['RuleParameters']
params = report['RuleParameters'].split('\n')
stddev = params[3].split(':')[1]
mode = params[1].split(':')[1]
n_outlier = params[2].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = f"""The StepOutlier rule measures step durations and checks for outliers. The rule
returns True if a duration is larger than {stddev} times the standard deviation. The rule
also takes the parameter mode, which specifies whether steps from the training or validation phase
should be checked. In your processing job, mode was specified as {mode}.
Typically the first step takes significantly more time; to avoid the
rule triggering immediately, you can use n_outliers to specify the number of outliers to ignore.
n_outliers was set to {n_outlier}.
The rule analysed {datapoints} datapoints and triggered {triggered} times.
"""
paragraph = Paragraph(text=text, width=900)
show(column(paragraph))
if report and len(report['Details']['step_details']) > 0:
for node_id in report['Details']['step_details']:
tmp = report['RuleParameters'].split('threshold:')
threshold = tmp[1].split('\n')[0]
n_outliers = report['Details']['step_details'][node_id]['number_of_outliers']
mean = report['Details']['step_details'][node_id]['step_stats']['mean']
stddev = report['Details']['step_details'][node_id]['stddev']
phase = report['Details']['step_details'][node_id]['phase']
display(Markdown(f"""**Step durations on node {node_id}:**"""))
display(Markdown(f"""The following table is a summary of the statistics of step durations measured on node {node_id}.
The rule has analyzed the step duration from {phase} phase.
The average step duration on node {node_id} was {round(mean, 2)}s.
The rule detected {n_outliers} outliers, where the step duration was larger than {threshold} times the standard deviation of {stddev}s.
\n"""))
step_stats_df = pd.DataFrame.from_dict(report['Details']['step_details'][node_id]['step_stats'], orient='index').T
step_stats_df.index = ['Step Durations in [s]']
pretty_print(step_stats_df)
display(Markdown(f"""The following histogram shows the step durations measured on the different nodes.
You can turn on or turn off the visualization of histograms by selecting or unselecting the labels in the legend."""))
plot = figure(plot_height=450,
plot_width=850,
title=f"""Step durations""")
colors = bokeh.palettes.viridis(len(report['Details']['step_details']))
for index, node_id in enumerate(report['Details']['step_details']):
probs = report['Details']['step_details'][node_id]['probs']
binedges = report['Details']['step_details'][node_id]['binedges']
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[index],
fill_alpha=0.7,
legend=node_id)
plot.add_layout(Legend(), 'right')
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Step durations in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
plot.legend.location = 'center_right'
show(plot)
if report['RuleTriggered'] > 0:
text=f"""To get a better understanding of what may have caused those outliers,
we correlate the timestamps of step outliers with other framework metrics that happened at the same time.
The left chart shows how much time was spent in the different framework
metrics aggregated by event phase. The chart on the right shows the histogram of normal step durations (without
outliers). The following chart shows how much time was spent in the different
framework metrics when step outliers occurred. In this chart, framework metrics are not aggregated by phase."""
plots = []
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether step outliers mainly happened during TRAIN or EVAL phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The Ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators executed during the step outliers.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when step outliers happened. The most expensive function was {event} with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU utilization analysis\n\n"""))
display(Markdown("""**Usage per GPU** \n\n"""))
report = load_report('LowGPUUtilization')
if report:
params = report['RuleParameters'].split('\n')
threshold_p95 = params[0].split(':')[1]
threshold_p5 = params[1].split(':')[1]
window = params[2].split(':')[1]
patience = params[3].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The LowGPUUtilization rule checks for a low and fluctuating GPU usage. If the GPU usage is
consistently low, it might be caused by bottlenecks or a small batch size. If usage is heavily
fluctuating, it can be due to bottlenecks or blocking calls. The rule computed the 95th and 5th
percentile of GPU utilization on {window} continuous datapoints and found {violations} cases where
p95 was above {threshold_p95}% and p5 was below {threshold_p5}%. If p95 is high and p5 is low,
it might indicate that the GPU usage is highly fluctuating. If both values are very low,
it would mean that the machine is underutilized. During initialization, the GPU usage is likely zero,
so the rule skipped the first {patience} data points.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""", width=800)
show(text)
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switching to a smaller instance type or increasing the batch size.
The last time that the LowGPUUtilization rule was triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps.
They show the utilization per GPU (without outliers).
To get a better understanding of the workloads throughout the whole training,
you can check the workload histogram in the next section.""", width=800)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**GPU utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
p_95 = report['Details'][node_id][key]['gpu_95']
p_5 = report['Details'][node_id][key]['gpu_5']
text = f"""{text} The max utilization of {key} on node {node_id} was {gpu_max}%"""
if p_95 < int(threshold_p95):
text = f"""{text} and the 95th percentile was only {p_95}%.
{key} on node {node_id} is underutilized"""
if p_5 < int(threshold_p5):
text = f"""{text} and the 5th percentile was only {p_5}%"""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that utilization on {key} is fluctuating quite a lot.\n"""
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
text=Paragraph(text=f"""{text}""", width=900)
show(text)
plot.yaxis.axis_label = "Utilization in %"
plot.xaxis.ticker = np.arange(index+2)
show(plot)
if analyse_phase == "training":
display(Markdown("""**Workload balancing**\n\n"""))
report = load_report('LoadBalancing')
if report:
params = report['RuleParameters'].split('\n')
threshold = params[0].split(':')[1]
patience = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
paragraph = Paragraph(text=f"""The LoadBalancing rule helps to detect issues in workload balancing
between multiple GPUs.
It computes a histogram of GPU utilization values for each GPU and then compares the
similarity between histograms. The rule checked if the distance of histograms is larger than the
threshold of {threshold}.
During initialization utilization is likely zero, so the rule skipped the first {patience} data points.
""", width=900)
show(paragraph)
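        # Rough sketch of the comparison described above (hypothetical; the
        # actual distance metric used by the rule is not shown in this report):
        #   h0, _ = np.histogram(gpu0_util, bins=10, range=(0, 100))
        #   h1, _ = np.histogram(gpu1_util, bins=10, range=(0, 100))
        #   imbalanced = np.abs(h0 - h1).sum() > threshold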
if len(report['Details']) > 0:
for node_id in report['Details']:
text = f"""The following histogram shows the workload per GPU on node {node_id}.
You can enable/disable the visualization of a workload by clicking on the label in the legend.
"""
if len(report['Details']) == 1 and len(report['Details'][node_id]['workloads']) == 1:
text = f"""{text} Your training job only used one GPU so there is no workload balancing issue."""
plot = figure(plot_height=450,
plot_width=850,
x_range=(-1,100),
title=f"""Workloads on node {node_id}""")
colors = bokeh.palettes.viridis(len(report['Details'][node_id]['workloads']))
for index, gpu_id2 in enumerate(report['Details'][node_id]['workloads']):
probs = report['Details'][node_id]['workloads'][gpu_id2]
plot.quad( top=probs,
bottom=0,
left=np.arange(0,98,2),
right=np.arange(2,100,2),
line_color="white",
fill_color=colors[index],
fill_alpha=0.8,
legend=gpu_id2 )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Utilization"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=text)
show(column(paragraph, plot))
if "distances" in report['Details'][node_id]:
text = f"""The rule identified workload balancing issues on node {node_id}
where workloads differed by more than threshold {threshold}.
"""
for index, gpu_id2 in enumerate(report['Details'][node_id]['distances']):
for gpu_id1 in report['Details'][node_id]['distances'][gpu_id2]:
distance = round(report['Details'][node_id]['distances'][gpu_id2][gpu_id1], 2)
text = f"""{text} The difference of workload between {gpu_id2} and {gpu_id1} is: {distance}."""
paragraph = Paragraph(text=f"""{text}""", width=900)
show(column(paragraph))
if analyse_phase == "training":
display(Markdown("""### Dataloading analysis\n\n"""))
report = load_report('Dataloader')
if report:
params = report['RuleParameters'].split("\n")
min_threshold = params[0].split(':')[1]
max_threshold = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=f"""The number of dataloader workers can greatly affect the overall performance
of your training job. The rule analyzed the number of dataloading processes that have been running in
parallel on the training instance and compares it against the total number of cores.
The rule checked if the number of processes is smaller than {min_threshold}% or larger than
{max_threshold}% of the total number of cores. Having too few dataloader workers can slow down data preprocessing and lead to GPU
underutilization. Having too many dataloader workers may hurt the
overall performance if you are running other compute intensive tasks on the CPU.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
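        # Illustration only (hypothetical numbers): the check described above is
        #   ok = (cores * min_threshold / 100) <= dataloaders <= (cores * max_threshold / 100)
        # e.g. with 32 cores, min_threshold=70 and max_threshold=200,
        # anywhere between 23 and 64 parallel dataloader workers would pass.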
text = ""
if 'cores' in report['Details']:
cores = int(report['Details']['cores'])
dataloaders = report['Details']['dataloaders']
if dataloaders < cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job only
ran on average {dataloaders} dataloader workers in parallel. We recommend increasing the number of
dataloader workers."""
if dataloaders > cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job ran
on average {dataloaders} dataloader workers. We recommend decreasing the number of dataloader
workers."""
if 'pin_memory' in report['Details'] and report['Details']['pin_memory'] == False:
text=f"""{text} Using pinned memory also improves performance because it enables fast data transfer to CUDA-enabled GPUs.
The rule detected that your training job was not using pinned memory.
In case of using PyTorch Dataloader, you can enable this by setting pin_memory=True."""
if 'prefetch' in report['Details'] and report['Details']['prefetch'] == False:
text=f"""{text} It appears that your training job did not perform any data pre-fetching. Pre-fetching can improve your
data input pipeline as it produces the data ahead of time."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
colors=bokeh.palettes.viridis(10)
if "dataloading_time" in report['Details']:
median = round(report['Details']["dataloading_time"]['p50'],4)
p95 = round(report['Details']["dataloading_time"]['p95'],4)
p25 = round(report['Details']["dataloading_time"]['p25'],4)
binedges = report['Details']["dataloading_time"]['binedges']
probs = report['Details']["dataloading_time"]['probs']
text=f"""The following histogram shows the distribution of dataloading times that have been measured throughout your training job. The median dataloading time was {median}s.
The 95th percentile was {p95}s and the 25th percentile was {p25}s."""
plot = figure(plot_height=450,
plot_width=850,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
x_range=(binedges[0], binedges[-1])
)
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[0],
fill_alpha=0.8,
legend="Dataloading events" )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Dataloading in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=f"{text}", width=900)
show(column(paragraph, plot))
if analyse_phase == "training":
display(Markdown(""" ### Batch size"""))
report = load_report('BatchSize')
if report:
params = report['RuleParameters'].split('\n')
cpu_threshold_p95 = int(params[0].split(':')[1])
gpu_threshold_p95 = int(params[1].split(':')[1])
gpu_memory_threshold_p95 = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
window = int(params[4].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = Paragraph(text=f"""The BatchSize rule helps to detect if GPU is underutilized because of the batch size being
too small. To detect this, the rule analyzes the GPU memory footprint, CPU and GPU utilization. The rule checked if the 95th percentile of CPU utilization is below cpu_threshold_p95 of
{cpu_threshold_p95}%, the 95th percentile of GPU utilization is below gpu_threshold_p95 of {gpu_threshold_p95}%, and the 95th percentile of memory footprint is \
below gpu_memory_threshold_p95 of {gpu_memory_threshold_p95}%. In your training job this happened {violations} times. \
The rule skipped the first {patience} datapoints. The rule computed the percentiles over window size of {window} continuous datapoints.\n
The rule analysed {datapoints} datapoints and triggered {triggered} times.
""", width=800)
show(text)
if len(report['Details']) >0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
del report['Details']['last_timestamp']
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switching to a smaller instance type or increasing the batch size.
The last time the BatchSize rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show the total
CPU utilization, the GPU utilization, and the GPU memory usage per GPU (without outliers).""",
width=800)
show(text)
for node_id in report['Details']:
xmax = max(20, len(report['Details'][node_id]))
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,xmax)
)
for index, key in enumerate(report['Details'][node_id]):
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
if analyse_phase == "training":
display(Markdown("""### CPU bottlenecks\n\n"""))
report = load_report('CPUBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
cpu_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The CPUBottleneck rule checked when the CPU utilization was above cpu_threshold of {cpu_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%.
During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} CPU bottlenecks, which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} data points and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
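    # Rough sketch of the bottleneck criterion described above (hypothetical
    # numpy arrays cpu_util/gpu_util of utilization datapoints):
    #   bottleneck = (cpu_util > cpu_threshold) & (gpu_util < gpu_threshold)
    #   perc = bottleneck.sum() / len(cpu_util) * 100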
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"])
cpu_bottleneck["Low GPU usage due to CPU bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by a CPU bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by CPU bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by CPU bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether CPU bottlenecks mainly
happened during train/validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between time spent on TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened during CPU bottlenecks.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics
that have been recorded when the CPU bottleneck happened. The most expensive function was
{event} with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### I/O bottlenecks\n\n"""))
report = load_report('IOBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
io_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The IOBottleneck rule checked when I/O wait time was above io_threshold of {io_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%. During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} I/O bottlenecks which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"])
cpu_bottleneck["Low GPU usage due to I/O bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by an I/O bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by I/O bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by I/O bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether I/O bottlenecks mainly happened during the training or validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened
during I/O bottlenecks. It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when the I/O bottleneck happened. The most expensive function was {event} with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU memory\n\n"""))
report = load_report('GPUMemoryIncrease')
if report:
params = report['RuleParameters'].split('\n')
increase = float(params[0].split(':')[1])
patience = params[1].split(':')[1]
window = params[2].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The GPUMemoryIncrease rule helps to detect large increase in memory usage on GPUs.
The rule checked if the moving average of memory increased by more than {increase}%.
So if, for instance, the moving average increased from 10% to {11+increase}%,
the rule would have triggered. During initialization utilization is likely 0, so the rule skipped the first {patience} datapoints.
The moving average was computed on a window size of {window} continuous datapoints. The rule detected {violations} violations
where the moving average between previous and current time window increased by more than {increase}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""",
width=900)
show(text)
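        # Illustration only (hypothetical): with a window of w datapoints, the
        # rule compares moving averages of memory usage between windows, e.g.
        #   ma = np.convolve(mem_util, np.ones(w) / w, mode='valid')
        #   violation = (ma[w:] - ma[:-w]) > increase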
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job triggered memory spikes.
The last time the GPUMemoryIncrease rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show for each node and GPU the corresponding
memory utilization (without outliers).""", width=900)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**Memory utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
text = f"""{text} The max memory utilization of {key} on node {node_id} was {gpu_max}%."""
p_95 = int(report['Details'][node_id][key]['p95'])
p_5 = report['Details'][node_id][key]['p05']
if p_95 < int(50):
text = f"""{text} The 95th percentile was only {p_95}%."""
if p_5 < int(5):
text = f"""{text} The 5th percentile was only {p_5}%."""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that memory utilization on {key} is fluctuating quite a lot."""
text = Paragraph(text=f"""{text}""", width=900)
show(text)
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
###Output
_____no_output_____ |
Undergrad/CS-370-T1045/Week 5 /TreasureHuntGame/Bailey_Samuel_ProjectTwoMilestone.ipynb | ###Markdown
Treasure Hunt Game Notebook Read and Review Your Starter Code The theme of this project is a popular treasure hunt game in which the player needs to find the treasure before the pirate does. While you will not be developing the entire game, you will write the part of the game that represents the intelligent agent, which is a pirate in this case. The pirate will try to find the optimal path to the treasure using deep Q-learning. You have been provided with two Python classes and this notebook to help you with this assignment. The first class, TreasureMaze.py, represents the environment, which includes a maze object defined as a matrix. The second class, GameExperience.py, stores the episodes, that is, all the states that come in between the initial state and the terminal state. This is later used by the agent for learning by experience, called "exploration". This notebook shows how to play a game. Your task is to complete the deep Q-learning implementation for which a skeleton implementation has been provided. The code blocks you will need to complete have TODO as a header. First, read and review the next few code and instruction blocks to understand the code that you have been given.
###Code
from __future__ import print_function
import os, sys, time, datetime, json, random
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD , Adam, RMSprop
from keras.layers.advanced_activations import PReLU
import matplotlib.pyplot as plt
from TreasureMaze import TreasureMaze
from GameExperience import GameExperience
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
The following code block contains an 8x8 matrix that will be used as a maze object:
###Code
maze = np.array([
[ 1., 0., 1., 1., 1., 1., 1., 1.],
[ 1., 0., 1., 1., 1., 0., 1., 1.],
[ 1., 1., 1., 1., 0., 1., 0., 1.],
[ 1., 1., 1., 0., 1., 1., 1., 1.],
[ 1., 1., 0., 1., 1., 1., 1., 1.],
[ 1., 1., 1., 0., 1., 0., 0., 0.],
[ 1., 1., 1., 0., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 0., 1., 1., 1.]
])
###Output
_____no_output_____
###Markdown
This helper function allows a visual representation of the maze object:
###Code
def show(qmaze):
plt.grid('on')
nrows, ncols = qmaze.maze.shape
ax = plt.gca()
ax.set_xticks(np.arange(0.5, nrows, 1))
ax.set_yticks(np.arange(0.5, ncols, 1))
ax.set_xticklabels([])
ax.set_yticklabels([])
canvas = np.copy(qmaze.maze)
for row,col in qmaze.visited:
canvas[row,col] = 0.6
pirate_row, pirate_col, _ = qmaze.state
canvas[pirate_row, pirate_col] = 0.3 # pirate cell
canvas[nrows-1, ncols-1] = 0.9 # treasure cell
img = plt.imshow(canvas, interpolation='none', cmap='gray')
return img
###Output
_____no_output_____
###Markdown
The pirate agent can move in four directions: left, right, up, and down. While the agent primarily learns by experience through exploitation, it can often choose to explore the environment to find previously undiscovered paths. This is called "exploration" and is defined by epsilon. This value is typically low, such as 0.1, which means that for every ten attempts, the agent will attempt to learn by experience nine times and will randomly explore a new path one time. You are encouraged to try various values for the exploration factor and see how the algorithm performs.
###Code
LEFT = 0
UP = 1
RIGHT = 2
DOWN = 3
# Exploration factor
epsilon = 0.1
# Actions dictionary
actions_dict = {
LEFT: 'left',
UP: 'up',
RIGHT: 'right',
DOWN: 'down',
}
num_actions = len(actions_dict)
###Output
_____no_output_____
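###Markdown
As a quick illustration of the epsilon-greedy idea described above, here is a minimal sketch (the `choose_action` helper and the Q-values passed to it are hypothetical and not part of the starter code):
###Code
def choose_action(q_values, valid_actions, eps=epsilon):
    # Explore: pick a random action with probability eps
    if np.random.rand() < eps:
        return random.choice(valid_actions)
    # Exploit: otherwise pick the action with the highest Q-value
    return int(np.argmax(q_values))
# Example call with made-up Q-values for LEFT, UP, RIGHT, DOWN
print(choose_action(np.array([0.1, 0.5, 0.2, 0.0]), list(actions_dict.keys())))
###Output
_____no_output_____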
###Markdown
The sample code block and output below show creating a maze object and performing one action (DOWN), which returns the reward. The resulting updated environment is visualized.
###Code
qmaze = TreasureMaze(maze)
canvas, reward, game_over = qmaze.act(DOWN)
print("reward=", reward)
show(qmaze)
###Output
reward= -0.04
###Markdown
This function simulates a full game based on the provided trained model. The other parameters include the TreasureMaze object and the starting position of the pirate.
###Code
def play_game(model, qmaze, pirate_cell):
qmaze.reset(pirate_cell)
envstate = qmaze.observe()
while True:
prev_envstate = envstate
# get next action
q = model.predict(prev_envstate)
action = np.argmax(q[0])
# apply action, get rewards and new state
envstate, reward, game_status = qmaze.act(action)
if game_status == 'win':
return True
elif game_status == 'lose':
return False
###Output
_____no_output_____
###Markdown
This function helps you to determine whether the pirate can win any game at all. If your maze is not well designed, the pirate may not win any game at all. In this case, your training would not yield any result. The provided maze in this notebook ensures that there is a path to win and you can run this method to check.
###Code
def completion_check(model, qmaze):
for cell in qmaze.free_cells:
if not qmaze.valid_actions(cell):
return False
if not play_game(model, qmaze, cell):
return False
return True
###Output
_____no_output_____
###Markdown
The code you have been given in this block will build the neural network model. Review the code and note the number of layers, as well as the activation, optimizer, and loss functions that are used to train the model.
###Code
def build_model(maze):
model = Sequential()
model.add(Dense(maze.size, input_shape=(maze.size,)))
model.add(PReLU())
model.add(Dense(maze.size))
model.add(PReLU())
model.add(Dense(num_actions))
model.compile(optimizer='adam', loss='mse')
return model
###Output
_____no_output_____
###Markdown
TODO: Complete the Q-Training Algorithm Code Block This is your deep Q-learning implementation. The goal of your deep Q-learning implementation is to find the best possible navigation sequence that results in reaching the treasure cell while maximizing the reward. In your implementation, you need to determine the optimal number of epochs to achieve a 100% win rate. You will need to complete the section starting with pseudocode. The pseudocode has been included for you.
###Code
def qtrain(model, maze, **opt):
    # exploration factor
    global epsilon
    # number of epochs
    n_epoch = opt.get('n_epoch', 15000)
    # maximum memory to store episodes
    max_memory = opt.get('max_memory', 1000)
    # maximum data size for training
    data_size = opt.get('data_size', 50)
    # start time
    start_time = datetime.datetime.now()
    # Construct environment/game from numpy array: maze (see above)
    qmaze = TreasureMaze(maze)
    # Initialize experience replay object
    experience = GameExperience(model, max_memory=max_memory)
    win_history = [] # history of win/lose game
    hsize = qmaze.maze.size//2 # history window size
    win_rate = 0.0
    # Training code: one epoch is one full game played from a random start cell
    for epoch in range(n_epoch):
        loss = 0.0
        agent_cell = random.choice(qmaze.free_cells)
        qmaze.reset(agent_cell)
        game_over = False
        # Observe the initial environment state (flattened maze canvas)
        envstate = qmaze.observe()
        n_episodes = 0
        while not game_over:
            prev_envstate = envstate
            # Epsilon-greedy policy: explore a random action with probability
            # epsilon, otherwise exploit the action with the highest Q-value
            if np.random.rand() < epsilon:
                action = random.choice(list(actions_dict.keys()))
            else:
                q = model.predict(prev_envstate)
                action = np.argmax(q[0])
            # Apply action, get reward and new environment state
            envstate, reward, game_status = qmaze.act(action)
            if game_status == 'win':
                win_history.append(1)
                game_over = True
            elif game_status == 'lose':
                win_history.append(0)
                game_over = True
            # Store the episode in the experience replay object as
            # (state, action, reward, next state, game-over flag)
            episode = [prev_envstate, action, reward, envstate, game_over]
            experience.remember(episode)
            n_episodes += 1
            # Call GameExperience.get_data to retrieve training data (input and
            # target); the data_size keyword is assumed to be supported here
            inputs, targets = experience.get_data(data_size=data_size)
            # Pass to model.fit method to train the model
            model.fit(inputs, targets, verbose=0)
            # Evaluate loss with model.evaluate
            loss = model.evaluate(inputs, targets, verbose=0)
        # Win rate over the most recent hsize games
        if len(win_history) > hsize:
            win_rate = sum(win_history[-hsize:]) / hsize
        # Print the epoch, loss, episodes, win count, and win rate for each epoch
        dt = datetime.datetime.now() - start_time
        t = format_time(dt.total_seconds())
        template = "Epoch: {:03d}/{:d} | Loss: {:.4f} | Episodes: {:d} | Win count: {:d} | Win rate: {:.3f} | time: {}"
        print(template.format(epoch, n_epoch-1, loss, n_episodes, sum(win_history), win_rate, t))
        # Once the win rate is high enough, reduce exploration and check whether
        # training has exhausted all free cells and the agent won in all cases
        if win_rate > 0.9: epsilon = 0.05
        if sum(win_history[-hsize:]) == hsize and completion_check(model, qmaze):
            print("Reached 100%% win rate at epoch: %d" % (epoch,))
            break
    # Determine the total time for training
    dt = datetime.datetime.now() - start_time
    seconds = dt.total_seconds()
    t = format_time(seconds)
    print("n_epoch: %d, max_mem: %d, data: %d, time: %s" % (epoch, max_memory, data_size, t))
    return seconds
# This is a small utility for printing readable time strings:
def format_time(seconds):
if seconds < 400:
s = float(seconds)
return "%.1f seconds" % (s,)
elif seconds < 4000:
m = seconds / 60.0
return "%.2f minutes" % (m,)
else:
h = seconds / 3600.0
return "%.2f hours" % (h,)
###Output
_____no_output_____
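###Markdown
As a quick worked sketch, this is the Q-learning target that the experience replay buffer is assumed to build for each remembered episode (the discount factor and the values below are hypothetical; GameExperience's internals are not shown here):
###Code
gamma = 0.95  # assumed discount factor
reward, game_over = -0.04, False
q_next = np.array([0.1, 0.7, 0.3, 0.2])  # hypothetical Q(s', .) predicted by the model
# target(s, a) = reward if the game ended, else reward + gamma * max_a' Q(s', a')
target = reward if game_over else reward + gamma * q_next.max()
print(target)
###Output
_____no_output_____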
###Markdown
Test Your Model Now we will start testing the deep Q-learning implementation. To begin, select **Cell**, then **Run All** from the menu bar. This will run your notebook. As it runs, you should see output begin to appear beneath the next few cells. The code below creates an instance of TreasureMaze.
###Code
qmaze = TreasureMaze(maze)
show(qmaze)
###Output
_____no_output_____
###Markdown
In the next code block, you will build your model and train it using deep Q-learning. Note: This step takes several minutes to fully run.
###Code
model = build_model(maze)
qtrain(model, maze, n_epoch=1000, max_memory=8*maze.size, data_size=32)
###Output
_____no_output_____
###Markdown
This cell will check to see if the model passes the completion check. Note: This could take several minutes.
###Code
completion_check(model, qmaze)
show(qmaze)
###Output
_____no_output_____
###Markdown
This cell will test your model for one game. It will start the pirate at the top-left corner and run play_game. The agent should find a path from the starting position to the target (treasure). The treasure is located in the bottom-right corner.
###Code
pirate_start = (0, 0)
play_game(model, qmaze, pirate_start)
show(qmaze)
###Output
_____no_output_____ |
notebooks/rolling_updates.ipynb | ###Markdown
Rolling Update TestsCheck rolling updates function as expected.
###Code
import json
import time
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator
###Code
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_sep.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs
###Code
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models
###Code
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2models.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but create a new deployment.
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5)
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Rolling Update Tests Check rolling updates function as expected.
###Code
import json
import time
###Output
_____no_output_____
###Markdown
Before we get started, we'd like to make sure that we're making all the changes in a new, blank namespace named `seldon`
###Code
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image We'll want to try modifying an image and seeing how the rolling update applies the changes. We'll first create the following model:
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
###Output
_____no_output_____
###Markdown
Now we can run that model and wait until it's released
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can modify the model by providing a new image name, using the following config file:
###Code
%%writefile resources/fixed_v2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2.yaml
###Output
_____no_output_____
###Markdown
Now let's actually send a couple of requests to make sure that there are no failed requests as the rolling update is performed
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Change Replicas (no rolling update) We'll want to try modifying the number of replicas, for which no rolling update is needed. We'll first create the following model:
###Code
%%writefile resources/fixed_v1_rep2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 2
###Output
_____no_output_____
###Markdown
Now we can run that model and wait until it's released
###Code
!kubectl apply -f resources/fixed_v1_rep2.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can modify the model by increasing the number of replicas, using the following config file:
###Code
%%writefile resources/fixed_v1_rep4.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 4
!kubectl apply -f resources/fixed_v1_rep4.yaml
###Output
_____no_output_____
###Markdown
Now let's actually send a couple of requests to make sure that there are no failed requests as the number of replicas is changed
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 4:
break
time.sleep(1)
print("Rollout Success")
###Output
_____no_output_____
###Markdown
Now downsize back to 2
###Code
!kubectl apply -f resources/fixed_v1_rep2.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 2:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_rep2.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator We can test that the rolling update works when we use the annotation that allows us to have the service orchestrator on a separate pod, namely `seldon.io/engine-separate-pod: "true"`, as per the config file below. Note that in this case both the service orchestrator and model pod will be recreated.
###Code
%%writefile resources/fixed_v1_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image we will be updating to.
###Code
%%writefile resources/fixed_v2_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_sep.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs We can test that the rolling update works when we have multiple podSpecs in our deployment, and that it only performs a rolling update on the first pod (which also contains the service orchestrator)
###Code
%%writefile resources/fixed_v1_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
###Output
_____no_output_____
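###Markdown
To see which pods get replaced during the update (a rough sketch that relies on the AGE column rather than a strict check, and assumes the `seldon` test namespace contains only these pods), we can list the pods while the rollout proceeds:
###Code
!kubectl get pods
###Output
_____no_output_____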
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models We can test that the rolling update works when we have two models in our deployment.
###Code
%%writefile resources/fixed_v1_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
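###Markdown
Both models are declared in a single componentSpec, so each of the 3 replicas should run both model containers (plus the injected service orchestrator) in one pod. The READY column makes this visible (a sketch; it simply lists the pods in the test namespace):
###Code
!kubectl get pods
###Output
_____no_output_____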
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Two Predictors We can test that the rolling update works when we have two predictors in our deployment.
###Code
%%writefile resources/fixed_v1_2predictors.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: a
replicas: 3
traffic: 50
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: b
replicas: 1
traffic: 50
!kubectl apply -f resources/fixed_v1_2predictors.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2predictors.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: a
replicas: 3
traffic: 50
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: b
replicas: 1
traffic: 50
!kubectl apply -f resources/fixed_v2_2predictors.yaml
###Output
_____no_output_____
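###Markdown
With `traffic: 50` on each predictor, roughly half of the responses should come from each one. Assuming, as the assertions below suggest, that the two image versions return the fixed values 1 and 5, a rough tally over a batch of requests approximates the split (a sketch; exact proportions will vary, and while the update is still in flight predictor `a` serves a mix of versions):
###Code
import json
from collections import Counter

counts = Counter()
for _ in range(20):
    raw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
    try:
        # Tally the fixed value each predictor returns
        counts[json.loads(raw[0])["data"]["ndarray"][0]] += 1
    except:
        print("Failed to parse json", raw)
print(counts)  # expect a roughly even split between the two values
###Output
_____no_output_____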
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2predictors.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but create a new deployment.
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
We can wait until the pod is available.
###Code
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
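###Markdown
To make the "new deployment" behaviour visible, it helps to record the generated deployment name before applying the change (a sketch; the name is generated by the operator from the deployment, predictor and graph names):
###Code
old_name=!kubectl get deploy -l seldon-deployment-id=fixed -o jsonpath='{.items[0].metadata.name}'
print(old_name[0])
###Output
_____no_output_____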
###Markdown
Now when we apply the update, we should see the change taking place, but there should not be an actual full rolling update triggered.
###Code
%%writefile resources/fixed_v2_new_name.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5)
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Rolling Update Tests Check that rolling updates function as expected.
###Code
import json
import time
###Output
_____no_output_____
###Markdown
Before we get started, we'd like to make sure that we're making all the changes in a fresh namespace named `seldon`
###Code
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image We'll want to try modifying an image and see how the rolling update applies the change. We'll first create the following model:
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
###Output
_____no_output_____
###Markdown
Now we can run that model and wait until the rollout has completed
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
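###Markdown
This availability poll is repeated many times below, so it can be factored into a small helper (a sketch that shells out to the same kubectl call via subprocess instead of the `!` magic):
###Code
import subprocess
import time

def wait_available(name="fixed", timeout=60):
    """Poll the SeldonDeployment state until it reports Available."""
    for _ in range(timeout):
        state = subprocess.run(
            ["kubectl", "get", "sdep", name, "-o", "jsonpath={.status.state}"],
            capture_output=True, text=True,
        ).stdout
        print(state)
        if state == "Available":
            return True
        time.sleep(1)
    return False

assert wait_available()
###Output
_____no_output_____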
###Markdown
Now we can modify the model by providing a new image name, using the following config file:
###Code
%%writefile resources/fixed_v2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2.yaml
###Output
_____no_output_____
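###Markdown
While the rollout proceeds, the image on the deployment's pod template should flip to 0.2 (a sketch; the jsonpath filter assumes the model container is named `classifier`):
###Code
!kubectl get deploy -l seldon-deployment-id=fixed \
    -o jsonpath='{.items[0].spec.template.spec.containers[?(@.name=="classifier")].image}'
###Output
_____no_output_____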
###Markdown
Now let's send a stream of requests to make sure that none of them fail while the rolling update is performed
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator We can test that the rolling update works when we use the annotation that allows us to have the service orchestrator on a separate pod, namely `seldon.io/engine-separate-pod: "true"`, as per the config file below:
###Code
%%writefile resources/fixed_v1_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_sep.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs We can test that the rolling update works when we have multiple podSpecs in our deployment.
###Code
%%writefile resources/fixed_v1_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models We can test that the rolling update works when we have two models in our deployment.
###Code
%%writefile resources/fixed_v1_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but create a new deployment.
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
We can wait until the pod is available.
###Code
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now when we apply the update, we should see the change taking place, but there should not be an actual full rolling update triggered.
###Code
%%writefile resources/fixed_v2_new_name.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5)
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Rolling Update Tests Check that rolling updates function as expected.
###Code
import json
import time
###Output
_____no_output_____
###Markdown
Before we get started, we'd like to make sure that we're making all the changes in a fresh namespace named `seldon`
###Code
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image We'll want to try modifying an image and see how the rolling update applies the change. We'll first create the following model:
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
###Output
_____no_output_____
###Markdown
Now we can run that model and wait until the rollout has completed
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
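###Markdown
The curl call above can equivalently be written with the requests library, which makes the JSON handling less fragile (a sketch; it assumes the `requests` package is installed and the same port-forward to localhost:8003):
###Code
import requests

r = requests.post(
    "http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions",
    json={"data": {"ndarray": [[1.0, 2.0, 5.0]]}},
)
r.raise_for_status()
# Print the fixed value returned by the model container
print(r.json()["data"]["ndarray"][0])
###Output
_____no_output_____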
###Markdown
Now we can modify the model by providing a new image name, using the following config file:
###Code
%%writefile resources/fixed_v2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2.yaml
###Output
_____no_output_____
###Markdown
Now let's send a stream of requests to make sure that none of them fail while the rolling update is performed
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Change Replicas (no rolling update) We'll try modifying the number of replicas, which should not require a rolling update. We'll first create the following model:
###Code
%%writefile resources/fixed_v1_rep2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 2
###Output
_____no_output_____
###Markdown
Now we can run that model and wait until the rollout has completed
###Code
!kubectl apply -f resources/fixed_v1_rep2.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can scale the deployment by increasing the number of replicas to 4, using the following config file:
###Code
%%writefile resources/fixed_v1_rep4.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 4
!kubectl apply -f resources/fixed_v1_rep4.yaml
###Output
_____no_output_____
###Markdown
Now let's send a stream of requests to make sure that none of them fail while the deployment is scaled up
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 4:
break
time.sleep(1)
print("Rollout Success")
###Output
_____no_output_____
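###Markdown
Because only `replicas` changed, the underlying Deployment should not get a new rollout revision. We can confirm this with `kubectl rollout history` (a sketch; a pure scale-up leaves the revision count unchanged, whereas an image change would add a revision):
###Code
!kubectl rollout history deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
    -o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____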
###Markdown
Now scale back down to 2 replicas
###Code
!kubectl apply -f resources/fixed_v1_rep2.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 2:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_rep2.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator We can test that the rolling update works when we use the annotation that allows us to have the service orchestrator on a separate pod, namely `seldon.io/engine-separate-pod: "true"`, as per the config file below. Note that in this case both the service orchestrator and the model pod will be recreated.
###Code
%%writefile resources/fixed_v1_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_sep.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs We can test that the rolling update works when we have multiple podSpecs in our deployment and only performs a rolling update on the first pod (which also contains the service orchestrator)
###Code
%%writefile resources/fixed_v1_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models We can test that the rolling update works when we have two models in our deployment.
###Code
%%writefile resources/fixed_v1_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Two Predictors We can test that the rolling update works when we have two predictors in our deployment.
###Code
%%writefile resources/fixed_v1_2predictors.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: a
replicas: 3
traffic: 50
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: b
replicas: 1
traffic: 50
!kubectl apply -f resources/fixed_v1_2predictors.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2predictors.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: a
replicas: 3
traffic: 50
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: b
replicas: 1
traffic: 50
!kubectl apply -f resources/fixed_v2_2predictors.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2predictors.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but create a new deployment.
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
We can wait until the pod is available.
###Code
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now when we apply the update, we should see the change taking place, but there should not be an actual full rolling update triggered.
###Code
%%writefile resources/fixed_v2_new_name.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5)
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Rolling Update Tests Check that rolling updates function as expected.
###Code
import json
import time
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
    try:
        response = json.loads(responseRaw[0])
    except:
        print("Failed to parse json",responseRaw)
        continue
    assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator
###Code
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_sep.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
    try:
        response = json.loads(responseRaw[0])
    except:
        print("Failed to parse json",responseRaw)
        continue
    assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs
###Code
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
    try:
        response = json.loads(responseRaw[0])
    except:
        print("Failed to parse json",responseRaw)
        continue
    assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models
###Code
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2models.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
    try:
        response = json.loads(responseRaw[0])
    except:
        print("Failed to parse json",responseRaw)
        continue
    assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but create a new deployment.
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
    try:
        response = json.loads(responseRaw[0])
    except:
        print("Failed to parse json",responseRaw)
        continue
    assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator
###Code
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_sep.yaml
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
seldondeployment.machinelearning.seldon.io "fixed" deleted
###Markdown
Two PodSpecs
###Code
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
seldondeployment.machinelearning.seldon.io "fixed" deleted
###Markdown
Rolling Update Tests Check that rolling updates function as expected.
###Code
import json
import time
###Output
_____no_output_____
###Markdown
Before we get started, we'd like to make sure that we're making all the changes in a fresh namespace named `seldon`
###Code
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image We'll want to try modifying an image and see how the rolling update applies the change. We'll first create the following model:
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
###Output
_____no_output_____
###Markdown
Now we can run that model and wait until the rollout has completed
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can modify the model by providing a new image name, using the following config file:
###Code
%%writefile resources/fixed_v2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2.yaml
###Output
_____no_output_____
###Markdown
Now let's send a stream of requests to make sure that none of them fail while the rolling update is performed
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator We can test that the rolling update works when we use the annotation that allows us to have the service orchestrator on a separate pod, namely `seldon.io/engine-separate-pod: "true"`, as per the config file below:
###Code
%%writefile resources/fixed_v1_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_sep.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs We can test that the rolling update works when we have multiple podSpecs in our deployment.
###Code
%%writefile resources/fixed_v1_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models We can test that the rolling update works when we have two models in our deployment.
###Code
%%writefile resources/fixed_v1_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but will instead create a new deployment.
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
We can wait until the pod is available.
###Code
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state=!kubectl get sdep fixed -o jsonpath='{.status.state}'
state=state[0]
print(state)
if state=="Available":
break
time.sleep(1)
assert(state=="Available")
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now when we apply the update we should see the change take place, but no rolling update of the existing pods should be triggered: a new deployment is created and the old one is removed.
###Code
%%writefile resources/fixed_v2_new_name.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5)
for i in range(120):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json",responseRaw)
continue
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Rolling Update Tests Check that rolling updates function as expected.
###Code
import json
import time
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
response = json.loads(responseRaw[0])
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator
###Code
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_sep.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
response = json.loads(responseRaw[0])
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs
###Code
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
response = json.loads(responseRaw[0])
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models
###Code
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_2models.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
response = json.loads(responseRaw[0])
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but will instead create a new deployment.
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5) # To allow operator to start the update
for i in range(60):
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
response = json.loads(responseRaw[0])
assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5)
jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json
data="".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____
###Markdown
Rolling Update Tests Check that rolling updates function as expected.
###Code
import json
import time
###Output
_____no_output_____
###Markdown
Before we get started, let's make sure that all changes happen in a fresh, empty namespace named `seldon`
###Code
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Change Image We'll modify the model's image and watch how the rolling update applies the change. We'll first create the following model:
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
###Output
_____no_output_____
###Markdown
Now we can deploy that model and wait until the rollout completes
###Code
!kubectl apply -f resources/fixed_v1.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can modify the model by providing a new image name, using the following config file:
###Code
%%writefile resources/fixed_v2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2.yaml
###Output
_____no_output_____
###Markdown
Now let's keep sending requests to make sure that none of them fail while the rolling update is performed
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
Change Replicas (no rolling update) We'll modify the number of replicas; scaling only changes `spec.replicas`, so no rolling update is needed. We'll first create the following model:
###Code
%%writefile resources/fixed_v1_rep2.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 2
###Output
_____no_output_____
###Markdown
Now we can deploy that model and wait until the rollout completes
###Code
!kubectl apply -f resources/fixed_v1_rep2.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Let's confirm that the state of the model is Available
###Code
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can scale the deployment up by increasing the number of replicas, using the following config file:
###Code
%%writefile resources/fixed_v1_rep4.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 4
!kubectl apply -f resources/fixed_v1_rep4.yaml
###Output
_____no_output_____
###Markdown
Now let's keep sending requests to make sure that none of them fail while the replicas are scaled up
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 4:
break
time.sleep(1)
print("Rollout Success")
###Output
_____no_output_____
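###Markdown
Because only `spec.replicas` changed, the existing pods keep running and new ones are simply added. A quick check with the selector used above should now show 4 ready replicas:
###Code
!kubectl get deploy -l seldon-deployment-id=fixed
###Output
_____no_output_____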
###Markdown
Now downsize back to 2
###Code
!kubectl apply -f resources/fixed_v1_rep2.yaml
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 2:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_rep2.yaml
###Output
_____no_output_____
###Markdown
Separate Service Orchestrator We can test that the rolling update works when we use the annotation that allows us to have the service orchestrator on a separate pod, namely `seldon.io/engine-separate-pod: "true"`, as per the config file below. Note that in this case both the service orchestrator and the model pod will be recreated.
###Code
%%writefile resources/fixed_v1_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_sep.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_sep.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
annotations:
seldon.io/engine-separate-pod: "true"
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_sep.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_sep.yaml
###Output
_____no_output_____
###Markdown
Two PodSpecs We can test that the rolling update works when we have multiple podSpecs in our deployment, and that it only performs a rolling update of the first pod (which also contains the service orchestrator)
###Code
%%writefile resources/fixed_v1_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v1_2podspecs.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2podspecs.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier1
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier1
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 1
!kubectl apply -f resources/fixed_v2_2podspecs.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v1_2podspecs.yaml
###Output
_____no_output_____
###Markdown
Two Models We can test that the rolling update works when we have two models in our deployment.
###Code
%%writefile resources/fixed_v1_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1_2models.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2models.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2models.yaml
###Output
_____no_output_____
###Markdown
Two Predictors We can test that the rolling update works when we have two predictors in our deployment.
###Code
%%writefile resources/fixed_v1_2predictors.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: a
replicas: 3
traffic: 50
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: b
replicas: 1
traffic: 50
!kubectl apply -f resources/fixed_v1_2predictors.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We can wait until the pod is available before starting the rolling update.
###Code
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now we can trigger a rolling update by changing the version of the Docker image.
###Code
%%writefile resources/fixed_v2_2predictors.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: a
replicas: 3
traffic: 50
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
- image: seldonio/fixed-model:0.1
name: classifier2
graph:
name: classifier
type: MODEL
children:
- name: classifier2
type: MODEL
name: b
replicas: 1
traffic: 50
!kubectl apply -f resources/fixed_v2_2predictors.yaml
###Output
_____no_output_____
###Markdown
And we can send requests to confirm that the rolling update is performed without interruptions
###Code
time.sleep(5) # To allow operator to start the update
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numReplicas = int(resources["items"][0]["status"]["replicas"])
if numReplicas == 3:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_2predictors.yaml
###Output
_____no_output_____
###Markdown
Model name changes This will not do a rolling update but will instead create a new deployment.
###Code
%%writefile resources/fixed_v1.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.1
name: classifier
graph:
name: classifier
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v1.yaml
###Output
_____no_output_____
###Markdown
We can wait until the pod is available.
###Code
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \
-o jsonpath='{.items[0].metadata.name}')
for i in range(60):
state = !kubectl get sdep fixed -o jsonpath='{.status.state}'
state = state[0]
print(state)
if state == "Available":
break
time.sleep(1)
assert state == "Available"
!curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
-X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \
-H "Content-Type: application/json"
###Output
_____no_output_____
###Markdown
Now when we apply the update we should see the change take place, but no rolling update of the existing pods should be triggered: a new deployment is created and the old one is removed.
###Code
%%writefile resources/fixed_v2_new_name.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: fixed
spec:
name: fixed
protocol: seldon
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/fixed-model:0.2
name: classifier2
graph:
name: classifier2
type: MODEL
name: default
replicas: 3
!kubectl apply -f resources/fixed_v2_new_name.yaml
time.sleep(5)
for i in range(120):
responseRaw = !curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json"
try:
response = json.loads(responseRaw[0])
except:
print("Failed to parse json", responseRaw)
continue
assert response["data"]["ndarray"][0] == 1 or response["data"]["ndarray"][0] == 5
jsonRaw = !kubectl get deploy -l seldon-deployment-id=fixed -o json
data = "".join(jsonRaw)
resources = json.loads(data)
numItems = len(resources["items"])
if numItems == 1:
break
time.sleep(1)
print("Rollout Success")
!kubectl delete -f resources/fixed_v2_new_name.yaml
###Output
_____no_output_____ |
Reproducible Data Analysis in Jupyter - Part 2.ipynb | ###Markdown
Jupyter Data Science Workflow - Part 2 **From exploratory analysis to reproducible science** Continued from Reproducible Data Analysis in Jupyter_V3
###Code
%matplotlib inline
from jupyterworkflow.data import get_fremont_data
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Get Data
###Code
data = get_fremont_data()
pivoted=data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Look at the **shape of the pivot**: in this example (3163, 24) means 3163 days, each represented by 24 hourly observations
###Code
X = pivoted.T.values
X.shape
###Output
_____no_output_____
###Markdown
**Clean Up Data**: fill missing values with 0 before transposing
###Code
X = pivoted.fillna(0).T.values
X.shape
###Output
_____no_output_____
###Markdown
Principal component analysis
###Code
X2=PCA(2, svd_solver='full').fit_transform(X)
X2.shape
###Output
_____no_output_____
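###Markdown
PCA projects each day's 24-dimensional hourly profile onto the two directions of greatest variance, so every day can be drawn as a single point in 2-D.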
###Markdown
**Scatterplot** shows two clusters
###Code
plt.scatter(X2[:,0], X2[:,1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering **Gaussian Mixture** to separate the identified clusters
###Code
gmm = GaussianMixture(2).fit(X)
# to distinguish more clusters, just change the 2 to the desired number of clusters
labels = gmm.predict(X)
# the labels show which cluster each data point belongs to (either 0 or 1)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
**Plot the pivoted table**, split by label, to examine what happens in each cluster
###Code
pivoted.T[labels==0].T.plot(legend=False, alpha=0.01)
pivoted.T[labels==1].T.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek=pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing Outliers **Is label 1 equal to weekends?** - Shows the cases of label 1 (expected to be weekends) that actually fell on weekdays (pandas `dayofweek` runs from Monday=0 to Sunday=6, so `dayofweek<5` selects weekdays) - This shows that the Gaussian Mixture did not perform very well here
###Code
dates=pd.DatetimeIndex(pivoted.columns)
dates[(labels==1)&(dayofweek<5)]
###Output
_____no_output_____
###Markdown
**Is label 0 equal to weekdays?** - Shows the cases of label 0 (expected to be weekdays) that actually fell on weekends - Here we see fewer discrepancies
###Code
dates=pd.DatetimeIndex(pivoted.columns)
dates[(labels==0)&(dayofweek>4)]
###Output
_____no_output_____ |
notebooks/pyoperant.ipynb | ###Markdown
Box 3 is very quiet. Boxes 10 and 14 are very loud. Box 15 doesn't work.
###Code
# exploratory hardware checks; `box` / `box_1` are assumed to be panel
# instances from pyoperant's PANELS registry (instantiated as below)
for ii in box.inputs:
    print(ii.read())
for oo in box.outputs:
    print(oo.write(False))
for ii in box_1.inputs:
    print(ii.read())
iface = box.interfaces['comedi']
box = PANELS['Zog6']()
box.reset()
box.test()
box.reward()
###Output
_____no_output_____ |
src/60_Hyperopt_elastic_net.ipynb | ###Markdown
Introduction- Try using ElasticNet- Add permutation importance Import everything I need :)
###Code
import warnings
warnings.filterwarnings('ignore')
import time
import multiprocessing
import glob
import gc
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import KFold, train_test_split, GridSearchCV
from sklearn.metrics import mean_absolute_error
from sklearn import linear_model
from functools import partial
from hyperopt import fmin, hp, tpe, Trials, space_eval, STATUS_OK, STATUS_RUNNING
from fastprogress import progress_bar
###Output
_____no_output_____
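###Markdown
As a preview of the tuning step, here is a minimal sketch of how hyperopt can search an ElasticNet space (illustrative names only; `X` and `y` stand in for the feature matrix and target built later in this notebook):
###Code
# Hedged sketch: a hyperopt objective over ElasticNet hyperparameters.
def en_objective(params, X, y):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=1)
    model = linear_model.ElasticNet(alpha=params['alpha'],
                                    l1_ratio=params['l1_ratio'],
                                    random_state=1).fit(X_tr, y_tr)
    return {'loss': mean_absolute_error(y_va, model.predict(X_va)),
            'status': STATUS_OK}

en_space = {'alpha': hp.loguniform('alpha', -7, 0),
            'l1_ratio': hp.uniform('l1_ratio', 0.0, 1.0)}
# best = fmin(partial(en_objective, X=X, y=y), en_space,
#             algo=tpe.suggest, max_evals=50, trials=Trials())
###Output
_____no_output_____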
###Markdown
Preparation
###Code
nb = 60
isSmallSet = False
length = 50000
model_name = 'elastic_net'
pd.set_option('display.max_columns', 200)
# use atomic numbers to recode atomic names
ATOMIC_NUMBERS = {
'H': 1,
'C': 6,
'N': 7,
'O': 8,
'F': 9
}
file_path = '../input/champs-scalar-coupling/'
glob.glob(file_path + '*')
# train
path = file_path + 'train.csv'
if isSmallSet:
train = pd.read_csv(path) [:length]
else:
train = pd.read_csv(path)
# test
path = file_path + 'test.csv'
if isSmallSet:
test = pd.read_csv(path)[:length]
else:
test = pd.read_csv(path)
# structure
path = file_path + 'structures.csv'
structures = pd.read_csv(path)
# fc_train
path = file_path + 'nb47_fc_train.csv'
if isSmallSet:
fc_train = pd.read_csv(path)[:length]
else:
fc_train = pd.read_csv(path)
# fc_test
path = file_path + 'nb47_fc_test.csv'
if isSmallSet:
fc_test = pd.read_csv(path)[:length]
else:
fc_test = pd.read_csv(path)
# train dist-interact
path = file_path + 'nb33_train_dist-interaction.csv'
if isSmallSet:
dist_interact_train = pd.read_csv(path)[:length]
else:
dist_interact_train = pd.read_csv(path)
# test dist-interact
path = file_path + 'nb33_test_dist-interaction.csv'
if isSmallSet:
dist_interact_test = pd.read_csv(path)[:length]
else:
dist_interact_test = pd.read_csv(path)
# ob charge train
path = file_path + 'train_ob_charges_V7EstimatioofMullikenChargeswithOpenBabel.csv'
if isSmallSet:
ob_charge_train = pd.read_csv(path)[:length].drop(['Unnamed: 0', 'error'], axis=1)
else:
ob_charge_train = pd.read_csv(path).drop(['Unnamed: 0', 'error'], axis=1)
# ob charge test
path = file_path + 'test_ob_charges_V7EstimatioofMullikenChargeswithOpenBabel.csv'
if isSmallSet:
ob_charge_test = pd.read_csv(path)[:length].drop(['Unnamed: 0', 'error'], axis=1)
else:
ob_charge_test = pd.read_csv(path).drop(['Unnamed: 0', 'error'], axis=1)
len(test), len(fc_test)
len(train), len(fc_train)
if isSmallSet:
print('using SmallSet !!')
print('-------------------')
print(f'There are {train.shape[0]} rows in train data.')
print(f'There are {test.shape[0]} rows in test data.')
print(f"There are {train['molecule_name'].nunique()} distinct molecules in train data.")
print(f"There are {test['molecule_name'].nunique()} distinct molecules in test data.")
print(f"There are {train['atom_index_0'].nunique()} unique atoms.")
print(f"There are {train['type'].nunique()} unique types.")
###Output
There are 4658147 rows in train data.
There are 2505542 rows in test data.
There are 85003 distinct molecules in train data.
There are 45772 distinct molecules in test data.
There are 29 unique atoms.
There are 8 unique types.
###Markdown
--- myFunc**metrics**
###Code
def kaggle_metric(df, preds):
df["prediction"] = preds
maes = []
for t in df.type.unique():
y_true = df[df.type==t].scalar_coupling_constant.values
y_pred = df[df.type==t].prediction.values
mae = np.log(mean_absolute_error(y_true, y_pred))
maes.append(mae)
return np.mean(maes)
###Output
_____no_output_____
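###Markdown
This mirrors the competition metric: for each coupling type $t$ the log of the mean absolute error is taken, and the final score is the average $\frac{1}{T}\sum_t \log(\mathrm{MAE}_t)$ over all $T$ types (lower is better).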
###Markdown
---**memory**
###Code
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
c_prec = df[col].apply(lambda x: np.finfo(x).precision).max()
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max and c_prec == np.finfo(np.float16).precision:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
class permutation_importance():
def __init__(self, model, metric):
self.is_computed = False
self.n_feat = 0
self.base_score = 0
self.model = model
self.metric = metric
self.df_result = []
def compute(self, X_valid, y_valid):
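        # permutation importance: shuffle one column at a time and measure
        # how much the validation score changes relative to the base score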
self.n_feat = len(X_valid.columns)
self.base_score = self.metric(y_valid, self.model.predict(X_valid))
self.df_result = pd.DataFrame({'feat': X_valid.columns,
'score': np.zeros(self.n_feat),
'score_diff': np.zeros(self.n_feat)})
# predict
for i, col in enumerate(X_valid.columns):
df_perm = X_valid.copy()
np.random.seed(1)
df_perm[col] = np.random.permutation(df_perm[col])
            y_valid_pred = self.model.predict(df_perm)  # fix: use the stored model, not a global
score = self.metric(y_valid, y_valid_pred)
            self.df_result.loc[self.df_result['feat'] == col, 'score'] = score
            self.df_result.loc[self.df_result['feat'] == col, 'score_diff'] = self.base_score - score
self.is_computed = True
def get_negative_feature(self):
        assert self.is_computed, 'the compute method has not been run yet'
idx = self.df_result['score_diff'] < 0
return self.df_result.loc[idx, 'feat'].values.tolist()
def get_positive_feature(self):
        assert self.is_computed, 'the compute method has not been run yet'
idx = self.df_result['score_diff'] > 0
return self.df_result.loc[idx, 'feat'].values.tolist()
def show_permutation_importance(self, score_type='loss'):
        assert self.is_computed, 'the compute method has not been run yet'
if score_type=='loss':
ascending = True
elif score_type=='accuracy':
ascending = False
else:
ascending = ''
plt.figure(figsize=(15, int(0.25*self.n_feat)))
sns.barplot(x="score_diff", y="feat", data=self.df_result.sort_values(by="score_diff", ascending=ascending))
plt.title('base_score - permutation_score')
###Output
_____no_output_____
###Markdown
Feature Engineering Build Distance Dataset
###Code
def build_type_dataframes(base, structures, coupling_type):
base = base[base['type'] == coupling_type].drop('type', axis=1).copy()
base = base.reset_index()
base['id'] = base['id'].astype('int32')
structures = structures[structures['molecule_name'].isin(base['molecule_name'])]
return base, structures
# a,b = build_type_dataframes(train, structures, '1JHN')
def add_coordinates(base, structures, index):
df = pd.merge(base, structures, how='inner',
left_on=['molecule_name', f'atom_index_{index}'],
right_on=['molecule_name', 'atom_index']).drop(['atom_index'], axis=1)
df = df.rename(columns={
'atom': f'atom_{index}',
'x': f'x_{index}',
'y': f'y_{index}',
'z': f'z_{index}'
})
return df
def add_atoms(base, atoms):
df = pd.merge(base, atoms, how='inner',
on=['molecule_name', 'atom_index_0', 'atom_index_1'])
return df
def merge_all_atoms(base, structures):
df = pd.merge(base, structures, how='left',
left_on=['molecule_name'],
right_on=['molecule_name'])
df = df[(df.atom_index_0 != df.atom_index) & (df.atom_index_1 != df.atom_index)]
return df
def add_center(df):
df['x_c'] = ((df['x_1'] + df['x_0']) * np.float32(0.5))
df['y_c'] = ((df['y_1'] + df['y_0']) * np.float32(0.5))
df['z_c'] = ((df['z_1'] + df['z_0']) * np.float32(0.5))
def add_distance_to_center(df):
df['d_c'] = ((
(df['x_c'] - df['x'])**np.float32(2) +
(df['y_c'] - df['y'])**np.float32(2) +
(df['z_c'] - df['z'])**np.float32(2)
)**np.float32(0.5))
def add_distance_between(df, suffix1, suffix2):
df[f'd_{suffix1}_{suffix2}'] = ((
(df[f'x_{suffix1}'] - df[f'x_{suffix2}'])**np.float32(2) +
(df[f'y_{suffix1}'] - df[f'y_{suffix2}'])**np.float32(2) +
(df[f'z_{suffix1}'] - df[f'z_{suffix2}'])**np.float32(2)
)**np.float32(0.5))
def add_distances(df):
n_atoms = 1 + max([int(c.split('_')[1]) for c in df.columns if c.startswith('x_')])
for i in range(1, n_atoms):
for vi in range(min(4, i)):
add_distance_between(df, i, vi)
def add_n_atoms(base, structures):
dfs = structures['molecule_name'].value_counts().rename('n_atoms').to_frame()
return pd.merge(base, dfs, left_on='molecule_name', right_index=True)
def build_couple_dataframe(some_csv, structures_csv, coupling_type, n_atoms=10):
base, structures = build_type_dataframes(some_csv, structures_csv, coupling_type)
base = add_coordinates(base, structures, 0)
base = add_coordinates(base, structures, 1)
base = base.drop(['atom_0', 'atom_1'], axis=1)
atoms = base.drop('id', axis=1).copy()
if 'scalar_coupling_constant' in some_csv:
atoms = atoms.drop(['scalar_coupling_constant'], axis=1)
add_center(atoms)
atoms = atoms.drop(['x_0', 'y_0', 'z_0', 'x_1', 'y_1', 'z_1'], axis=1)
atoms = merge_all_atoms(atoms, structures)
add_distance_to_center(atoms)
atoms = atoms.drop(['x_c', 'y_c', 'z_c', 'atom_index'], axis=1)
atoms.sort_values(['molecule_name', 'atom_index_0', 'atom_index_1', 'd_c'], inplace=True)
atom_groups = atoms.groupby(['molecule_name', 'atom_index_0', 'atom_index_1'])
atoms['num'] = atom_groups.cumcount() + 2
atoms = atoms.drop(['d_c'], axis=1)
atoms = atoms[atoms['num'] < n_atoms]
atoms = atoms.set_index(['molecule_name', 'atom_index_0', 'atom_index_1', 'num']).unstack()
atoms.columns = [f'{col[0]}_{col[1]}' for col in atoms.columns]
atoms = atoms.reset_index()
# # downcast back to int8
for col in atoms.columns:
if col.startswith('atom_'):
atoms[col] = atoms[col].fillna(0).astype('int8')
# atoms['molecule_name'] = atoms['molecule_name'].astype('int32')
full = add_atoms(base, atoms)
add_distances(full)
full.sort_values('id', inplace=True)
return full
def take_n_atoms(df, n_atoms, four_start=4):
labels = ['id', 'molecule_name', 'atom_index_1', 'atom_index_0']
for i in range(2, n_atoms):
label = f'atom_{i}'
labels.append(label)
for i in range(n_atoms):
num = min(i, 4) if i < four_start else 4
for j in range(num):
labels.append(f'd_{i}_{j}')
if 'scalar_coupling_constant' in df:
labels.append('scalar_coupling_constant')
return df[labels]
atoms = structures['atom'].values
types_train = train['type'].values
types_test = test['type'].values
structures['atom'] = structures['atom'].replace(ATOMIC_NUMBERS).astype('int8')
fulls_train = []
fulls_test = []
for type_ in progress_bar(train['type'].unique()):
full_train = build_couple_dataframe(train, structures, type_, n_atoms=10)
full_test = build_couple_dataframe(test, structures, type_, n_atoms=10)
full_train = take_n_atoms(full_train, 10)
full_test = take_n_atoms(full_test, 10)
fulls_train.append(full_train)
fulls_test.append(full_test)
structures['atom'] = atoms
train = pd.concat(fulls_train).sort_values(by=['id']) #, axis=0)
test = pd.concat(fulls_test).sort_values(by=['id']) #, axis=0)
train['type'] = types_train
test['type'] = types_test
train = train.fillna(0)
test = test.fillna(0)
###Output
_____no_output_____
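###Markdown
For reference, each `d_i_j` column built above is the plain Euclidean distance $d_{ij}=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2}$; neighbouring atoms are ranked by their distance to the midpoint of the coupled pair, and only the nearest ones (`num < 10`) are kept.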
###Markdown
dist-interact
###Code
train['dist_interact'] = dist_interact_train.values
test['dist_interact'] = dist_interact_test.values
###Output
_____no_output_____
###Markdown
basic
###Code
def map_atom_info(df_1,df_2, atom_idx):
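    # join per-atom information onto the pair row for atom 0 or atom 1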
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
# structure and ob_charges
ob_charge = pd.concat([ob_charge_train, ob_charge_test])
merge = pd.merge(ob_charge, structures, how='left',
left_on = ['molecule_name', 'atom_index'],
right_on = ['molecule_name', 'atom_index'])
for atom_idx in [0,1]:
train = map_atom_info(train, merge, atom_idx)
test = map_atom_info(test, merge, atom_idx)
train = train.rename(columns={
'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}',
'eem': f'eem_{atom_idx}',
'mmff94': f'mmff94_{atom_idx}',
'gasteiger': f'gasteiger_{atom_idx}',
'qeq': f'qeq_{atom_idx}',
'qtpie': f'qtpie_{atom_idx}',
'eem2015ha': f'eem2015ha_{atom_idx}',
'eem2015hm': f'eem2015hm_{atom_idx}',
'eem2015hn': f'eem2015hn_{atom_idx}',
'eem2015ba': f'eem2015ba_{atom_idx}',
'eem2015bm': f'eem2015bm_{atom_idx}',
'eem2015bn': f'eem2015bn_{atom_idx}',})
test = test.rename(columns={
'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}',
'eem': f'eem_{atom_idx}',
'mmff94': f'mmff94_{atom_idx}',
'gasteiger': f'gasteiger_{atom_idx}',
'qeq': f'qeq_{atom_idx}',
'qtpie': f'qtpie_{atom_idx}',
'eem2015ha': f'eem2015ha_{atom_idx}',
'eem2015hm': f'eem2015hm_{atom_idx}',
'eem2015hn': f'eem2015hn_{atom_idx}',
'eem2015ba': f'eem2015ba_{atom_idx}',
'eem2015bm': f'eem2015bm_{atom_idx}',
'eem2015bn': f'eem2015bn_{atom_idx}'})
# test = test.rename(columns={'atom': f'atom_{atom_idx}',
# 'x': f'x_{atom_idx}',
# 'y': f'y_{atom_idx}',
# 'z': f'z_{atom_idx}'})
# ob_charges
# train = map_atom_info(train, ob_charge_train, 0)
# test = map_atom_info(test, ob_charge_test, 0)
# train = map_atom_info(train, ob_charge_train, 1)
# test = map_atom_info(test, ob_charge_test, 1)
###Output
_____no_output_____
###Markdown
type0
###Code
def create_type0(df):
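    # first character of the coupling type, e.g. '1JHN' -> '1' (the bond count)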
df['type_0'] = df['type'].apply(lambda x : x[0])
return df
# train['type_0'] = train['type'].apply(lambda x: x[0])
# test['type_0'] = test['type'].apply(lambda x: x[0])
###Output
_____no_output_____
###Markdown
distances
###Code
def distances(df):
df_p_0 = df[['x_0', 'y_0', 'z_0']].values
df_p_1 = df[['x_1', 'y_1', 'z_1']].values
df['dist'] = np.linalg.norm(df_p_0 - df_p_1, axis=1)
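    # note: dist_x / dist_y / dist_z below are squared per-axis differences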
df['dist_x'] = (df['x_0'] - df['x_1']) ** 2
df['dist_y'] = (df['y_0'] - df['y_1']) ** 2
df['dist_z'] = (df['z_0'] - df['z_1']) ** 2
return df
# train = distances(train)
# test = distances(test)
###Output
_____no_output_____
###Markdown
Aggregate statistics
###Code
def create_features(df):
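    # group-wise aggregates (count/mean/min/max/std) of distance and fc per
    # molecule, atom index, and type, plus diffs and ratios vs. the row value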
df['molecule_couples'] = df.groupby('molecule_name')['id'].transform('count')
df['molecule_dist_mean'] = df.groupby('molecule_name')['dist'].transform('mean')
df['molecule_dist_min'] = df.groupby('molecule_name')['dist'].transform('min')
df['molecule_dist_max'] = df.groupby('molecule_name')['dist'].transform('max')
df['atom_0_couples_count'] = df.groupby(['molecule_name', 'atom_index_0'])['id'].transform('count')
df['atom_1_couples_count'] = df.groupby(['molecule_name', 'atom_index_1'])['id'].transform('count')
df[f'molecule_atom_index_0_x_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['x_1'].transform('std')
df[f'molecule_atom_index_0_y_1_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('mean')
df[f'molecule_atom_index_0_y_1_mean_diff'] = df[f'molecule_atom_index_0_y_1_mean'] - df['y_1']
df[f'molecule_atom_index_0_y_1_mean_div'] = df[f'molecule_atom_index_0_y_1_mean'] / df['y_1']
df[f'molecule_atom_index_0_y_1_max'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('max')
df[f'molecule_atom_index_0_y_1_max_diff'] = df[f'molecule_atom_index_0_y_1_max'] - df['y_1']
df[f'molecule_atom_index_0_y_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('std')
df[f'molecule_atom_index_0_z_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['z_1'].transform('std')
df[f'molecule_atom_index_0_dist_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('mean')
df[f'molecule_atom_index_0_dist_mean_diff'] = df[f'molecule_atom_index_0_dist_mean'] - df['dist']
df[f'molecule_atom_index_0_dist_mean_div'] = df[f'molecule_atom_index_0_dist_mean'] / df['dist']
df[f'molecule_atom_index_0_dist_max'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('max')
df[f'molecule_atom_index_0_dist_max_diff'] = df[f'molecule_atom_index_0_dist_max'] - df['dist']
df[f'molecule_atom_index_0_dist_max_div'] = df[f'molecule_atom_index_0_dist_max'] / df['dist']
df[f'molecule_atom_index_0_dist_min'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df[f'molecule_atom_index_0_dist_min_diff'] = df[f'molecule_atom_index_0_dist_min'] - df['dist']
df[f'molecule_atom_index_0_dist_min_div'] = df[f'molecule_atom_index_0_dist_min'] / df['dist']
df[f'molecule_atom_index_0_dist_std'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('std')
df[f'molecule_atom_index_0_dist_std_diff'] = df[f'molecule_atom_index_0_dist_std'] - df['dist']
df[f'molecule_atom_index_0_dist_std_div'] = df[f'molecule_atom_index_0_dist_std'] / df['dist']
df[f'molecule_atom_index_1_dist_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('mean')
df[f'molecule_atom_index_1_dist_mean_diff'] = df[f'molecule_atom_index_1_dist_mean'] - df['dist']
df[f'molecule_atom_index_1_dist_mean_div'] = df[f'molecule_atom_index_1_dist_mean'] / df['dist']
df[f'molecule_atom_index_1_dist_max'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('max')
df[f'molecule_atom_index_1_dist_max_diff'] = df[f'molecule_atom_index_1_dist_max'] - df['dist']
df[f'molecule_atom_index_1_dist_max_div'] = df[f'molecule_atom_index_1_dist_max'] / df['dist']
df[f'molecule_atom_index_1_dist_min'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('min')
df[f'molecule_atom_index_1_dist_min_diff'] = df[f'molecule_atom_index_1_dist_min'] - df['dist']
df[f'molecule_atom_index_1_dist_min_div'] = df[f'molecule_atom_index_1_dist_min'] / df['dist']
df[f'molecule_atom_index_1_dist_std'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('std')
df[f'molecule_atom_index_1_dist_std_diff'] = df[f'molecule_atom_index_1_dist_std'] - df['dist']
df[f'molecule_atom_index_1_dist_std_div'] = df[f'molecule_atom_index_1_dist_std'] / df['dist']
df[f'molecule_atom_1_dist_mean'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('mean')
df[f'molecule_atom_1_dist_min'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('min')
df[f'molecule_atom_1_dist_min_diff'] = df[f'molecule_atom_1_dist_min'] - df['dist']
df[f'molecule_atom_1_dist_min_div'] = df[f'molecule_atom_1_dist_min'] / df['dist']
df[f'molecule_atom_1_dist_std'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('std')
df[f'molecule_atom_1_dist_std_diff'] = df[f'molecule_atom_1_dist_std'] - df['dist']
df[f'molecule_type_0_dist_std'] = df.groupby(['molecule_name', 'type_0'])['dist'].transform('std')
df[f'molecule_type_0_dist_std_diff'] = df[f'molecule_type_0_dist_std'] - df['dist']
df[f'molecule_type_dist_mean'] = df.groupby(['molecule_name', 'type'])['dist'].transform('mean')
df[f'molecule_type_dist_mean_diff'] = df[f'molecule_type_dist_mean'] - df['dist']
df[f'molecule_type_dist_mean_div'] = df[f'molecule_type_dist_mean'] / df['dist']
df[f'molecule_type_dist_max'] = df.groupby(['molecule_name', 'type'])['dist'].transform('max')
df[f'molecule_type_dist_min'] = df.groupby(['molecule_name', 'type'])['dist'].transform('min')
df[f'molecule_type_dist_std'] = df.groupby(['molecule_name', 'type'])['dist'].transform('std')
df[f'molecule_type_dist_std_diff'] = df[f'molecule_type_dist_std'] - df['dist']
# fc
df[f'molecule_type_fc_max'] = df.groupby(['molecule_name', 'type'])['fc'].transform('max')
df[f'molecule_type_fc_min'] = df.groupby(['molecule_name', 'type'])['fc'].transform('min')
df[f'molecule_type_fc_std'] = df.groupby(['molecule_name', 'type'])['fc'].transform('std')
df[f'molecule_type_fc_std_diff'] = df[f'molecule_type_fc_std'] - df['fc']
df[f'molecule_atom_index_0_fc_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('mean')
df[f'molecule_atom_index_0_fc_mean_diff'] = df[f'molecule_atom_index_0_fc_mean'] - df['fc']
df[f'molecule_atom_index_0_fc_mean_div'] = df[f'molecule_atom_index_0_fc_mean'] / df['dist']
df[f'molecule_atom_index_0_fc_max'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('max')
df[f'molecule_atom_index_0_fc_max_diff'] = df[f'molecule_atom_index_0_fc_max'] - df['fc']
df[f'molecule_atom_index_0_fc_max_div'] = df[f'molecule_atom_index_0_fc_max'] / df['fc']
df[f'molecule_atom_index_0_fc_min'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('min')
df[f'molecule_atom_index_0_fc_min_diff'] = df[f'molecule_atom_index_0_fc_min'] - df['fc']
df[f'molecule_atom_index_0_fc_min_div'] = df[f'molecule_atom_index_0_fc_min'] / df['fc']
df[f'molecule_atom_index_0_fc_std'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('std')
df[f'molecule_atom_index_0_fc_std_diff'] = df[f'molecule_atom_index_0_fc_std'] - df['fc']
df[f'molecule_atom_index_0_fc_std_div'] = df[f'molecule_atom_index_0_fc_std'] / df['fc']
df[f'molecule_atom_index_1_fc_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('mean')
df[f'molecule_atom_index_1_fc_mean_diff'] = df[f'molecule_atom_index_1_fc_mean'] - df['fc']
df[f'molecule_atom_index_1_fc_mean_div'] = df[f'molecule_atom_index_1_fc_mean'] / df['fc']
df[f'molecule_atom_index_1_fc_max'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('max')
df[f'molecule_atom_index_1_fc_max_diff'] = df[f'molecule_atom_index_1_fc_max'] - df['fc']
df[f'molecule_atom_index_1_fc_max_div'] = df[f'molecule_atom_index_1_fc_max'] / df['fc']
df[f'molecule_atom_index_1_fc_min'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('min')
df[f'molecule_atom_index_1_fc_min_diff'] = df[f'molecule_atom_index_1_fc_min'] - df['fc']
df[f'molecule_atom_index_1_fc_min_div'] = df[f'molecule_atom_index_1_fc_min'] / df['fc']
df[f'molecule_atom_index_1_fc_std'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('std')
df[f'molecule_atom_index_1_fc_std_diff'] = df[f'molecule_atom_index_1_fc_std'] - df['fc']
df[f'molecule_atom_index_1_fc_std_div'] = df[f'molecule_atom_index_1_fc_std'] / df['fc']
return df
###Output
_____no_output_____
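###Markdown
The whole block above leans on a single pattern: `groupby(...).transform(...)` broadcasts a per-group statistic back onto every row, so it can be compared row by row against `dist` or `fc`. A minimal sketch with toy data:
###Code
demo = pd.DataFrame({'molecule_name': ['m1', 'm1', 'm2'],
                     'dist': [1.0, 3.0, 2.0]})
demo['molecule_dist_mean'] = demo.groupby('molecule_name')['dist'].transform('mean')
demo['molecule_dist_mean_diff'] = demo['molecule_dist_mean'] - demo['dist']
# both m1 rows receive the group mean 2.0, giving diffs of +1.0 and -1.0
demo
###Output
_____no_output_____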
###Markdown
angle features
###Code
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
def create_closest(df):
df_temp=df.loc[:,["molecule_name","atom_index_0","atom_index_1","dist","x_0","y_0","z_0","x_1","y_1","z_1"]].copy()
df_temp_=df_temp.copy()
df_temp_= df_temp_.rename(columns={'atom_index_0': 'atom_index_1',
'atom_index_1': 'atom_index_0',
'x_0': 'x_1',
'y_0': 'y_1',
'z_0': 'z_1',
'x_1': 'x_0',
'y_1': 'y_0',
'z_1': 'z_0'})
df_temp=pd.concat(objs=[df_temp,df_temp_],axis=0)
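    # stacking the frame with its swapped copy makes every pair appear in both
    # directions, so the groupby below can find each atom's nearest neighbour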
df_temp["min_distance"]=df_temp.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df_temp= df_temp[df_temp["min_distance"]==df_temp["dist"]]
df_temp=df_temp.drop(['x_0','y_0','z_0','min_distance', 'dist'], axis=1)
df_temp= df_temp.rename(columns={'atom_index_0': 'atom_index',
'atom_index_1': 'atom_index_closest',
                                     'distance': 'distance_closest',  # note: 'dist' was dropped above, so this rename is a no-op in this version
'x_1': 'x_closest',
'y_1': 'y_closest',
'z_1': 'z_closest'})
for atom_idx in [0,1]:
df = map_atom_info(df,df_temp, atom_idx)
df = df.rename(columns={'atom_index_closest': f'atom_index_closest_{atom_idx}',
'distance_closest': f'distance_closest_{atom_idx}',
'x_closest': f'x_closest_{atom_idx}',
'y_closest': f'y_closest_{atom_idx}',
'z_closest': f'z_closest_{atom_idx}'})
return df
def add_cos_features(df):
df["distance_0"]=((df['x_0']-df['x_closest_0'])**2+(df['y_0']-df['y_closest_0'])**2+(df['z_0']-df['z_closest_0'])**2)**(1/2)
df["distance_1"]=((df['x_1']-df['x_closest_1'])**2+(df['y_1']-df['y_closest_1'])**2+(df['z_1']-df['z_closest_1'])**2)**(1/2)
df["vec_0_x"]=(df['x_0']-df['x_closest_0'])/df["distance_0"]
df["vec_0_y"]=(df['y_0']-df['y_closest_0'])/df["distance_0"]
df["vec_0_z"]=(df['z_0']-df['z_closest_0'])/df["distance_0"]
df["vec_1_x"]=(df['x_1']-df['x_closest_1'])/df["distance_1"]
df["vec_1_y"]=(df['y_1']-df['y_closest_1'])/df["distance_1"]
df["vec_1_z"]=(df['z_1']-df['z_closest_1'])/df["distance_1"]
df["vec_x"]=(df['x_1']-df['x_0'])/df["dist"]
df["vec_y"]=(df['y_1']-df['y_0'])/df["dist"]
df["vec_z"]=(df['z_1']-df['z_0'])/df["dist"]
df["cos_0_1"]=df["vec_0_x"]*df["vec_1_x"]+df["vec_0_y"]*df["vec_1_y"]+df["vec_0_z"]*df["vec_1_z"]
df["cos_0"]=df["vec_0_x"]*df["vec_x"]+df["vec_0_y"]*df["vec_y"]+df["vec_0_z"]*df["vec_z"]
df["cos_1"]=df["vec_1_x"]*df["vec_x"]+df["vec_1_y"]*df["vec_y"]+df["vec_1_z"]*df["vec_z"]
df=df.drop(['vec_0_x','vec_0_y','vec_0_z','vec_1_x','vec_1_y','vec_1_z','vec_x','vec_y','vec_z'], axis=1)
return df
%%time
print('add fc')
print(len(train), len(test))
train['fc'] = fc_train.values
test['fc'] = fc_test.values
print('type0')
print(len(train), len(test))
train = create_type0(train)
test = create_type0(test)
print('distances')
print(len(train), len(test))
train = distances(train)
test = distances(test)
print('create_features')
print(len(train), len(test))
train = create_features(train)
test = create_features(test)
print('create_closest')
print(len(train), len(test))
train = create_closest(train)
test = create_closest(test)
train.drop_duplicates(inplace=True, subset=['id']) # for some reason create_closest increases the number of train rows, so drop duplicate ids
train = train.reset_index(drop=True)
print('add_cos_features')
print(len(train), len(test))
train = add_cos_features(train)
test = add_cos_features(test)
###Output
add fc
4658147 2505542
type0
4658147 2505542
distances
4658147 2505542
create_features
4658147 2505542
create_closest
4658147 2505542
add_cos_features
4658147 2505542
CPU times: user 2min 58s, sys: 4min 37s, total: 7min 36s
Wall time: 7min 36s
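###Markdown
`cos_0_1`, `cos_0` and `cos_1` above are dot products of unit vectors, i.e. cosines of the angles between bonds. A tiny sanity check on perpendicular vectors:
###Code
v_a = np.array([1.0, 0.0, 0.0])
v_b = np.array([0.0, 1.0, 0.0])
float(np.dot(v_a, v_b))  # 0.0, the cosine of a 90 degree angle
###Output
_____no_output_____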
###Markdown
---Drop features that contain NaN
###Code
drop_feats = train.columns[train.isnull().sum(axis=0) != 0].values
drop_feats
train = train.drop(drop_feats, axis=1)
test = test.drop(drop_feats, axis=1)
assert sum(train.isnull().sum(axis=0)) == 0, 'train contains NaN values'
assert sum(test.isnull().sum(axis=0)) == 0, 'test contains NaN values'
###Output
_____no_output_____
###Markdown
Encoding
###Code
cat_cols = ['atom_1']
num_cols = list(set(train.columns) - set(cat_cols) - set(['type', "scalar_coupling_constant", 'molecule_name', 'id',
'atom_0', 'atom_1','atom_2', 'atom_3', 'atom_4', 'atom_5', 'atom_6', 'atom_7', 'atom_8', 'atom_9']))
print(f'categorical: {cat_cols}')
print(f'numeric: {num_cols}')
###Output
categorical: ['atom_1']
numeric: ['y_0', 'd_4_3', 'x_closest_1', 'd_2_0', 'molecule_atom_index_1_dist_max_div', 'molecule_atom_index_1_fc_min_div', 'molecule_atom_index_0_fc_min_diff', 'eem2015ba_0', 'molecule_atom_1_dist_min_div', 'molecule_atom_index_0_fc_min', 'd_5_2', 'molecule_atom_index_1_fc_mean_div', 'mmff94_0', 'd_5_3', 'molecule_atom_index_0_fc_max_div', 'molecule_type_dist_min', 'type_0', 'molecule_dist_min', 'qeq_0', 'eem2015hn_0', 'gasteiger_1', 'd_6_1', 'eem2015hn_1', 'd_9_1', 'molecule_type_dist_mean', 'atom_index_closest_1', 'molecule_atom_index_0_dist_mean', 'molecule_atom_index_0_fc_mean_div', 'd_7_1', 'molecule_type_dist_mean_div', 'd_4_0', 'd_8_3', 'd_3_1', 'molecule_atom_index_1_fc_max_div', 'eem_1', 'dist_y', 'molecule_atom_index_1_fc_min', 'molecule_atom_index_0_fc_max_diff', 'molecule_atom_index_1_fc_mean', 'molecule_atom_index_0_fc_min_div', 'molecule_atom_index_0_y_1_mean_diff', 'd_8_0', 'd_9_0', 'eem2015ha_1', 'atom_1_couples_count', 'molecule_atom_index_1_dist_min', 'molecule_dist_max', 'molecule_atom_index_0_dist_mean_diff', 'd_9_2', 'y_1', 'd_7_0', 'distance_0', 'atom_index_0', 'd_6_3', 'x_1', 'z_closest_0', 'z_1', 'molecule_atom_index_0_dist_max', 'eem2015bn_0', 'eem2015bn_1', 'd_7_2', 'dist_x', 'molecule_atom_index_1_dist_max', 'd_8_1', 'eem2015ba_1', 'y_closest_1', 'molecule_dist_mean', 'd_5_1', 'gasteiger_0', 'cos_0', 'qeq_1', 'd_3_2', 'cos_1', 'd_6_2', 'molecule_atom_index_0_dist_min_div', 'molecule_atom_1_dist_min_diff', 'eem2015hm_0', 'molecule_atom_index_0_dist_mean_div', 'molecule_atom_index_1_dist_min_div', 'molecule_atom_index_0_dist_max_diff', 'molecule_atom_index_1_fc_max', 'd_3_0', 'eem2015ha_0', 'dist', 'y_closest_0', 'eem_0', 'molecule_atom_index_1_dist_max_diff', 'atom_index_1', 'molecule_atom_index_0_fc_mean', 'molecule_atom_index_1_fc_mean_diff', 'molecule_atom_index_0_y_1_mean', 'd_2_1', 'molecule_atom_index_1_fc_min_diff', 'd_9_3', 'eem2015hm_1', 'x_0', 'eem2015bm_0', 'd_4_1', 'molecule_atom_index_0_fc_max', 'molecule_atom_index_1_fc_max_diff', 'd_1_0', 'molecule_atom_index_0_fc_mean_diff', 'qtpie_0', 'x_closest_0', 'molecule_atom_index_1_dist_mean_div', 'molecule_type_dist_mean_diff', 'z_closest_1', 'qtpie_1', 'molecule_type_dist_max', 'd_5_0', 'molecule_atom_index_1_dist_mean', 'molecule_atom_index_0_y_1_max', 'z_0', 'molecule_atom_1_dist_mean', 'd_4_2', 'cos_0_1', 'molecule_couples', 'molecule_atom_1_dist_min', 'd_8_2', 'eem2015bm_1', 'molecule_atom_index_0_dist_max_div', 'molecule_type_fc_min', 'fc', 'molecule_atom_index_0_y_1_max_diff', 'mmff94_1', 'molecule_type_fc_max', 'dist_z', 'molecule_atom_index_0_dist_min_diff', 'molecule_atom_index_1_dist_mean_diff', 'atom_index_closest_0', 'molecule_atom_index_1_dist_min_diff', 'molecule_atom_index_0_dist_min', 'd_6_0', 'd_7_3', 'distance_1', 'atom_0_couples_count']
###Markdown
LabelEncode
- `atom_1` = {H, C, N}
- `type_0` = {1, 2, 3}
- `type` = {2JHC, ...}
###Code
for f in ['type_0', 'type']:
if f in train.columns:
lbl = LabelEncoder()
lbl.fit(list(train[f].values) + list(test[f].values))
train[f] = lbl.transform(list(train[f].values))
test[f] = lbl.transform(list(test[f].values))
###Output
_____no_output_____
###Markdown
one hot encoding
###Code
train = pd.get_dummies(train, columns=cat_cols)
test = pd.get_dummies(test, columns=cat_cols)
###Output
_____no_output_____
###Markdown
Standardization
###Code
scaler = StandardScaler()
train[num_cols] = scaler.fit_transform(train[num_cols])
test[num_cols] = scaler.transform(test[num_cols])
###Output
_____no_output_____
###Markdown
---**show features**
###Code
train.head(2)
print(train.columns)
###Output
Index(['id', 'molecule_name', 'atom_index_1', 'atom_index_0', 'atom_2',
'atom_3', 'atom_4', 'atom_5', 'atom_6', 'atom_7',
...
'y_closest_1', 'z_closest_1', 'distance_0', 'distance_1', 'cos_0_1',
'cos_0', 'cos_1', 'atom_1_C', 'atom_1_H', 'atom_1_N'],
dtype='object', length=152)
###Markdown
create train, test data
###Code
y = train['scalar_coupling_constant']
train = train.drop(['id', 'molecule_name', 'atom_0', 'scalar_coupling_constant'], axis=1)
test = test.drop(['id', 'molecule_name', 'atom_0'], axis=1)
# train = reduce_mem_usage(train)
# test = reduce_mem_usage(test)
X = train.copy()
X_test = test.copy()
assert len(X.columns) == len(X_test.columns), f'X and X_test have different sizes X: {len(X.columns)}, X_test: {len(X_test.columns)}'
del train, test, full_train, full_test
gc.collect()
###Output
_____no_output_____
###Markdown
Hyperopt
###Code
X_train, X_valid, y_train, y_valid = train_test_split(X,
y,
test_size = 0.30,
random_state = 0)
# Define searched space
hyper_space = {'alpha': hp.choice('alpha', [0.01, 0.05, 0.1, 0.5, 1, 2]),
'l1_ratio': hp.choice('l1_ratio', [0, 0.1, 0.3, 0.5, 0.7, 0.9, 1])}
# Setting the number of evals
MAX_EVALS= 30
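# How the search below works: hyperopt's fmin() samples candidate parameter
# sets from hyper_space, scores each with our objective function, and uses the
# TPE algorithm to steer later samples toward lower 'loss'; space_eval() then
# maps the winning raw choices back to the actual parameter values.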
%%time
# training per type
best_params_list = []
for t in sorted(X_train['type'].unique()):
print('*'*80)
print(f'- Training of type {t}')
print('*'*80)
X_t_train = X_train.loc[X_train['type'] == t]
X_t_valid = X_valid.loc[X_valid['type'] == t]
y_t_train = y_train[X_train['type'] == t]
y_t_valid = y_valid[X_valid['type'] == t]
# evaluate_metric
def evaluate_metric(params):
model = linear_model.ElasticNet(**params, random_state=42, max_iter=3000) # <=======================
model.fit(X_t_train, y_t_train)
pred = model.predict(X_t_valid)
y_t_train_pred = model.predict(X_t_train)
_X_t_valid = X_t_valid.copy()
_X_t_valid['scalar_coupling_constant'] = y_t_valid
cv_score = kaggle_metric(_X_t_valid, pred)
_X_t_valid = _X_t_valid.drop(['scalar_coupling_constant'], axis=1)
# print(f'mae(valid): {mean_absolute_error(y_t_valid, pred)}')
print(params)
print(f'training l1: {mean_absolute_error(y_t_train, y_t_train_pred) :.5f} \t valid l1: {mean_absolute_error(y_t_valid, pred) :.5f} ')
print(f'cv_score: {cv_score}')
print('-'*80)
print('\n')
return {
'loss': cv_score,
'status': STATUS_OK,
'stats_running': STATUS_RUNNING
}
# hyperopt
    # Trials
trials = Trials()
    # Set algorithm parameters
algo = partial(tpe.suggest,
n_startup_jobs=-1)
    # Setting the number of evals
MAX_EVALS= 20
# Fit Tree Parzen Estimator
best_vals = fmin(evaluate_metric, space=hyper_space, verbose=1,
algo=algo, max_evals=MAX_EVALS, trials=trials)
# Print best parameters
best_params = space_eval(hyper_space, best_vals)
best_params_list.append(best_params)
print("BEST PARAMETERS: " + str(best_params))
print('')
best_params_list
###Output
_____no_output_____
misc/kijang-emas-bank-negara.ipynb | ###Markdown
Welcome to Kijang Emas analysis!Around last week (18th March 2019) I found out that our Bank Negara opened public APIs for certain data. It was really cool and I want to help people get around with the data and see what they can actually do with it!We are going to cover 2 things here:
1. Data Analytics
2. Predictive Modelling (Linear regression, ARIMA, LSTM)
Hell, I know nothing about Kijang Emas.**Again, do not use this code to buy something in the real world (if you get a positive return, please donate some to me)**
###Code
import requests
###Output
_____no_output_____
###Markdown
Data gatheringGetting the data is really simple; use this link to get kijang emas data, https://api.bnm.gov.my/public/kijang-emas/year/{year}/month/{month}Now, I want to get data from January 2018 - March 2019. 2018 data
###Code
data_2018 = []
for i in range(12):
data_2018.append(requests.get(
'https://api.bnm.gov.my/public/kijang-emas/year/2018/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json())
###Output
_____no_output_____
###Markdown
2019 data
###Code
data_2019 = []
for i in range(3):
data_2019.append(requests.get(
'https://api.bnm.gov.my/public/kijang-emas/year/2019/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json())
###Output
_____no_output_____
###Markdown
Take a peek at our data ya
###Code
data_2018[0]['data'][:5]
###Output
_____no_output_____
###Markdown
Again, I got zero knowledge on kijang emas and I don't really care about the value, and I don't know what the value represents.Now I want to parse `effective_date` and `selling` from `one_oz`.
###Code
timestamp, selling = [], []
for month in data_2018 + data_2019:
for day in month['data']:
timestamp.append(day['effective_date'])
selling.append(day['one_oz']['selling'])
len(timestamp), len(selling)
###Output
_____no_output_____
###Markdown
Going to import matplotlib and seaborn for visualization. I really like seaborn because of the fonts and colors, that's all, hah!
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set()
plt.figure(figsize = (15, 5))
plt.plot(selling)
plt.xticks(np.arange(len(timestamp))[::15], timestamp[::15], rotation = '45')
plt.show()
###Output
_____no_output_____
###Markdown
Perfect!So now let's start our data analytics. Distribution study
###Code
plt.figure(figsize = (15, 5))
sns.distplot(selling)
plt.show()
###Output
/Users/felixweizmann/Documents/GitHub/Stock-Prediction-Models/venv/lib/python3.7/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
/Users/felixweizmann/Documents/GitHub/Stock-Prediction-Models/venv/lib/python3.7/site-packages/seaborn/distributions.py:2589: RuntimeWarning: Mean of empty slice.
line, = ax.plot(a.mean(), 0)
/Users/felixweizmann/Documents/GitHub/Stock-Prediction-Models/venv/lib/python3.7/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
/Users/felixweizmann/Documents/GitHub/Stock-Prediction-Models/venv/lib/python3.7/site-packages/numpy/lib/histograms.py:908: RuntimeWarning: invalid value encountered in true_divide
return n/db/n.sum(), bin_edges
###Markdown
Look at this, already a normal distribution, coincidence? (I really wanted to show off unit scaling skills, too bad :/ )Now let's convert our data into Pandas, for lagging analysis.
###Code
import pandas as pd
df = pd.DataFrame({'timestamp':timestamp, 'selling':selling})
df.head()
def df_shift(df, lag = 0, start = 1, skip = 1, rejected_columns = []):
df = df.copy()
if not lag:
return df
cols = {}
for i in range(start, lag + 1, skip):
for x in list(df.columns):
if x not in rejected_columns:
if not x in cols:
cols[x] = ['{}_{}'.format(x, i)]
else:
cols[x].append('{}_{}'.format(x, i))
for k, v in cols.items():
columns = v
dfn = pd.DataFrame(data = None, columns = columns, index = df.index)
i = start - 1
for c in columns:
dfn[c] = df[k].shift(periods = i)
i += skip
        df = pd.concat([df, dfn], axis = 1)  # join_axes was removed in newer pandas; the indices already align
return df
###Output
_____no_output_____
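###Markdown
A toy run of `df_shift` to show what the lagged columns look like (made-up values, not the real series):
###Code
toy = pd.DataFrame({'v': [1, 2, 3, 4, 5, 6]})
df_shift(toy, lag = 4, start = 2, skip = 2)
# adds v_2 (v shifted by 1 period) and v_4 (v shifted by 3 periods)
###Output
_____no_output_____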
###Markdown
**Shifted series and moving averages are not the same.**
###Code
df_crosscorrelated = df_shift(
df, lag = 12, start = 4, skip = 2, rejected_columns = ['timestamp']
)
df_crosscorrelated['ma7'] = df_crosscorrelated['selling'].rolling(7).mean()
df_crosscorrelated['ma14'] = df_crosscorrelated['selling'].rolling(14).mean()
df_crosscorrelated['ma21'] = df_crosscorrelated['selling'].rolling(21).mean()
###Output
_____no_output_____
###Markdown
Why do we lag or shift by certain units?Virality takes some time, impacts take some time, and the same goes for price per lot / unit.Now I want to `lag` up to 12 units, `start` at 4 units shifted, and `skip` every 2 units.
###Code
df_crosscorrelated.head(10)
plt.figure(figsize = (20, 4))
plt.subplot(1, 3, 1)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_4'])
mse = (
(df_crosscorrelated['selling_4'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 4, average change: %f'%(mse))
plt.subplot(1, 3, 2)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_8'])
mse = (
(df_crosscorrelated['selling_8'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 8, average change: %f'%(mse))
plt.subplot(1, 3, 3)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_12'])
mse = (
(df_crosscorrelated['selling_12'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('close vs shifted 12, average change: %f'%(mse))
plt.show()
###Output
_____no_output_____
###Markdown
Keep increasing and increasing!
###Code
plt.figure(figsize = (10, 5))
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_4'],
label = 'close vs shifted 4',
)
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_8'],
label = 'close vs shifted 8',
)
plt.scatter(
df_crosscorrelated['selling'],
df_crosscorrelated['selling_12'],
label = 'close vs shifted 12',
)
plt.legend()
plt.show()
fig, ax = plt.subplots(figsize = (15, 5))
df_crosscorrelated.plot(
x = 'timestamp', y = ['selling', 'ma7', 'ma14', 'ma21'], ax = ax
)
plt.xticks(np.arange(len(timestamp))[::10], timestamp[::10], rotation = '45')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, even the 7-day moving average already fails to follow sudden trends (blue line), which means **the dilation rate required is less than 7 days! so fast!** How about correlation?We want to study the linear relationship: how many days are required to have an impact on future sold units?
###Code
colormap = plt.cm.RdBu
plt.figure(figsize = (15, 5))
plt.title('cross correlation', y = 1.05, size = 16)
sns.heatmap(
df_crosscorrelated.iloc[:, 1:].corr(),
linewidths = 0.1,
vmax = 1.0,
cmap = colormap,
linecolor = 'white',
annot = True,
)
plt.show()
###Output
_____no_output_____
###Markdown
Based on this correlation map, look at selling vs selling_X,**the correlation from selling_4 to selling_12 is getting lower, meaning that if today's mean is 50, the value 4 days ahead tends toward roughly 0.95 * 50, and so on.** OutliersSimple, we can use the Z-score to detect outliers, i.e. which timestamps gave unusually high and low values.
###Code
std_selling = (selling - np.mean(selling)) / np.std(selling)
def detect(signal, threshold = 2.0):
    detected = []
    for i in range(len(signal)):
        if np.abs(signal[i]) > threshold:
            detected.append(i)
    return detected
###Output
_____no_output_____
###Markdown
Based on the z-score table, 2.0 is already positioned at the 97.72nd percentile of the population.https://d2jmvrsizmvf4x.cloudfront.net/6iEAaVSaT3aGP52HMzo3_z-score-02.png
###Code
outliers = detect(std_selling)
plt.figure(figsize = (15, 7))
plt.plot(selling)
plt.plot(
np.arange(len(selling)),
selling,
'X',
label = 'outliers',
markevery = outliers,
c = 'r',
)
plt.legend()
plt.show()
###Output
_____no_output_____
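###Markdown
Side note on the 2.0 threshold: the percentile quoted above can be checked directly from the standard normal CDF (a one-liner sketch, assuming scipy is installed):
###Code
from scipy.stats import norm

norm.cdf(2.0)  # ~0.97725, so a z-score of 2.0 sits at roughly the 97.7th percentile
###Output
_____no_output_____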
###Markdown
We can see that **we have positive and negative outliers**. What happened to our local market on those days? We should study sentiment from local news to do risk analysis. Give us predictive modelling!Okay okay. Predictive modellingLike I said, I want to compare 3 models:
1. Linear regression
2. ARIMA
3. LSTM Tensorflow (sorry Pytorch, not used to it)
Which model gives the best accuracy and lowest error rate?**I want to use the first 80% of timestamps for train, and the remaining 20% of timestamps for test.**
###Code
from sklearn.linear_model import LinearRegression
train_selling = selling[: int(0.8 * len(selling))]
test_selling = selling[int(0.8 * len(selling)) :]
###Output
_____no_output_____
###Markdown
Beware of `:`!
###Code
future_count = len(test_selling)
future_count
###Output
_____no_output_____
###Markdown
Our model should forecast 61 future days ahead. Linear regression
###Code
%%time
linear_regression = LinearRegression().fit(
np.arange(len(train_selling)).reshape((-1, 1)), train_selling
)
linear_future = linear_regression.predict(
np.arange(len(train_selling) + future_count).reshape((-1, 1))
)
###Output
_____no_output_____
###Markdown
Took me 594 us to train linear regression from sklearn. Very quick!
###Code
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Oh no, based on the linear relationship, the trend is going down! ARIMAStands for Auto-Regressive Integrated Moving Average.There are 3 important parameters you need to know about ARIMA: ARIMA(p, d, q). You will be able to see what `p`, `d`, `q` are on wikipedia, https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average.`p` for the order (number of time lags).`d` for the degree of differencing.`q` for the order of the moving-average.Or,`p` is how far back we need to look.`d` is the skip value when calculating future differences.`q` is how many periods for the moving average.
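For reference, the textbook ARIMA(p, d, q) equation on the $d$-times differenced series $y'_t$ is $$y'_t = c + \sum_{i=1}^{p} \phi_i y'_{t-i} + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t$$ where the $\phi_i$ are the autoregressive coefficients, the $\theta_j$ are the moving-average coefficients, and $\varepsilon_t$ is white noise.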
###Code
import statsmodels.api as sm
from sklearn.preprocessing import MinMaxScaler
from itertools import product
Qs = range(0, 2)
qs = range(0, 2)
Ps = range(0, 2)
ps = range(0, 2)
D = 1
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
###Output
_____no_output_____
###Markdown
A problem with ARIMA is that you cannot feed it high values, so we need to scale; the simplest we can use is min-max scaling.
###Code
minmax = MinMaxScaler().fit(np.array([train_selling]).T)
minmax_values = minmax.transform(np.array([train_selling]).T)
###Output
_____no_output_____
###Markdown
Now, using naive meshgrid parameter searching, let's find which pair of parameters is the best! **Lower AIC is better!**
###Code
best_aic = float('inf')
for param in parameters_list:
try:
model = sm.tsa.statespace.SARIMAX(
minmax_values[:, 0],
order = (param[0], D, param[1]),
seasonal_order = (param[2], D, param[3], future_count),
).fit(disp = -1)
except Exception as e:
print(e)
continue
aic = model.aic
print(aic)
if aic < best_aic and aic:
best_model = model
best_aic = aic
arima_future = best_model.get_prediction(
start = 0, end = len(train_selling) + (future_count - 1)
)
arima_future = minmax.inverse_transform(
np.expand_dims(arima_future.predicted_mean, axis = 1)
)[:, 0]
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Perfect!Now we are left with RNN + LSTM
###Code
import tensorflow as tf
class Model:
def __init__(
self,
learning_rate,
num_layers,
size,
size_layer,
output_size,
forget_bias = 0.1,
):
def lstm_cell(size_layer):
return tf.nn.rnn_cell.LSTMCell(size_layer, state_is_tuple = False)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell(
[lstm_cell(size_layer) for _ in range(num_layers)],
state_is_tuple = False,
)
self.X = tf.placeholder(tf.float32, (None, None, size))
self.Y = tf.placeholder(tf.float32, (None, output_size))
drop = tf.contrib.rnn.DropoutWrapper(
rnn_cells, output_keep_prob = forget_bias
)
self.hidden_layer = tf.placeholder(
tf.float32, (None, num_layers * 2 * size_layer)
)
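        # with state_is_tuple = False the LSTM packs (c, h) into one flat
        # vector per layer, hence the state size of num_layers * 2 * size_layer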
self.outputs, self.last_state = tf.nn.dynamic_rnn(
drop, self.X, initial_state = self.hidden_layer, dtype = tf.float32
)
self.logits = tf.layers.dense(self.outputs[-1], output_size)
self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
self.cost
)
###Output
_____no_output_____
###Markdown
**Naively defined neural network parameters, no meshgrid here. These parameters came from my dream, believe me :)**
###Code
num_layers = 1
size_layer = 128
epoch = 500
dropout_rate = 0.6
skip = 10
###Output
_____no_output_____
###Markdown
Same goes for LSTM: we need to scale our values because LSTM uses sigmoid and tanh functions during the feed-forward pass, and we don't want any vanishing gradients during backpropagation.
###Code
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
df_log.head()
tf.reset_default_graph()
modelnn = Model(
learning_rate = 0.001,
num_layers = num_layers,
size = df_log.shape[1],
size_layer = size_layer,
output_size = df_log.shape[1],
forget_bias = dropout_rate
)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
%%time
for i in range(epoch):
init_value = np.zeros((1, num_layers * 2 * size_layer))
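    # truncated BPTT: the last state of each window seeds the next one, and
    # the state is reset to zeros at the start of every epoch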
total_loss = 0
for k in range(0, df_log.shape[0] - 1, skip):
index = min(k + skip, df_log.shape[0] -1)
batch_x = np.expand_dims(
df_log.iloc[k : index, :].values, axis = 0
)
batch_y = df_log.iloc[k + 1 : index + 1, :].values
last_state, _, loss = sess.run(
[modelnn.last_state, modelnn.optimizer, modelnn.cost],
feed_dict = {
modelnn.X: batch_x,
modelnn.Y: batch_y,
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
total_loss += loss
total_loss /= ((df_log.shape[0] - 1) / skip)
if (i + 1) % 100 == 0:
print('epoch:', i + 1, 'avg loss:', total_loss)
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
future_day = future_count
output_predict = np.zeros((df_log.shape[0] + future_day, df_log.shape[1]))
output_predict[0] = df_log.iloc[0]
upper_b = (df_log.shape[0] // skip) * skip
init_value = np.zeros((1, num_layers * 2 * size_layer))
for k in range(0, (df_log.shape[0] // skip) * skip, skip):
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(
df_log.iloc[k : k + skip], axis = 0
),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[k + 1 : k + skip + 1] = out_logits
if upper_b < df_log.shape[0]:
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(df_log.iloc[upper_b:], axis = 0),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[upper_b + 1 : df_log.shape[0] + 1] = out_logits
df_log.loc[df_log.shape[0]] = out_logits[-1]
future_day = future_day - 1
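# recursive multi-step forecast: feed the last `skip` points (including our own
# predictions) back into the network, producing one new day per iteration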
for i in range(future_day):
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(df_log.iloc[-skip:], axis = 0),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[df_log.shape[0]] = out_logits[-1]
df_log.loc[df_log.shape[0]] = out_logits[-1]
df_log = minmax.inverse_transform(output_predict)
lstm_future = df_log[:,0]
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
ax.plot(lstm_future, label='forecast lstm')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
from sklearn.metrics import r2_score
from scipy.stats import pearsonr, spearmanr
###Output
_____no_output_____
###Markdown
Accuracy based on correlation coefficient, **higher is better!**
###Code
def calculate_accuracy(real, predict):
r2 = r2_score(real, predict)
if r2 < 0:
r2 = 0
def change_percentage(val):
# minmax, we know that correlation is between -1 and 1
if val > 0:
return val
else:
return val + 1
pearson = pearsonr(real, predict)[0]
spearman = spearmanr(real, predict)[0]
pearson = change_percentage(pearson)
spearman = change_percentage(spearman)
return {
'r2': r2 * 100,
'pearson': pearson * 100,
'spearman': spearman * 100,
}
###Output
_____no_output_____
###Markdown
Distance error for mse and rmse, **lower is better!**
###Code
def calculate_distance(real, predict):
mse = ((real - predict) ** 2).mean()
rmse = np.sqrt(mse)
return {'mse': mse, 'rmse': rmse}
###Output
_____no_output_____
###Markdown
Now let's check the distance error using Mean Square Error and Root Mean Square Error, validating based on the 80% training timestamps
###Code
linear_cut = linear_future[: len(train_selling)]
arima_cut = arima_future[: len(train_selling)]
lstm_cut = lstm_future[: len(train_selling)]
###Output
_____no_output_____
###Markdown
Linear regression
###Code
calculate_distance(train_selling, linear_cut)
calculate_accuracy(train_selling, linear_cut)
###Output
_____no_output_____
###Markdown
ARIMA
###Code
calculate_distance(train_selling, arima_cut)
calculate_accuracy(train_selling, arima_cut)
###Output
_____no_output_____
###Markdown
LSTM
###Code
calculate_distance(train_selling, lstm_cut)
calculate_accuracy(train_selling, lstm_cut)
###Output
_____no_output_____
###Markdown
**LSTM learns better during the training session!**How about the other 20%?
###Code
linear_cut = linear_future[len(train_selling) :]
arima_cut = arima_future[len(train_selling) :]
lstm_cut = lstm_future[len(train_selling) :]
###Output
_____no_output_____
###Markdown
Linear regression
###Code
calculate_distance(test_selling, linear_cut)
calculate_accuracy(test_selling, linear_cut)
###Output
_____no_output_____
###Markdown
ARIMA
###Code
calculate_distance(test_selling, arima_cut)
calculate_accuracy(test_selling, arima_cut)
###Output
_____no_output_____
###Markdown
LSTM
###Code
calculate_distance(test_selling, lstm_cut)
calculate_accuracy(test_selling, lstm_cut)
###Output
_____no_output_____
###Markdown
Welcome to Kijang Emas analysis!Around last week (18th March 2019) I found out that our Bank Negara opened public APIs for certain data. It was really cool and I want to help people get around with the data and see what they can actually do with it!We are going to cover 2 things here:
1. Data Analytics
2. Predictive Modelling (Linear regression, ARIMA, LSTM)
Hell, I know nothing about Kijang Emas.**Again, do not use this code to buy something in the real world (if you get a positive return, please donate some to me)**
###Code
import requests
from datetime import date
###Output
_____no_output_____
###Markdown
Data gatheringGetting the data is really simple; use this link to get kijang emas data, https://www.bnm.gov.my/kijang-emas-pricesA REST API is available at https://api.bnm.gov.my/portaltag/Kijang-EmasNow, I want to get data from January 2020 - March 2021.https://api.bnm.gov.my/portaloperation/KELatest
###Code
# latest https://api.bnm.gov.my/public/kijang-emas
requests.get('https://api.bnm.gov.my/public/kijang-emas',
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},).json()
# by month year https://api.bnm.gov.my/public/kijang-emas/year/{year}/month/{month}
month= 12
year = 2020
print ('https://api.bnm.gov.my/public/kijang-emas/year/{}/month/{}'.format(year,month))
res=requests.get('https://api.bnm.gov.my/public/kijang-emas/year/{}/month/{}'.format(year,month),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},).json()
res['meta']['total_result']
###Output
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/12
###Markdown
2020 data
###Code
data_2020 = []
for i in range(12):
res=requests.get('https://api.bnm.gov.my/public/kijang-emas/year/2020/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json()
print('https://api.bnm.gov.my/public/kijang-emas/year/2020/month/%d'%(i + 1),res['meta']['total_result'])
data_2020.append(res)
###Output
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/1 0
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/2 0
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/3 0
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/4 0
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/5 0
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/6 0
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/7 20
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/8 19
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/9 21
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/10 21
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/11 21
https://api.bnm.gov.my/public/kijang-emas/year/2020/month/12 22
###Markdown
2021 data
###Code
data_2021 = []
for i in range(3):
res=requests.get('https://api.bnm.gov.my/public/kijang-emas/year/2021/month/%d'%(i + 1),
headers = {'Accept': 'application/vnd.BNM.API.v1+json'},
).json()
print('https://api.bnm.gov.my/public/kijang-emas/year/2021/month/%d'%(i + 1),res['meta']['total_result'])
data_2021.append(res)
###Output
https://api.bnm.gov.my/public/kijang-emas/year/2021/month/1 19
https://api.bnm.gov.my/public/kijang-emas/year/2021/month/2 19
https://api.bnm.gov.my/public/kijang-emas/year/2021/month/3 3
###Markdown
Take a peek at our data ya
###Code
data_2020[6]['data'][:5]
###Output
_____no_output_____
###Markdown
Again, I got zero knowledge on kijang emas and I don't really care about the value, and I don't know what the value represents.Now I want to parse `effective_date` and `selling` from `one_oz`.
###Code
timestamp, selling = [], []
for month in data_2020 + data_2021:
for day in month['data']:
timestamp.append(day['effective_date'])
selling.append(day['one_oz']['selling'])
len(timestamp), len(selling)
###Output
_____no_output_____
###Markdown
Going to import matplotlib and seaborn for visualization. I really like seaborn because of the fonts and colors, that's all, hah!
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set()
plt.figure(figsize = (15, 5))
plt.plot(selling)
plt.xticks(np.arange(len(timestamp))[::15], timestamp[::15], rotation = '45')
plt.show()
###Output
_____no_output_____
###Markdown
Perfect!So now let's start our data analytics. Distribution study
###Code
plt.figure(figsize = (15, 5))
sns.histplot(data=selling,stat='density', kde=True)
plt.show()
###Output
_____no_output_____
###Markdown
Look at this, already a normal distribution, coincidence? (I really want to show off [unit scaling](https://en.wikipedia.org/wiki/Feature_scaling) skills!)In case you are interested in [data normalization](https://towardsdatascience.com/all-kinds-of-cool-feature-scalers-537e54bc22ab), you have to understand scalers. The intention of a scaler is to lower the variance of the data in order to make most of the predictions lie in the area with the most data. There are many different scalers, which can boost your accuracy: RescalerRescaling, or min-max normalization, uses the minimum and maximum values to scale an array. $$x'=\frac{x-\min(x)}{\max(x)-\min(x)}$$I haven't really found it to be all that useful for machine-learning. I would say check it out only for the information and learning, because this scaler typically throws estimations off and destroys accuracy in my experience. In one situation, I was able to use a rescaler as a min-max filter for bad data outputs on an endpoint. Though this certainly doesn't cover the lost ground, I think that it was definitely a cool use for it.
###Code
def rescaler(x):
return (x-x.min())/(x.max()-x.min())
plt.figure(figsize = (15, 5))
sns.histplot(rescaler(np.array(selling)),stat='density')
plt.show()
###Output
_____no_output_____
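###Markdown
As an aside, the "min-max filter" idea mentioned above can be sketched with `np.clip`; the simulated drift and the clamping bounds below are hypothetical, purely for illustration.
###Code
# clamp values to the [min, max] band observed in the data (illustrative use only)
lower, upper = np.min(selling), np.max(selling)
drifted = np.array(selling) * 1.05          # pretend an endpoint returned values drifted 5% high
clipped = np.clip(drifted, lower, upper)    # bad outputs get pinned back into the known band
print(clipped.min() >= lower, clipped.max() <= upper)
###Output
_____no_output_____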
###Markdown
Mean Normalization Mean normalization is exactly what it sounds like: normalizing the data based on the mean. This one certainly could be useful; the only issue is that a z-score scaler typically does a lot better at normalizing the data than a mean normalizer. $$x'=\frac{x-\operatorname{mean}(x)}{\max(x)-\min(x)}$$ I haven't used this one particularly much, as it typically returns a lower accuracy score than a standard scaler.
###Code
def mean_norm(x):
return (x-x.mean())/(x.max()-x.min())
plt.figure(figsize = (15, 5))
sns.histplot(mean_norm(np.array(selling)),stat='density')
plt.show()
###Output
_____no_output_____
###Markdown
Arbitrary Rescale $$x'=a+\frac{(x-\min(x))(b-a)}{\max(x)-\min(x)}$$ Arbitrary rescale maps the data onto an arbitrary range $[a, b]$ instead of $[0, 1]$; it is particularly useful when you have a small quartile gap, meaning that the median isn't far from the minimum or the maximum values.
###Code
def arb_rescaler(x, a, b):
    # rescale x onto an arbitrary range [a, b]
    return a + ((x - x.min()) * (b - a)) / (x.max() - x.min())
plt.figure(figsize = (15, 5))
sns.histplot(arb_rescaler(np.array(selling), a = 0, b = 10),stat='density')  # a, b chosen arbitrarily for illustration
plt.show()
###Output
_____no_output_____
###Markdown
Standard Scaler A standard scaler, also known as a z-score normalizer, is likely the best go-to for scaling continuous features. The idea behind StandardScaler is that it will transform your data such that its distribution has a mean of 0 and a standard deviation of 1. $$x'=\frac{x-\bar{x}}{\sigma}$$ If you ever need an accuracy boost, this is the way to do it. I've used standard scalers a lot; probably every day I use one at some point. For me, standard scaling has been the most useful out of all of the scalers, as it is for most people.
###Code
def standard_scaler(x):
return (x-x.mean())/(x.std())
plt.figure(figsize = (15, 5))
sns.histplot(standard_scaler(np.array(selling)),stat='density')
plt.show()
###Output
_____no_output_____
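###Markdown
For reference, the hand-rolled z-score above should match `sklearn.preprocessing.StandardScaler` (both use the ddof-0 standard deviation); a minimal sketch:
###Code
from sklearn.preprocessing import StandardScaler
sk_scaled = StandardScaler().fit_transform(np.array(selling).reshape(-1, 1))[:, 0]
print(np.allclose(sk_scaled, standard_scaler(np.array(selling))))
###Output
_____no_output_____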
###Markdown
Unit Length Scaler Another option we have on the machine-learning front is scaling to unit length. When scaling to vector unit length, we transform the components of a feature vector so that the transformed vector has a length of 1, or in other words, a norm of 1. $$x'=\frac{x}{||x||}$$ There are different ways to define "length", such as l1- or l2-normalization. If you use l2-normalization, "unit norm" essentially means that if we squared each element in the vector and summed them, it would equal 1, while in l1-normalization the absolute values of the elements sum to 1. Scaling to unit length can offer a similar result to z-score normalization, and I have certainly found it pretty useful. Unit length scalers use the Euclidean distance in the denominator (in the l2 case). Overall, unit length scaling can be very useful for boosting your model's accuracy. So given a matrix X, where the rows represent samples and the columns represent features of the sample, you can apply l2-normalization to normalize each row to a unit norm. This can be done easily in Python using sklearn.
###Code
from sklearn import preprocessing
def unit_length_scaler_l2(x):
return preprocessing.normalize(np.expand_dims(x, axis=0), norm='l2')[0]
print (np.sum(unit_length_scaler_l2(np.array(selling,dtype=float))**2, axis=0))
plt.figure(figsize = (15, 5))
sns.histplot(unit_length_scaler_l2(np.array(selling,dtype=float)),stat='density')
plt.show()
def unit_length_scaler_l1(x):
    return preprocessing.normalize(np.expand_dims(x, axis=0), norm='l1')[0]
print (np.sum(np.abs(unit_length_scaler_l1(np.array(selling,dtype=float))), axis=0))
plt.figure(figsize = (15, 5))
sns.histplot(unit_length_scaler_l1(np.array(selling,dtype=float)),stat='density')
plt.show()
###Output
1.0
###Markdown
Now let's convert our data into pandas for lagging analysis.
###Code
import pandas as pd
df = pd.DataFrame({'timestamp':timestamp, 'selling':selling})
df.head()
def df_shift(df, lag = 0, start = 1, skip = 1, rejected_columns = []):
df = df.copy()
if not lag:
return df
cols = {}
for i in range(start, lag + 1, skip):
for x in list(df.columns):
if x not in rejected_columns:
if not x in cols:
cols[x] = ['{}_{}'.format(x, i)]
else:
cols[x].append('{}_{}'.format(x, i))
for k, v in cols.items():
columns = v
dfn = pd.DataFrame(data = None, columns = columns, index = df.index)
i = start - 1
for c in columns:
dfn[c] = df[k].shift(periods = i)
i += skip
df = pd.concat([df, dfn], axis = 1).reindex(df.index)
return df
###Output
_____no_output_____
###Markdown
**Shifted series and moving averages are not the same** (a toy comparison follows the next cell).
###Code
df_crosscorrelated = df_shift(
df, lag = 12, start = 4, skip = 2, rejected_columns = ['timestamp']
)
df_crosscorrelated['ma7'] = df_crosscorrelated['selling'].rolling(7).mean()
df_crosscorrelated['ma14'] = df_crosscorrelated['selling'].rolling(14).mean()
df_crosscorrelated['ma21'] = df_crosscorrelated['selling'].rolling(21).mean()
###Output
_____no_output_____
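###Markdown
A minimal sketch on a hypothetical toy series to make that distinction concrete: `shift` moves the same values later in time, while a rolling mean smooths them.
###Code
toy = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
print(toy.shift(2).tolist())           # [nan, nan, 1.0, 2.0, 3.0] -- values delayed by 2 steps
print(toy.rolling(2).mean().tolist())  # [nan, 1.5, 2.5, 3.5, 4.5] -- values averaged over a 2-step window
###Output
_____no_output_____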
###Markdown
Why do we lag or shift by certain units? Virality takes some time, impacts take some time, and the same goes for price per lot / unit. Now I want to `lag` up to 12 units, `start` at 4 units shifted, and `skip` every 2 units.
###Code
df_crosscorrelated.head(21)
plt.figure(figsize = (20, 4))
plt.subplot(1, 3, 1)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_4'])
mse = (
    (df_crosscorrelated['selling_4'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('selling vs shifted 4, MSE: %f'%(mse))
plt.subplot(1, 3, 2)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_8'])
mse = (
    (df_crosscorrelated['selling_8'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('selling vs shifted 8, MSE: %f'%(mse))
plt.subplot(1, 3, 3)
plt.scatter(df_crosscorrelated['selling'], df_crosscorrelated['selling_12'])
mse = (
    (df_crosscorrelated['selling_12'] - df_crosscorrelated['selling']) ** 2
).mean()
plt.title('selling vs shifted 12, MSE: %f'%(mse))
plt.show()
###Output
_____no_output_____
###Markdown
MSE keeps increasing and increasing!
###Code
plt.figure(figsize = (10, 5))
plt.scatter(
    df_crosscorrelated['selling'],
    df_crosscorrelated['selling_4'],
    label = 'selling vs shifted 4',
)
plt.scatter(
    df_crosscorrelated['selling'],
    df_crosscorrelated['selling_8'],
    label = 'selling vs shifted 8',
)
plt.scatter(
    df_crosscorrelated['selling'],
    df_crosscorrelated['selling_12'],
    label = 'selling vs shifted 12',
)
plt.legend()
plt.show()
fig, ax = plt.subplots(figsize = (15, 5))
df_crosscorrelated.plot(
x = 'timestamp', y = ['selling', 'ma7', 'ma14', 'ma21'], ax = ax
)
plt.xticks(np.arange(len(timestamp))[::10], timestamp[::10], rotation = '45')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, even the 7-day moving average already fails to follow the sudden trend (blue line), meaning **the dilation rate required is less than 7 days. So fast!** How about correlation? We want to study the linear relationship: how many days does it take to impact future sold units?
###Code
colormap = plt.cm.RdBu
plt.figure(figsize = (15, 5))
plt.title('cross correlation', y = 1.05, size = 16)
sns.heatmap(
df_crosscorrelated.iloc[:, 1:].corr(),
linewidths = 0.1,
vmax = 1.0,
cmap = colormap,
linecolor = 'white',
annot = True,
)
plt.show()
###Output
_____no_output_____
###Markdown
Based on this correlation map, look at selling vs selling_X: **the correlation of selling_X from 4 to 12 keeps getting lower, meaning that if today's mean is 50, 4 days later the series should still track about 0.95 * 50 of that mean, and so on.** Outliers Simple: we can use the z-score to detect outliers, i.e. which timestamps gave unusually high or low values.
###Code
std_selling = (selling - np.mean(selling)) / np.std(selling)
def detect(signal, threshold = 2.0):
    detected = []
    for i in range(len(signal)):
        if np.abs(signal[i]) > threshold:
            detected.append(i)
    return detected
###Output
_____no_output_____
###Markdown
Based on the z-score table, 2.0 is already positioned at 97.725% of the population.https://d2jmvrsizmvf4x.cloudfront.net/6iEAaVSaT3aGP52HMzo3_z-score-02.png
###Code
outliers = detect(std_selling)
plt.figure(figsize = (15, 7))
plt.plot(selling)
plt.plot(
np.arange(len(selling)),
selling,
'X',
label = 'outliers',
markevery = outliers,
c = 'r',
)
plt.legend()
plt.show()
###Output
_____no_output_____
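###Markdown
A quick check of that percentile claim with scipy (a standard-normal CDF lookup, nothing specific to this dataset):
###Code
from scipy.stats import norm
print(norm.cdf(2.0))  # ~0.97725, i.e. a z-score of 2.0 sits above ~97.7% of the population
###Output
_____no_output_____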
###Markdown
We can see that **we have both positive and negative outliers**. What happened to our local market on those days? We should study sentiment from local news to do risk analysis. Give us predictive modelling! Okay, okay. Predictive modelling Like I said, I want to compare 3 models: 1. Linear regression 2. ARIMA 3. LSTM in TensorFlow (sorry PyTorch, not used to it). Which model gives the best accuracy and the lowest error rate? **I want to split the first 80% of timestamps for train and the remaining 20% of timestamps for test.**
###Code
from sklearn.linear_model import LinearRegression
train_selling = selling[: int(0.8 * len(selling))]
test_selling = selling[int(0.8 * len(selling)) :]
###Output
_____no_output_____
###Markdown
Beware of `:`!
###Code
future_count = len(test_selling)
future_count
###Output
_____no_output_____
###Markdown
Our model should forecast 61 future days ahead. Linear regression
###Code
%%time
linear_regression = LinearRegression().fit(
np.arange(len(train_selling)).reshape((-1, 1)), train_selling
)
linear_future = linear_regression.predict(
np.arange(len(train_selling) + future_count).reshape((-1, 1))
)
###Output
CPU times: user 958 ยตs, sys: 132 ยตs, total: 1.09 ms
Wall time: 1.47 ms
###Markdown
Training the linear regression with sklearn took about a millisecond. Very quick!
###Code
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Oh no: based on a linear relationship, the trend is going down! ARIMA Stands for Auto-Regressive Integrated Moving Average. There are 3 important parameters you need to know in ARIMA(p, d, q). You can see what `p`, `d`, `q` are on Wikipedia, https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average. `p` is the order of the auto-regressive part (number of time lags). `d` is the degree of differencing. `q` is the order of the moving-average part. Or, informally: `p` is how far back we look, `d` is how many times we difference the series, and `q` is how many past forecast errors the moving average uses.
###Code
import statsmodels.api as sm
from sklearn.preprocessing import MinMaxScaler
from itertools import product
Qs = range(0, 2)
qs = range(0, 2)
Ps = range(0, 2)
ps = range(0, 2)
D = 1
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
###Output
_____no_output_____
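###Markdown
To build intuition for the differencing order `d` just introduced, here is a minimal sketch on a hypothetical toy series: `d = 1` means the model works on first differences, which turns a steady trend into a roughly constant series.
###Code
toy = np.array([10.0, 12.0, 15.0, 19.0, 24.0])
print(np.diff(toy, n=1))  # first differences (d = 1): [2. 3. 4. 5.]
print(np.diff(toy, n=2))  # second differences (d = 2): [1. 1. 1.]
###Output
_____no_output_____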
###Markdown
A problem with ARIMA here is that you cannot feed it very large raw values, so we need to scale first; the simplest option is min-max scaling.
###Code
minmax = MinMaxScaler().fit(np.array([train_selling]).T)
minmax_values = minmax.transform(np.array([train_selling]).T)
###Output
_____no_output_____
###Markdown
Now, using a naive meshgrid parameter search, let's find which combination of parameters is best. **Lower is better!**
###Code
best_aic = float('inf')
for param in parameters_list:
try:
model = sm.tsa.statespace.SARIMAX(
minmax_values[:, 0],
order = (param[0], D, param[1]),
seasonal_order = (param[2], D, param[3], future_count),
).fit(disp = -1)
except Exception as e:
print(e)
continue
aic = model.aic
print(aic)
if aic < best_aic and aic:
best_model = model
best_aic = aic
print(best_model.specification)
print(best_model.model_orders)
arima_future = best_model.get_prediction(
start = 0, end = len(train_selling) + (future_count - 1)
)
arima_future = minmax.inverse_transform(
np.expand_dims(arima_future.predicted_mean, axis = 1)
)[:, 0]
###Output
-187.78616146295357
-205.19453188021407
-201.4875872721453
-203.26230458282146
-186.52583297253435
-203.27996049747335
-199.7816724112022
-201.3554609926802
-186.643146275856
-203.28294846406104
-199.800155252524
-201.3573909358683
-184.90491807562725
-201.27461613963035
-197.8185181619375
-199.29909459860403
{'seasonal_periods': 33, 'measurement_error': False, 'time_varying_regression': False, 'simple_differencing': False, 'enforce_stationarity': True, 'enforce_invertibility': True, 'hamilton_representation': False, 'concentrate_scale': False, 'trend_offset': 1, 'order': (0, 1, 0), 'seasonal_order': (0, 1, 1, 33), 'k_diff': 1, 'k_seasonal_diff': 1, 'k_ar': 0, 'k_ma': 0, 'k_seasonal_ar': 0, 'k_seasonal_ma': 33, 'k_ar_params': 0, 'k_ma_params': 0, 'trend': None, 'k_trend': 0, 'k_exog': 0, 'mle_regression': False, 'state_regression': False}
{'trend': 0, 'exog': 0, 'ar': 0, 'ma': 0, 'seasonal_ar': 0, 'seasonal_ma': 33, 'reduced_ar': 0, 'reduced_ma': 33, 'exog_variance': 0, 'measurement_variance': 0, 'variance': 1}
###Markdown
Auto-ARIMA https://towardsdatascience.com/time-series-forecasting-using-auto-arima-in-python-bb83e49210cd Usually, in the basic ARIMA model, we need to provide the p, d, and q values, which are essential. We use statistical techniques to generate these values, performing differencing to eliminate non-stationarity and plotting ACF and PACF graphs. In Auto-ARIMA, the model itself generates the optimal p, d, and q values suitable for the dataset, to provide better forecasting.
###Code
from pmdarima.arima import auto_arima
###Output
_____no_output_____
###Markdown
Test for Stationarity Stationarity is an important concept in time series, and any time-series data should undergo a stationarity test before proceeding with a model. We use the Augmented Dickey-Fuller test, available in the `pmdarima` package, to check whether the data is stationary.
###Code
from pmdarima.arima import ADFTest
adf_test = ADFTest(alpha = 0.05)
adf_test.should_diff(np.array(train_selling))
###Output
_____no_output_____
###Markdown
From the above, we can conclude that the data is stationary. Hence, we would not need to use the "Integrated (I)" concept, denoted by the value `d` in the time series, to make the data stationary while building the Auto-ARIMA model. Building the Auto-ARIMA model In the Auto-ARIMA model, note that small p, d, q values represent non-seasonal components, and capital P, D, Q represent seasonal components. It works similarly to hyperparameter tuning techniques: it tries different combinations of p, d, and q, and the final values are determined by the lower AIC and BIC, taking both into consideration. Here, we are trying p, d, q values ranging from 0 to 5 to get better optimal values from the model. There are many other parameters in this model; to know more about the functionality, visit the link [here](https://alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html)
###Code
auto_arima_model=auto_arima(train_selling, start_p=0, d=1, start_q=0, D=1, start_Q=0, max_P=5, max_d=5, max_Q=5, m=12, seasonal=True, error_action='warn', trace=True, suppress_warnings=True, stepwise=True, random_state=20, n_fits=50)
auto_arima_model.summary()
###Output
Performing stepwise search to minimize aic
ARIMA(0,1,0)(1,1,0)[12] : AIC=1488.389, Time=0.09 sec
ARIMA(0,1,0)(0,1,0)[12] : AIC=1524.236, Time=0.02 sec
ARIMA(1,1,0)(1,1,0)[12] : AIC=1490.281, Time=0.13 sec
ARIMA(0,1,1)(0,1,1)[12] : AIC=inf, Time=0.27 sec
ARIMA(0,1,0)(2,1,0)[12] : AIC=1475.763, Time=0.16 sec
ARIMA(0,1,0)(3,1,0)[12] : AIC=1475.858, Time=0.41 sec
ARIMA(0,1,0)(2,1,1)[12] : AIC=inf, Time=0.68 sec
ARIMA(0,1,0)(1,1,1)[12] : AIC=inf, Time=0.16 sec
ARIMA(0,1,0)(3,1,1)[12] : AIC=inf, Time=1.14 sec
ARIMA(1,1,0)(2,1,0)[12] : AIC=1477.721, Time=0.21 sec
ARIMA(0,1,1)(2,1,0)[12] : AIC=1477.726, Time=0.24 sec
ARIMA(1,1,1)(2,1,0)[12] : AIC=inf, Time=0.94 sec
ARIMA(0,1,0)(2,1,0)[12] intercept : AIC=1477.381, Time=0.51 sec
Best model: ARIMA(0,1,0)(2,1,0)[12]
Total fit time: 4.956 seconds
###Markdown
In the basic ARIMA or SARIMA model, you need to perform differencing and plot ACF and PACF graphs yourself to determine these values, which is time-consuming (a minimal sketch of those plots follows after the next cell). However, it is always advisable to go with the statistical techniques and implement the basic ARIMA model first, to understand the intuition behind the p, d, and q values if you are new to time series. Forecasting on the test data Using the trained model built in the earlier step to forecast on the test data.
###Code
auto_arima_future = train_selling.copy()  # copy, so extending it does not mutate train_selling in place
auto_arima_future.extend(auto_arima_model.predict(n_periods=len(test_selling)))
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
ax.plot(auto_arima_future, label = 'forecast auto ARIMA')
ax.plot(train_selling, label = '80% train trend')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
###Output
_____no_output_____
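###Markdown
The ACF and PACF plots mentioned above are one line each in statsmodels; a minimal sketch on the training series (`lags = 30` is an arbitrary choice here):
###Code
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
fig, axes = plt.subplots(1, 2, figsize = (15, 4))
plot_acf(train_selling, lags = 30, ax = axes[0])   # autocorrelation, informs q
plot_pacf(train_selling, lags = 30, ax = axes[1])  # partial autocorrelation, informs p
plt.show()
###Output
_____no_output_____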
###Markdown
Perfect! Now we're left with the RNN + LSTM.
###Code
import tensorflow as tf
class Model:
def __init__(
self,
learning_rate,
num_layers,
size,
size_layer,
output_size,
forget_bias = 0.1,
):
        # each LSTM cell keeps its state as one flat tensor (state_is_tuple = False)
        # so the whole stacked state can be fed through a single placeholder
        def lstm_cell(size_layer):
            return tf.nn.rnn_cell.LSTMCell(size_layer, state_is_tuple = False)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell(
[lstm_cell(size_layer) for _ in range(num_layers)],
state_is_tuple = False,
)
self.X = tf.placeholder(tf.float32, (None, None, size))
self.Y = tf.placeholder(tf.float32, (None, output_size))
        # dropout on the LSTM outputs; note the forget_bias argument is reused
        # here as the keep probability
        drop = tf.contrib.rnn.DropoutWrapper(
            rnn_cells, output_keep_prob = forget_bias
        )
self.hidden_layer = tf.placeholder(
tf.float32, (None, num_layers * 2 * size_layer)
)
self.outputs, self.last_state = tf.nn.dynamic_rnn(
drop, self.X, initial_state = self.hidden_layer, dtype = tf.float32
)
        # batch size is 1 throughout, so outputs[-1] is the (time, size_layer)
        # output sequence; project it down to the target dimension
        self.logits = tf.layers.dense(self.outputs[-1], output_size)
self.cost = tf.reduce_mean(tf.square(self.Y - self.logits))
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
self.cost
)
###Output
_____no_output_____
###Markdown
**Naively defined neural network parameters, no meshgrid search here; these parameters came to me in a dream, believe me :)**
###Code
num_layers = 1
size_layer = 128
epoch = 500
dropout_rate = 0.6
skip = 10
###Output
_____no_output_____
###Markdown
The same goes for the LSTM: we need to scale our values because the LSTM uses sigmoid and tanh functions during the feed-forward pass, and we don't want any vanishing gradients during backpropagation (a quick numeric check of this is inlined in the next cell).
###Code
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
df_log.head()
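# quick numeric check of the vanishing-gradient point above: the tanh derivative
# 1 - tanh(x)^2 is already ~1.8e-4 at x = 5, so feeding unscaled prices in the
# thousands would saturate the gates (illustrative only, not part of the model)
print(1 - np.tanh(5.0) ** 2)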
tf.reset_default_graph()
modelnn = Model(
learning_rate = 0.001,
num_layers = num_layers,
size = df_log.shape[1],
size_layer = size_layer,
output_size = df_log.shape[1],
forget_bias = dropout_rate
)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
%%time
for i in range(epoch):
init_value = np.zeros((1, num_layers * 2 * size_layer))
total_loss = 0
for k in range(0, df_log.shape[0] - 1, skip):
index = min(k + skip, df_log.shape[0] -1)
batch_x = np.expand_dims(
df_log.iloc[k : index, :].values, axis = 0
)
batch_y = df_log.iloc[k + 1 : index + 1, :].values
last_state, _, loss = sess.run(
[modelnn.last_state, modelnn.optimizer, modelnn.cost],
feed_dict = {
modelnn.X: batch_x,
modelnn.Y: batch_y,
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
total_loss += loss
total_loss /= ((df_log.shape[0] - 1) / skip)
if (i + 1) % 100 == 0:
print('epoch:', i + 1, 'avg loss:', total_loss)
df = pd.DataFrame({'values': train_selling})
minmax = MinMaxScaler().fit(df)
df_log = minmax.transform(df)
df_log = pd.DataFrame(df_log)
future_day = future_count
output_predict = np.zeros((df_log.shape[0] + future_day, df_log.shape[1]))
output_predict[0] = df_log.iloc[0]
upper_b = (df_log.shape[0] // skip) * skip
init_value = np.zeros((1, num_layers * 2 * size_layer))
for k in range(0, (df_log.shape[0] // skip) * skip, skip):
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(
df_log.iloc[k : k + skip], axis = 0
),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[k + 1 : k + skip + 1] = out_logits
if upper_b < df_log.shape[0]:
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(df_log.iloc[upper_b:], axis = 0),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[upper_b + 1 : df_log.shape[0] + 1] = out_logits
df_log.loc[df_log.shape[0]] = out_logits[-1]
future_day = future_day - 1
for i in range(future_day):
out_logits, last_state = sess.run(
[modelnn.logits, modelnn.last_state],
feed_dict = {
modelnn.X: np.expand_dims(df_log.iloc[-skip:], axis = 0),
modelnn.hidden_layer: init_value,
},
)
init_value = last_state
output_predict[df_log.shape[0]] = out_logits[-1]
df_log.loc[df_log.shape[0]] = out_logits[-1]
df_log = minmax.inverse_transform(output_predict)
lstm_future = df_log[:,0]
fig, ax = plt.subplots(figsize = (15, 5))
ax.plot(selling, label = '20% test trend')
ax.plot(train_selling, label = '80% train trend')
ax.plot(linear_future, label = 'forecast linear regression')
ax.plot(arima_future, label = 'forecast ARIMA')
ax.plot(lstm_future, label='forecast lstm')
plt.xticks(
np.arange(len(timestamp))[::10],
np.arange(len(timestamp))[::10],
rotation = '45',
)
plt.legend()
plt.show()
from sklearn.metrics import r2_score
from scipy.stats import pearsonr, spearmanr
###Output
_____no_output_____
###Markdown
Accuracy based on correlation coefficient, **higher is better!**
###Code
def calculate_accuracy(real, predict):
r2 = r2_score(real, predict)
if r2 < 0:
r2 = 0
    def change_percentage(val):
        # correlation lies in [-1, 1]; shift negative values into (0, 1]
        # so every metric can be reported on a 0-100% scale
        if val > 0:
            return val
        else:
            return val + 1
pearson = pearsonr(real, predict)[0]
spearman = spearmanr(real, predict)[0]
pearson = change_percentage(pearson)
spearman = change_percentage(spearman)
return {
'r2': r2 * 100,
'pearson': pearson * 100,
'spearman': spearman * 100,
}
###Output
_____no_output_____
###Markdown
Distance error for mse and rmse, **lower is better!**
###Code
def calculate_distance(real, predict):
mse = ((real - predict) ** 2).mean()
rmse = np.sqrt(mse)
return {'mse': mse, 'rmse': rmse}
###Output
_____no_output_____
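###Markdown
A tiny usage sketch of both helpers on hypothetical arrays, just to show the expected outputs:
###Code
toy_real = np.array([1.0, 2.0, 3.0, 4.0])
toy_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(calculate_distance(toy_real, toy_pred))  # {'mse': ..., 'rmse': ...}
print(calculate_accuracy(toy_real, toy_pred))  # {'r2': ..., 'pearson': ..., 'spearman': ...}
###Output
_____no_output_____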
###Markdown
Now let's check distance error using Mean Square Error and Root Mean Square ErrorValidating based on 80% training timestamps
###Code
linear_cut = linear_future[: len(train_selling)]
arima_cut = arima_future[: len(train_selling)]
lstm_cut = lstm_future[: len(train_selling)]
###Output
_____no_output_____
###Markdown
Linear regression
###Code
calculate_distance(train_selling, linear_cut)
calculate_accuracy(train_selling, linear_cut)
###Output
_____no_output_____
###Markdown
ARIMA
###Code
calculate_distance(train_selling, arima_cut)
calculate_accuracy(train_selling, arima_cut)
###Output
_____no_output_____
###Markdown
LSTM
###Code
calculate_distance(train_selling, lstm_cut)
calculate_accuracy(train_selling, lstm_cut)
###Output
_____no_output_____
###Markdown
**The LSTM learns better during the training session!** How about the other 20%?
###Code
linear_cut = linear_future[len(train_selling) :]
arima_cut = arima_future[len(train_selling) :]
lstm_cut = lstm_future[len(train_selling) :]
###Output
_____no_output_____
###Markdown
Linear regression
###Code
calculate_distance(test_selling, linear_cut)
calculate_accuracy(test_selling, linear_cut)
###Output
_____no_output_____
###Markdown
ARIMA
###Code
calculate_distance(test_selling, arima_cut)
calculate_accuracy(test_selling, arima_cut)
###Output
_____no_output_____
###Markdown
LSTM
###Code
calculate_distance(test_selling, lstm_cut)
calculate_accuracy(test_selling, lstm_cut)
###Output
_____no_output_____ |
Codes/Data Integration.ipynb | ###Markdown
This notebook collects the code that reads the Food Atlas datasets and the CDC health indicator datasets from a GitHub repository, integrates the datasets, and cleans the data.
###Code
#merge food atlas datasets into one
import pandas as pd
Overall_folder='C:/Users/cathy/Capstone_project_1/'
dfs=list()
url_folder='https://raw.githubusercontent.com/cathyxinxyz/Capstone_Project_1/master/Datasets/Food_atlas/'
filenames=['ACCESS','ASSISTANCE','HEALTH','INSECURITY','LOCAL','PRICES_TAXES','RESTAURANTS','SOCIOECONOMIC','STORES']
for i,filename in enumerate(filenames):
filepath=url_folder+filename+".csv"
d=pd.read_csv(filepath,index_col='FIPS',encoding="ISO-8859-1")
    #append datasets to the list and drop the redundant columns: 'State' and 'County'
if i!=0:
dfs.append(d.drop(['State', 'County'], axis=1))
else:
dfs.append(d)
#merge datasets
df_merge=pd.concat(dfs, join='outer', axis=1)
print (df_merge.head(5))
###Output
State County LACCESS_POP10 LACCESS_POP15 PCH_LACCESS_POP_10_15 \
FIPS
1001 AL Autauga 18428.439690 17496.693040 -5.056026
1003 AL Baldwin 35210.814080 30561.264430 -13.204891
1005 AL Barbour 5722.305602 6069.523628 6.067799
1007 AL Bibb 1044.867327 969.378841 -7.224696
1009 AL Blount 1548.175559 3724.428242 140.568857
PCT_LACCESS_POP10 PCT_LACCESS_POP15 LACCESS_LOWI10 LACCESS_LOWI15 \
FIPS
1001 33.769657 32.062255 5344.427472 6543.676824
1003 19.318473 16.767489 9952.144027 9886.831137
1005 20.840972 22.105560 3135.676086 2948.790251
1007 4.559753 4.230324 491.449066 596.162829
1009 2.700840 6.497380 609.027708 1650.959482
PCH_LACCESS_LOWI_10_15 ... PCH_SNAPS_12_16 SNAPSPTH12 \
FIPS ...
1001 22.439248 ... 30.957684 0.674004
1003 -0.656270 ... 58.313251 0.725055
1005 -5.959985 ... 11.961722 1.280590
1007 21.307144 ... 29.230770 0.719122
1009 171.081177 ... 68.421051 0.657144
SNAPSPTH16 PCH_SNAPSPTH_12_16 WICS08 WICS12 PCH_WICS_08_12 \
FIPS
1001 0.884221 31.189270 6 5 -16.66667
1003 1.050042 44.822353 25 27 8.00000
1005 1.502022 17.291382 6 7 16.66667
1007 0.927439 28.968229 6 5 -16.66667
1009 1.109109 68.777138 10 6 -40.00000
WICSPTH08 WICSPTH12 PCH_WICSPTH_08_12
FIPS
1001 0.119156 0.090067 -24.412460
1003 0.141875 0.141517 -0.252126
1005 0.201099 0.257344 27.968330
1007 0.277919 0.221268 -20.383970
1009 0.173028 0.103760 -40.033200
[5 rows x 279 columns]
###Markdown
Check columns for missing values
###Code
df_merge.describe()
number_null_values_percol=df_merge.isnull().sum(axis=0)
#columns with over 5% missing values
cols_with_over_5_percent_null_values=number_null_values_percol[number_null_values_percol>0.05*df_merge.shape[0]]
print (cols_with_over_5_percent_null_values.index)
#drop these columns first
df_merge=df_merge.drop(list(cols_with_over_5_percent_null_values.index), axis=1)
df_merge.shape
#check number of remaining columns
print (df_merge.columns)
###Output
Index(['State', 'County', 'LACCESS_POP10', 'LACCESS_POP15',
'PCH_LACCESS_POP_10_15', 'PCT_LACCESS_POP10', 'PCT_LACCESS_POP15',
'LACCESS_LOWI10', 'LACCESS_LOWI15', 'PCH_LACCESS_LOWI_10_15',
...
'PCH_SNAPS_12_16', 'SNAPSPTH12', 'SNAPSPTH16', 'PCH_SNAPSPTH_12_16',
'WICS08', 'WICS12', 'PCH_WICS_08_12', 'WICSPTH08', 'WICSPTH12',
'PCH_WICSPTH_08_12'],
dtype='object', length=195)
###Markdown
Categorize columns into groups: category data ('State' and 'County'), count data, percent data, per-1,000-population data, and percent change. Columns to keep: category data ('State' and 'County'), percent data, per-1,000-population data, and percent change; remove count data because it is not adjusted by population size. Each column name is highly abstract and unreadable, so we need to extract info from the variable information provided by the Food Atlas.
###Code
url='https://raw.githubusercontent.com/cathyxinxyz/Capstone_Project_1/master/Datasets/Food_atlas/variable_info.csv'
var_info_df=pd.read_csv(url,encoding="ISO-8859-1", index_col='Variable Code')
###Output
_____no_output_____
###Markdown
Further filter variables based on the following principles: i. keep variables that are adjusted by population size ('% change', 'Percent', '# per 1,000 pop', 'Percentage points'); ii. keep variables that are most valuable for analysis; iii. keep variables whose values are valid, e.g. no negative values for variables with units 'Percent' or '# per 1,000 pop'.
###Code
#units to keep: '% change', 'Percent', '# per 1,000 pop','Percentage points'
#var_info_df['Units'].isin(['Percent', '# per 1,000 pop','Dollars'])
var_info_df_subset=var_info_df[var_info_df['Units'].isin(['Percent', '# per 1,000 pop','Dollars'])]
var_subset=list(var_info_df_subset.index)
var_subset.extend(['State', 'County'])
#print (var_subset)
df_subset=df_merge.loc[:, var_subset]
#print (df_merge.shape)
print (df_subset.shape)
#check whether each column has valid values:
#columns with units 'Percent' should have values between 0 and 100; any value
#that falls out of this range should be changed to NaN
#Replace invalid values with np.nan
import numpy as np
for c in df_subset.columns:
    if c in var_info_df.index:
        if var_info_df.loc[c]['Units'] =='Percent':
            df_subset.loc[(df_subset[c]<0)|(df_subset[c]>100), c]=np.nan
        elif var_info_df.loc[c]['Units'] =='# per 1,000 pop':
            df_subset.loc[(df_subset[c]<0)|(df_subset[c]>1000), c]=np.nan
        elif var_info_df.loc[c]['Units'] =='Dollars':
            df_subset.loc[df_subset[c]<0, c]=np.nan
df_subset.shape
###Output
_____no_output_____
###Markdown
get the average of variables measured at two time points
###Code
var_tup_dict={}
for c in df_subset.columns:
if c in var_info_df.index:
k=(var_info_df.loc[c]['Category Name'],var_info_df.loc[c]['Sub_subcategory Name'],var_info_df.loc[c]['Units'])
if k not in var_tup_dict.keys():
var_tup_dict[k]=list()
var_tup_dict[k].append(c)
print (var_tup_dict)
n=1
var_name_cat_subcat=list()
for k in var_tup_dict.keys():
df_subset['var'+str(n)]=(df_subset[var_tup_dict[k][0]]+df_subset[var_tup_dict[k][-1]])/2
var_name_cat_subcat.append(['var'+str(n), k[0], k[1]])
df_subset=df_subset.drop(var_tup_dict[k], axis=1)
n+=1
df_subset.columns
df_subset.shape
#further drop variables that have redundant information
dropped=['var'+str(n) for n in [24,25, 42]]
dropped.extend(['var'+str(n) for n in range(45,54)])
dropped.extend(['var'+str(n) for n in [55,56]])
df_subset=df_subset.drop(dropped, axis=1)
df_subset.shape
df_subset=df_subset.drop(['var28','var29','var43','var54','var57'],axis=1)
var_name_info_df=pd.DataFrame(var_name_cat_subcat, columns=['variable','category', 'sub_category'])
var_name_info_df.to_csv('C:/Users/cathy/Capstone_project_1/Datasets/Food_atlas/Var_name_info.csv',index=False)
df_subset.to_csv(Overall_folder+'Datasets/food_environment.csv')
###Output
_____no_output_____
###Markdown
Integrate CDC Datasets together
###Code
import pandas as pd
dfs=list()
sub_folder=Overall_folder+'/Datasets/CDC/'
filenames=['Diabetes_prevalence',
'Obesity_prevalence',
'Physical_inactive_prevalence']
for filename in filenames:
filepath=sub_folder+filename+".csv"
df=pd.read_csv(filepath,index_col='FIPS')
if 'Diabetes' in filename:
df.columns=df.columns.astype(str)+'_db'
elif 'Obesity' in filename:
df.columns=df.columns.astype(str)+'_ob'
elif 'Physical' in filename:
df.columns=df.columns.astype(str)+'_phy'
dfs.append(df)
#merge datasets
CDC_merge=pd.concat(dfs, join='outer', axis=1)
CDC_merge.info()
#Find out the non-numeric entries in CDC_merge
for c in CDC_merge.columns:
    num_non_numeric=(~CDC_merge[c].apply(lambda x: isinstance(x, (int, float)))).sum()
    if num_non_numeric>0:
        print(c, num_non_numeric, CDC_merge[pd.to_numeric(CDC_merge[c], errors='coerce').isnull()])
#It turns out that some entries are 'No Data' or NaN, so I replace the 'No Data' with NaN values
CDC_merge=CDC_merge.replace('No Data', np.nan)
CDC_merge=CDC_merge.astype(float)
#now check the CDC_merge
CDC_merge.info()
#choose the latest prevalence of diabetes, obesity and physical inactivity to merge with df_tp
CDC_subset=CDC_merge[['2013_db','2013_ob','2011_phy','2012_phy','2013_phy']].copy()
CDC_subset['prevalence of physical inactivity']=(CDC_subset['2011_phy']+CDC_subset['2012_phy']+CDC_subset['2013_phy'])/3
CDC_subset.head(5)
CDC_subset.rename(columns={'2013_db': 'prevalence of diabetes', '2013_ob': 'prevalence of obesity'}, inplace=True)
CDC_subset[['prevalence of diabetes', 'prevalence of obesity', 'prevalence of physical inactivity']].to_csv(Overall_folder+'Datasets/Db_ob_phy.csv')
###Output
_____no_output_____
###Markdown
Integrating geography dataset
###Code
df=pd.read_excel(Overall_folder+'Datasets/geography/ruralurbancodes2013.xls')
df.head(5)
df=df.set_index('FIPS')
df_RUCC_info=pd.DataFrame()
df_RUCC_info['RUCC_2013']=df['RUCC_2013'].unique()
df[df['RUCC_2013']==1]
df[df['RUCC_2013']==4]['Description'].unique()[0]
description_dict={code:df[df['RUCC_2013']==code]['Description'].unique()[0]
                  for code in range(1,10)}
description_dict
df_RUCC_info['RUCC_2013']
df_RUCC_info['categories']=df_RUCC_info['RUCC_2013'].map(description_dict)
df_RUCC_info
df_RUCC_info.to_csv(Overall_folder+'Datasets/rural_urban_category.csv', index=False)
df.to_csv(Overall_folder+'Datasets/rural_urban_codes.csv')
df[['RUCC_2013']].to_csv(Overall_folder+'Datasets/RUCC_codes.csv')
###Output
_____no_output_____
###Markdown
Integrate information of uninsured population from 2011 to 2013
###Code
def Guess_skiprows(filename, firstcol):
    #scan for the header row: the first row whose first column name contains firstcol
    for n in range(100):
        try:
            df=pd.read_csv(filename, skiprows=n)
            if firstcol in df.columns[0]:
                print (n, df.columns)
                skiprows=n
                break
        except:
            continue
    return skiprows
import pandas as pd
def Extract_number(x):
import re
if type(x)==str:
        num_string=''.join(re.findall(r'\d+', x ))
if num_string !='':
return float(num_string)
else:
return None
elif type(x) in [int, float]:
return x
def Choose_Subset(df):
df=df[df['agecat']==0]
df=df[df['sexcat']==0]
df=df[df['racecat']==0]
df=df[df['iprcat']==0]
return df
df_dicts={}
years=[2011, 2012, 2013]
for year in years:
filename='C:/Users/cathy/Capstone_Project_1/Datasets/SAHIE/sahie_{}.csv'.format(year)
firstcol='year'
skiprows=Guess_skiprows(filename, firstcol)
df=pd.read_csv(filename, skiprows=skiprows)
df=Choose_Subset(df)
df['FIPS']=df['statefips'].apply((lambda x:('0'+str(x))[-2:]))+df['countyfips'].apply((lambda x:('00'+str(x))[-3:]))
df['FIPS']=df['FIPS'].astype(int)
df=df.set_index('FIPS')
df['NUI']=df['NUI'].apply(Extract_number)
df_dicts[year]=df[['NUI']]
df_dem=pd.read_csv('C:/Users/cathy/Capstone_Project_1/Datasets/Food_atlas/Supplemental_data_county.csv', encoding="ISO-8859-1", index_col='FIPS')
for year in years:
df_dem['Population Estimate, {}'.format(year)]=df_dem['Population Estimate, {}'.format(year)].apply(lambda x:float(''.join(x.split(','))))
df_combineds=list()
for year in years:
df_combined=pd.concat([df_dicts[year], df_dem['Population Estimate, {}'.format(year)]],axis=1, join='inner')
df_combined['frac_uninsured_{}'.format(year)]=df_combined['NUI']/df_combined['Population Estimate, {}'.format(year)]
df_combineds.append(df_combined['frac_uninsured_{}'.format(year)])
df_frac_nui=pd.concat(df_combineds, axis=1)
df_frac_nui
import numpy as np
df_frac_nui['frac_uninsured']=(df_frac_nui['frac_uninsured_2011']+df_frac_nui['frac_uninsured_2012']+df_frac_nui['frac_uninsured_2013'])/3
df_frac_nui['frac_uninsured']
df_frac_nui[['frac_uninsured']].to_csv('C:/Users/cathy/Capstone_Project_1/Datasets/Uninsured.csv')
###Output
_____no_output_____
###Markdown
Integrate all datasets
###Code
filenames=['food_environment', 'Db_ob_phy', 'Uninsured', 'RUCC_codes']
Overall_folder='C:/Users/cathy/Capstone_Project_1/'
dfs=list()
for filename in filenames:
df=pd.read_csv(Overall_folder+'Datasets/'+filename+'.csv', index_col='FIPS', encoding="ISO-8859-1")
dfs.append(df)
df_merge=pd.concat(dfs, axis=1, join='inner')
df_merge.info()
df_merge.to_csv(Overall_folder+'Datasets/combined.csv')
###Output
_____no_output_____
###Markdown
combine state, county, fips code file into one for map
###Code
df=pd.read_csv(Overall_folder+'Datasets/Food_atlas/Supplemental_data_county.csv',encoding="ISO-8859-1", index_col='FIPS')
df.info()
df['State']=df['State'].apply((lambda x:x.lower()))
df['County']=df['County'].apply((lambda x:x.lower()))
df['State']=df['State'].apply((lambda x:("").join(x.split(' '))))
df['County']=df['County'].apply((lambda x:("").join(x.split(' '))))
df['County']
df[['State', 'County']].to_csv(Overall_folder+'Datasets/state_county_name.csv')
###Output
_____no_output_____ |
1. Preprocessing/2. Long vectors.ipynb | ###Markdown
---**Export of unprocessed features**---
###Code
import pandas as pd
import numpy as np
import os
import re
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import CountVectorizer
import random
import pickle
from scipy import sparse
import math
import pprint
import sklearn as sk
import torch
from IPython.display import display
from toolbox import *
# from myMLtoolbox import *
%matplotlib inline
sns.set()
sns.set_context("notebook")
sns.set(rc={'figure.figsize':(14,6)})
cfg = load_cfg()
logVersions = load_LogVersions()
###Output
_____no_output_____
###Markdown
---**For figures**
###Code
from figures_toolbox import *
mpl.rcParams.update(mpl.rcParamsDefault)
sns.set(
context='paper',
style='ticks',
)
%matplotlib inline
mpl.rcParams.update(performancePlot_style)
###Output
_____no_output_____
###Markdown
Get uniprot list of proteins
###Code
uniprotIDs = pd.read_csv(
os.path.join(cfg['rawDataUniProt'],
"uniprot_allProteins_Human_v{}.pkl".format(logVersions['UniProt']['rawData'])),
header=None,
names=['uniprotID']
)
glance(uniprotIDs)
###Output
DataFrame: 20,386 rows 1 columns
###Markdown
Hubs
###Code
path0 = os.path.join(
cfg['outputPreprocessingIntAct'],
"listHubs_20p_v{}.pkl".format(logVersions['IntAct']['preprocessed']['all'])
)
with open(path0, 'rb') as f:
list_hubs20 = pickle.load(f)
glance(list_hubs20)
###Output
list: len 3240
['P42858', 'Q9NRI5', 'A8MQ03', 'P05067', 'P62993']
###Markdown
Load feature datasets
###Code
featuresDict = {
'bioProcessUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"bioProcessUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0', # '0', 'mean', 'none'
'normalise':False,
'isBinary': True,
},
'cellCompUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"cellCompUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'molFuncUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"molFuncUniprot_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'domainUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"domainFT_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'motifUniprot': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"motif_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'Bgee': {
'path': os.path.join(
cfg['outputPreprocessingBgee'],
"Bgee_processed_v{}.pkl".format(logVersions['Bgee']['preprocessed'])
),
'imputeNA': '0',
'normalise':True,
'isBinary': False,
},
'tissueCellHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"tissueIHC_tissueCell_v{}.pkl".format(logVersions['HPA']['preprocessed']['tissueIHC_tissueCell'])
),
'imputeNA': '0',
'normalise':True,
'isBinary': False,
},
'tissueHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"tissueIHC_tissueOnly_v{}.pkl".format(logVersions['HPA']['preprocessed']['tissueIHC_tissueOnly'])
),
'imputeNA': '0',
'normalise':True,
'isBinary': False,
},
'RNAseqHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"consensusRNAseq_v{}.pkl".format(logVersions['HPA']['preprocessed']['consensusRNAseq'])
),
'imputeNA': 'mean',
'normalise':True,
'isBinary': False,
},
'subcellularLocationHPA': {
'path': os.path.join(
cfg['outputPreprocessingHPA'],
"subcellularLocation_v{}.pkl".format(logVersions['HPA']['preprocessed']['subcellularLocation'])
),
'imputeNA': '0',
'normalise':False,
'isBinary': True,
},
'sequence': {
'path': os.path.join(
cfg['outputPreprocessingUniprot'],
"sequenceData_v{}--{}.pkl".format(logVersions['UniProt']['rawData'], logVersions['UniProt']['preprocessed'])
),
'imputeNA':'none',
'normalise':False,
'isBinary': False,
}
}
def sneakPeak(featuresDict):
for feature, details in featuresDict.items():
df = pd.read_pickle(details['path'])
print('## ',feature)
glance(df)
print()
sneakPeak(featuresDict)
###Output
## bioProcessUniprot
DataFrame: 20,386 rows 12,249 columns
###Markdown
EDA **Number of GO terms for hubs and lone proteins**
###Code
def count_GOterms():
countGO = uniprotIDs.copy()
for feature, details in featuresDict.items():
print(feature)
if feature != 'sequence':
df = pd.read_pickle(details['path'])
foo = df.set_index('uniprotID').ne(0).sum(axis=1)
foo2 = pd.DataFrame(foo)
foo2.columns = [feature]
countGO = countGO.join(foo2, on='uniprotID', how='left')
return countGO
countGO = count_GOterms()
glance(countGO)
countGO.info()
countGO['isHub'] = countGO.uniprotID.isin(list_hubs20)
glance(countGO)
sns.displot(countGO, x="bioProcessUniprot", hue="isHub", kind='kde', common_norm=False);
doPlot=False
for feature in featuresDict.keys():
if feature != 'sequence':
foo = countGO.loc[countGO.isHub][feature]
bar = countGO.loc[~countGO.isHub][feature]
print(f"{feature}: on average, hubs have {foo.mean():.2f} GO terms, non-hubs have {bar.mean():.2f} (medians {foo.median():.2f} vs {bar.median():.2f})")
if doPlot:
sns.displot(countGO, x=feature, hue="isHub", kind='kde', common_norm=False)
plt.show();
###Output
bioProcessUniprot: on average, hubs have 11.54 GO terms, non-hubs have 5.72 (medians 6.00 vs 3.00)
cellCompUniprot: on average, hubs have 6.11 GO terms, non-hubs have 3.60 (medians 5.00 vs 3.00)
molFuncUniprot: on average, hubs have 4.19 GO terms, non-hubs have 2.49 (medians 3.00 vs 2.00)
domainUniprot: on average, hubs have 1.09 GO terms, non-hubs have 1.00 (medians 0.00 vs 0.00)
motifUniprot: on average, hubs have 0.28 GO terms, non-hubs have 0.16 (medians 0.00 vs 0.00)
Bgee: on average, hubs have 937.01 GO terms, non-hubs have 876.51 (medians 993.00 vs 940.00)
tissueCellHPA: on average, hubs have 85.08 GO terms, non-hubs have 84.32 (medians 83.00 vs 82.00)
tissueHPA: on average, hubs have 48.73 GO terms, non-hubs have 48.71 (medians 49.00 vs 49.00)
RNAseqHPA: on average, hubs have 56.54 GO terms, non-hubs have 52.39 (medians 61.00 vs 60.00)
subcellularLocationHPA: on average, hubs have 1.78 GO terms, non-hubs have 1.75 (medians 2.00 vs 2.00)
###Markdown
Export vectors lengths
###Code
def getVectorsLengths(featuresDict):
vectorsLengths = dict()
for feature, details in featuresDict.items():
df = pd.read_pickle(details['path'])
assert 'uniprotID' in df.columns
vectorsLengths[feature] = df.shape[1]-1 # -1 to remove uniprotID
return vectorsLengths
vectorsLengths = getVectorsLengths(featuresDict)
print(vectorsLengths)
versionRawImpute_overall = '6-0'
logVersions['featuresEngineering']['longVectors']['overall'] = versionRawImpute_overall
dump_LogVersions(logVersions)
with open(os.path.join(
cfg['outputFeaturesEngineering'],
"longVectors_lengths_v{}.pkl".format(versionRawImpute_overall)
), 'wb') as f:
pickle.dump(vectorsLengths, f)
###Output
_____no_output_____
###Markdown
Format long vectors
###Code
def formatRawData(featuresDict, uniprotIDs, vectorsLengths):
out = dict()
out['uniprotID'] = uniprotIDs.uniprotID.to_list()
for feature, details in featuresDict.items():
print(feature)
df = pd.read_pickle(details['path'])
print(' - initial dim:', df.shape)
print(' - merge with reference index list')
df = uniprotIDs.merge(
df,
on = 'uniprotID',
how='left',
validate='1:1'
)
df.set_index('uniprotID', inplace=True)
print(' - new dim:', df.shape)
assert details['imputeNA'] in ['0','mean','none']
if details['imputeNA'] == 'mean':
print(' - mean imputation')
meanValues = df.mean(axis = 0, skipna = True)
meanValues[np.isnan(meanValues)] = 0
df.fillna(meanValues, inplace=True)
# sanity check
assert df.isna().sum().sum() == 0
elif details['imputeNA'] == '0':
print(' - imputate with 0')
df.fillna(0, inplace=True)
# sanity check
assert df.isna().sum().sum() == 0
else:
print(' - no imputation: {:,} NAs'.format(df.isna().sum().sum()))
if details['normalise']:
print(' - normalise')
scal = sk.preprocessing.StandardScaler(copy = False)
df = scal.fit_transform(df)
elif feature == 'sequence':
df = df.sequence.to_list()
else:
df = df.values
# compare shape to vectorsLengths
if feature == 'sequence':
assert isinstance(df, list)
else:
assert df.shape[1] == vectorsLengths[feature]
out[feature] = df.copy()
return out
def sneakPeak2(featuresDict, n=5):
for feature, df in featuresDict.items():
print('## ',feature)
glance(df, n=n)
print()
###Output
_____no_output_____
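###Markdown
A minimal sketch of the two imputation modes used by `formatRawData`, on a hypothetical toy frame (the all-NaN column shows why the mean path falls back to 0):
###Code
toy = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': [np.nan, np.nan, np.nan]})
print(toy.fillna(0))                # imputeNA == '0'
col_means = toy.mean(axis = 0, skipna = True)
col_means[np.isnan(col_means)] = 0  # all-NaN columns fall back to 0, as in formatRawData
print(toy.fillna(col_means))        # imputeNA == 'mean'
###Output
_____no_output_____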
###Markdown
Without normalising binary features
###Code
for feature in featuresDict:
if featuresDict[feature]['isBinary']:
featuresDict[feature]['normalise'] = False
featuresDict
outDict = formatRawData(featuresDict=featuresDict, uniprotIDs=uniprotIDs, vectorsLengths=vectorsLengths)
sneakPeak2(outDict)
sneakPeak2(outDict, n=0)
###Output
## uniprotID
list: len 20386
## bioProcessUniprot
np.array: shape (20386, 12248)
## cellCompUniprot
np.array: shape (20386, 1754)
## molFuncUniprot
np.array: shape (20386, 4346)
## domainUniprot
np.array: shape (20386, 2313)
## motifUniprot
np.array: shape (20386, 819)
## Bgee
np.array: shape (20386, 1147)
## tissueCellHPA
np.array: shape (20386, 189)
## tissueHPA
np.array: shape (20386, 62)
## RNAseqHPA
np.array: shape (20386, 61)
## subcellularLocationHPA
np.array: shape (20386, 33)
## sequence
list: len 20386
###Markdown
---**Export**- v6.1 09/11/2021
###Code
versionRawLimitedImpute = '6-1'
# logVersions['featuresEngineering'] = dict()
# logVersions['featuresEngineering']['longVectors']=dict()
logVersions['featuresEngineering']['longVectors']['keepBinary'] = versionRawLimitedImpute
dump_LogVersions(logVersions)
with open(os.path.join(
cfg['outputFeaturesEngineering'],
"longVectors_keepBinary_v{}.pkl".format(versionRawLimitedImpute)
), 'wb') as f:
pickle.dump(outDict, f)
###Output
_____no_output_____
###Markdown
WITH normalising binary features
###Code
for feature in featuresDict:
if featuresDict[feature]['isBinary']:
featuresDict[feature]['normalise'] = True
featuresDict
outDict2 = formatRawData(featuresDict=featuresDict, uniprotIDs=uniprotIDs, vectorsLengths=vectorsLengths)
sneakPeak2(outDict2)
###Output
## uniprotID
list: len 20386
['A0A024RBG1', 'A0A075B6H7', 'A0A075B6H8', 'A0A075B6H9', 'A0A075B6I0']
## bioProcessUniprot
np.array: shape (20386, 12248)
[[-0.02323526 -0.01400898 -0.02215341 ... -0.01213184 -0.00700398
-0.00990536]
[-0.02323526 -0.01400898 -0.02215341 ... -0.01213184 -0.00700398
-0.00990536]
[-0.02323526 -0.01400898 -0.02215341 ... -0.01213184 -0.00700398
-0.00990536]
...
[-0.02323526 -0.01400898 -0.02215341 ... -0.01213184 -0.00700398
-0.00990536]
[-0.02323526 -0.01400898 -0.02215341 ... -0.01213184 -0.00700398
-0.00990536]
[-0.02323526 -0.01400898 -0.02215341 ... -0.01213184 -0.00700398
-0.00990536]]
## cellCompUniprot
np.array: shape (20386, 1754)
[[-0.01400898 -0.01715827 -0.01213184 ... -0.00700398 -0.01213184
-0.00700398]
[-0.01400898 -0.01715827 -0.01213184 ... -0.00700398 -0.01213184
-0.00700398]
[-0.01400898 -0.01715827 -0.01213184 ... -0.00700398 -0.01213184
-0.00700398]
...
[-0.01400898 -0.01715827 -0.01213184 ... -0.00700398 -0.01213184
-0.00700398]
[-0.01400898 -0.01715827 -0.01213184 ... -0.00700398 -0.01213184
-0.00700398]
[-0.01400898 -0.01715827 -0.01213184 ... -0.00700398 -0.01213184
-0.00700398]]
## molFuncUniprot
np.array: shape (20386, 4346)
[[-0.00990536 -0.00990536 -0.01853351 ... -0.00990536 -0.02101605
-0.01213184]
[-0.00990536 -0.00990536 -0.01853351 ... -0.00990536 -0.02101605
-0.01213184]
[-0.00990536 -0.00990536 -0.01853351 ... -0.00990536 -0.02101605
-0.01213184]
...
[-0.00990536 -0.00990536 -0.01853351 ... -0.00990536 -0.02101605
-0.01213184]
[-0.00990536 -0.00990536 -0.01853351 ... -0.00990536 -0.02101605
-0.01213184]
[-0.00990536 -0.00990536 -0.01853351 ... -0.00990536 -0.02101605
-0.01213184]]
## domainUniprot
np.array: shape (20386, 2313)
[[-0.00990536 -0.00990536 -0.00990536 ... -0.00990536 -0.02426903
-0.00990536]
[-0.00990536 -0.00990536 -0.00990536 ... -0.00990536 -0.02426903
-0.00990536]
[-0.00990536 -0.00990536 -0.00990536 ... -0.00990536 -0.02426903
-0.00990536]
...
[-0.00990536 -0.00990536 -0.00990536 ... -0.00990536 -0.02426903
-0.00990536]
[-0.00990536 -0.00990536 -0.00990536 ... -0.00990536 -0.02426903
-0.00990536]
[-0.00990536 -0.00990536 -0.00990536 ... -0.00990536 -0.02426903
-0.00990536]]
## motifUniprot
np.array: shape (20386, 819)
[[-0.00700398 -0.00700398 -0.00700398 ... -0.00700398 -0.00990536
-0.0156629 ]
[-0.00700398 -0.00700398 -0.00700398 ... -0.00700398 -0.00990536
-0.0156629 ]
[-0.00700398 -0.00700398 -0.00700398 ... -0.00700398 -0.00990536
-0.0156629 ]
...
[-0.00700398 -0.00700398 -0.00700398 ... -0.00700398 -0.00990536
-0.0156629 ]
[-0.00700398 -0.00700398 -0.00700398 ... -0.00700398 -0.00990536
-0.0156629 ]
[-0.00700398 -0.00700398 -0.00700398 ... -0.00700398 -0.00990536
-0.0156629 ]]
## Bgee
np.array: shape (20386, 1147)
[[-0.10658714 0.68210993 0.64575424 ... 0.54472649 0.44463934
1.09760362]
[-0.10658714 -1.49017079 -1.67791238 ... -1.82328592 0.44463934
-0.57266454]
[-0.10658714 -1.49017079 -1.67791238 ... -1.82328592 0.44463934
-0.57266454]
...
[-0.10658714 -0.76607722 -0.90335684 ... -1.82328592 -2.20113346
-0.57266454]
[-0.10658714 -0.76607722 -0.90335684 ... -1.82328592 -2.20113346
-0.57266454]
[-0.10658714 -1.49017079 -1.67791238 ... -1.82328592 -0.87824706
-0.57266454]]
## tissueCellHPA
np.array: shape (20386, 189)
[[ 2.0749595 -0.01084193 -0.01467305 ... 0.48441489 1.16131334
0.62403542]
[ 0.0226513 -0.01084193 -0.01467305 ... -0.36092683 -0.44069869
-0.25659493]
[ 0.0226513 -0.01084193 -0.01467305 ... -0.36092683 -0.44069869
-0.25659493]
...
[ 0.0226513 -0.01084193 -0.01467305 ... -0.36092683 -0.44069869
-0.25659493]
[ 0.0226513 -0.01084193 -0.01467305 ... -0.36092683 -0.44069869
-0.25659493]
[ 0.0226513 -0.01084193 -0.01467305 ... -0.36092683 -0.44069869
-0.25659493]]
## tissueHPA
np.array: shape (20386, 62)
[[ 2.0749595 1.12532653 1.08594146 ... 1.07799286 1.16131334
0.62403542]
[ 0.0226513 -0.45370136 -0.49342663 ... -0.49604735 -0.44069869
-0.25659493]
[ 0.0226513 -0.45370136 -0.49342663 ... -0.49604735 -0.44069869
-0.25659493]
...
[ 0.0226513 -0.45370136 -0.49342663 ... -0.49604735 -0.44069869
-0.25659493]
[ 0.0226513 -0.45370136 -0.49342663 ... -0.49604735 -0.44069869
-0.25659493]
[ 0.0226513 -0.45370136 -0.49342663 ... -0.49604735 -0.44069869
-0.25659493]]
## RNAseqHPA
np.array: shape (20386, 61)
[[-4.02508290e-01 -4.78013516e-01 -4.20137262e-01 ... -2.76648740e-01
-3.15290044e-01 -6.00891684e-01]
[ 5.32096259e-15 3.25942180e-15 3.51673001e-15 ... 5.21505197e-15
8.51075141e-15 6.15586650e-15]
[ 5.32096259e-15 3.25942180e-15 3.51673001e-15 ... 5.21505197e-15
8.51075141e-15 6.15586650e-15]
...
[-4.02508290e-01 -4.92992221e-01 -3.78700781e-01 ... -2.86889949e-01
-6.34698518e-01 -5.30499838e-01]
[ 5.32096259e-15 3.25942180e-15 3.51673001e-15 ... 5.21505197e-15
8.51075141e-15 6.15586650e-15]
[ 5.32096259e-15 3.25942180e-15 3.51673001e-15 ... 5.21505197e-15
8.51075141e-15 6.15586650e-15]]
## subcellularLocationHPA
np.array: shape (20386, 33)
[[-0.01400898 -0.12096565 -0.6468028 ... -0.06508804 -0.03641696
-0.01715827]
[-0.01400898 -0.12096565 -0.6468028 ... -0.06508804 -0.03641696
-0.01715827]
[-0.01400898 -0.12096565 -0.6468028 ... -0.06508804 -0.03641696
-0.01715827]
...
[-0.01400898 -0.12096565 -0.6468028 ... -0.06508804 -0.03641696
-0.01715827]
[-0.01400898 -0.12096565 -0.6468028 ... -0.06508804 -0.03641696
-0.01715827]
[-0.01400898 -0.12096565 -0.6468028 ... -0.06508804 -0.03641696
-0.01715827]]
## sequence
list: len 20386
['MMKFKPNQTRTYDREGFKKRAACLCFRSEQEDEVLLVSSSRYPDQWIVPGGGMEPEEEPGGAAVREVYEEAGVKGKLGRLLGIFEQNQDRKHRTYVYVLTVTEILEDWEDSVNIGRKREWFKVEDAIKVLQCHKPVHAEYLEKLKLGCSPANGNSTVPSLPDNNALFVTAAQTSGLPSSVR', 'MEAPAQLLFLLLLWLPDTTREIVMTQSPPTLSLSPGERVTLSCRASQSVSSSYLTWYQQKPGQAPRLLIYGASTRATSIPARFSGSGSGTDFTLTISSLQPEDFAVYYCQQDYNLP', 'MDMRVPAQLLGLLLLWLPGVRFDIQMTQSPSFLSASVGDRVSIICWASEGISSNLAWYLQKPGKSPKLFLYDAKDLHPGVSSRFSGRGSGTDFTLTIISLKPEDFAAYYCKQDFSYP', 'MAWTPLLFLTLLLHCTGSLSQLVLTQSPSASASLGASVKLTCTLSSGHSSYAIAWHQQQPEKGPRYLMKLNSDGSHSKGDGIPDRFSGSSSGAERYLTISSLQSEDEADYYCQTWGTGI', 'MSVPTMAWMMLLLGLLAYGSGVDSQTVVTQEPSFSVSPGGTVTLTCGLSSGSVSTSYYPSWYQQTPGQAPRTLIYSTNTRSSGVPDRFSGSILGNKAALTITGAQADDESDYYCVLYMGSGI']
###Markdown
---**Export**- v6.1 09/11/2021
###Code
versionRawImputeAll = '6-1'
logVersions['featuresEngineering']['longVectors']['imputeAll'] = versionRawImputeAll
dump_LogVersions(logVersions)
with open(os.path.join(
cfg['outputFeaturesEngineering'],
"longVectors_imputeAll_v{}.pkl".format(versionRawImputeAll)
), 'wb') as f:
    pickle.dump(outDict2, f)
###Output
_____no_output_____ |
colab/workshop_01.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/kecbigmt/rollin-tech/blob/master/colab/workshop_01.ipynb)
###Code
###Output
_____no_output_____ |
Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/Exercise2-Question.ipynb | ###Markdown
Exercise 2In the course you learned how to do classificaiton using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.Some notes:1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"3. If you add any additional variables, make sure you use the same names as the ones used in the classI've started the code for you below -- how would you finish it?
###Code
import tensorflow as tf
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
# GRADED FUNCTION: train_mnist
def train_mnist():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
# YOUR CODE SHOULD START HERE
class myCallBack(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.99):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
# YOUR CODE SHOULD END HERE
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)
# YOUR CODE SHOULD START HERE
x_train = x_train / 255.0
x_test = x_test / 255.0
# YOUR CODE SHOULD END HERE
callbacks = myCallBack()
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
tf.keras.layers.Flatten(input_shape=(28,28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# model fitting
history = model.fit(
# YOUR CODE SHOULD START HERE
x_train, y_train, epochs=10, callbacks=[callbacks]
# YOUR CODE SHOULD END HERE
)
# model fitting
return history.epoch, history.history['acc'][-1]
train_mnist()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____
###Markdown
Exercise 2In the course you learned how to do classificaiton using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.Some notes:1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"3. If you add any additional variables, make sure you use the same names as the ones used in the classI've started the code for you below -- how would you finish it?
###Code
import tensorflow as tf
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
# GRADED FUNCTION: train_mnist
def train_mnist():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
# YOUR CODE SHOULD START HERE
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
            if(logs.get('acc')>0.99):
                print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
# YOUR CODE SHOULD END HERE
mnist = tf.keras.datasets.mnist
    (x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)
# YOUR CODE SHOULD START HERE
x_train=x_train/255.0
x_test=x_test/255.0
# YOUR CODE SHOULD END HERE
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# model fitting
history = model.fit(# YOUR CODE SHOULD START HERE
x_train, y_train, epochs=10, callbacks=[callbacks]
# YOUR CODE SHOULD END HERE
)
# model fitting
return history.epoch, history.history['acc'][-1]
train_mnist()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____
###Markdown
Exercise 2

In the course you learned how to do classification using Fashion MNIST, a dataset containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.

Write an MNIST classifier that trains to 99% accuracy or above, and does it without a fixed number of epochs -- i.e. you should stop training once you reach that level of accuracy.

Some notes:
1. It should succeed in less than 10 epochs, so it is okay to change epochs= to 10, but nothing larger
2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"
3. If you add any additional variables, make sure you use the same names as the ones used in the class

I've started the code for you below -- how would you finish it?
###Code
import tensorflow as tf
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
# GRADED FUNCTION: train_mnist
def train_mnist():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
# YOUR CODE SHOULD START HERE
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.99):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
# YOUR CODE SHOULD END HERE
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data(path=path)
# YOUR CODE SHOULD START HERE
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = myCallback()
# YOUR CODE SHOULD END HERE
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512 , activation = tf.nn.relu),
tf.keras.layers.Dense(10 , activation = tf.nn.softmax)
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# model fitting
history = model.fit( x_train , y_train , epochs=10 , callbacks=[callbacks]
)
# model fitting
return history.epoch, history.history['acc'][-1]
train_mnist()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
DigitReconnizer.ipynb | ###Markdown
Getting dataset information
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
x_train = pd.read_csv("data/train.csv")
x_test = pd.read_csv("data/test.csv")
x_test.head()
y_train = x_train["label"].values
y_train.shape
y_train[:10]
x_train = x_train.drop("label", axis=1).values
x_train.shape
x_test = x_test.values
x_test.shape
x_test = x_test.reshape(x_test.shape[0], 28,28)
x_train = x_train.reshape(x_train.shape[0], 28,28)
print(x_train.shape, x_test.shape)
def draw_mnist(data, nrow = 3, ncol = 3, title=None):
f, ax = plt.subplots(nrows=nrow, ncols=ncol, sharex=True, sharey=True)
for i in range(nrow):
for j in range(ncol):
ax[i, j].imshow(data[i * ncol + j], cmap=plt.cm.binary)
if title is None:
ax[i, j].set_title(i * ncol + j)
else:
ax[i, j].set_title(title[i * ncol + j])
plt.show()
draw_mnist(x_train, 3, 5, y_train)
draw_mnist(x_test, 3, 5)
###Output
_____no_output_____
###Markdown
Build model
###Code
import tensorflow as tf
def conv_layer(input, w, b, s = [1,1,1,1], p = 'SAME'):
conv = tf.nn.conv2d(input, w, s, p)
conv = tf.nn.bias_add(conv, b)
return tf.nn.relu(conv)
def pool_layer(input, size=2, s=[1, 1, 1, 1], p='SAME', ptype='max'):
pool = tf.nn.max_pool(input, ksize=[1, size, size, 1], strides=s, padding=p)
return pool
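# Note (assumption about intent): with the default strides s=[1, 1, 1, 1] the
# pooling above does not reduce the spatial dimensions; a typical 2x2 max-pool
# would use s=[1, 2, 2, 1]. The `ptype` argument is currently unused, so the
# layer is always a max-pool regardless of the value passed.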
def fc_layer(input, w, b, relu=False, drop=False, drop_prob=0.5):
fc = tf.add(tf.matmul(input, w), b)
if relu:
fc = tf.nn.relu(fc)
if drop:
fc = tf.nn.dropout(fc, drop_prob)
return fc
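# Note: in TF 1.x the second positional argument of tf.nn.dropout is
# `keep_prob`, so `drop_prob` above is actually the fraction of units KEPT,
# not dropped. (With the default drop=False the dropout branch is never
# taken in this notebook anyway.)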
def build_model_short(input):
# conv - relu - pool 1
w_conv11 = tf.Variable(tf.truncated_normal([5, 5, 1, 32]))
b_conv11 = tf.Variable(tf.zeros([32]))
conv1 = conv_layer(input, w_conv11, b_conv11)
pool1 = pool_layer(conv1)
# conv - relu - pool 2
w_conv12 = tf.Variable(tf.truncated_normal([3, 3, 32, 64], stddev=0.1))
b_conv12 = tf.Variable(tf.zeros([64]))
conv2 = conv_layer(pool1, w_conv12, b_conv12)
pool2 = pool_layer(conv2)
# flat
conv_size = pool2.get_shape().as_list()
flat_shape = conv_size[1] * conv_size[2] * conv_size[3]
flat = tf.reshape(pool2, [conv_size[0], flat_shape])
# fc1 size 100
fc1_size = 100
w_fc1 = tf.Variable(tf.truncated_normal([flat_shape, fc1_size], stddev=0.1))
b_fc1 = tf.Variable(tf.truncated_normal([fc1_size], stddev=0.1))
fc1 = fc_layer(flat, w_fc1, b_fc1, relu=True, drop_prob=0.4)
# fc2 size 10
fc2_size = 10
w_fc2 = tf.Variable(tf.truncated_normal([fc1_size, fc2_size], stddev=0.1))
b_fc2 = tf.Variable(tf.truncated_normal([fc2_size], stddev=0.1))
fc2 = fc_layer(fc1, w_fc2, b_fc2)
return fc2
lr = 0.0001
train_batch_size, eval_batch_size = 1000, 1000
num_classes = 10
input_w, input_h, channels = 28, 28, 1
train_input_shape = (train_batch_size, input_w, input_h, channels)
train_input = tf.placeholder(tf.float32, shape=train_input_shape, name='train_input')
train_target = tf.placeholder(tf.int32, shape=(train_batch_size, num_classes), name='train_target')
# eval_input_shape = (eval_batch_size, input_w, input_h, channels)
# eval_input = tf.placeholder(tf.float32, shape=eval_input_shape)
# eval_target = tf.placeholder(tf.int32, shape=(eval_batch_size, num_classes))
# gpu0
model_output = build_model_short(train_input)
# gpu1
# eval_model_output = build_model_short(eval_input)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model_output, labels=train_target))
# eval_cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=eval_model_output, labels=eval_target))
optimizer = tf.train.AdamOptimizer(learning_rate=lr).minimize(cross_entropy)
init = tf.global_variables_initializer()
# data preparation
EVAL_SIZE = 1000
one_hot_labels = np.array([np.array([int(i==number) for i in range(10)]) for number in y_train])
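# Note: the nested comprehension above is equivalent to the more idiomatic
# one_hot_labels = np.eye(10, dtype=np.int64)[y_train]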
eval_data = np.expand_dims(x_train[-EVAL_SIZE:], -1)/255.0
eval_labels = one_hot_labels[-EVAL_SIZE:]
input_data = np.expand_dims(x_train[:-EVAL_SIZE], -1)/255.0
input_labels = one_hot_labels[:-EVAL_SIZE]
print('train: ', input_data.shape, input_labels.shape)
print('eval: ', eval_data.shape, eval_labels.shape)
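# (Added sketch, not in the original notebook) Define an accuracy op once,
# outside the training loop, so validation accuracy can be checked alongside
# the loss. It reuses the `train_input`/`train_target` placeholders and the
# `model_output` tensor defined above.
correct_prediction = tf.equal(tf.argmax(model_output, 1), tf.argmax(train_target, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Usage inside the session, e.g. once per epoch:
# acc_eval = sess.run(accuracy_op, feed_dict={train_input: eval_data, train_target: eval_labels})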
epochs = 30
sess = tf.Session()
sess.run(init)
for epoch in range(epochs):
start_batch = 0
end_batch = train_batch_size
while end_batch <= input_data.shape[0]:
        _, cost_train = sess.run([optimizer, cross_entropy],
feed_dict={train_input: input_data[start_batch:end_batch],
train_target: input_labels[start_batch:end_batch]})
start_batch += train_batch_size
end_batch += train_batch_size
cost_eval = sess.run(cross_entropy,
feed_dict={train_input: eval_data,
train_target: eval_labels})
print('epoch: %d, train loss: %f, val loss: %f' % (epoch, cost_train, cost_eval))
test_data = np.expand_dims(x_test, -1)/255.0  # normalize to [0, 1], matching the training data
print(test_data.shape)
answer = np.array([], dtype=np.int32)
start_batch = 0
end_batch = eval_batch_size
while end_batch <= test_data.shape[0]:
pred = sess.run(tf.nn.softmax(model_output), feed_dict={train_input: test_data[start_batch:end_batch]})
answer = np.hstack((answer, np.argmax(pred, axis=1, )))
start_batch += train_batch_size
end_batch += train_batch_size
sess.close()
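# Note: wrapping the training and inference code in `with tf.Session() as sess:`
# would close the session automatically, which is the more idiomatic TF 1.x
# pattern than an explicit sess.close().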
answer.shape
answer
sub_sample = pd.read_csv('data/sample_submission.csv')
sub_sample.head()
submission = pd.DataFrame({'ImageId': range(1, answer.shape[0]+1), 'Label': answer })
# submission['Label'] = answer
submission.to_csv("sub_18_09_18_1.csv", index=False, encoding='utf-8')
###Output
_____no_output_____ |
Machine-Learning-in-90-days-master/Section 1- Python Crash Course/.ipynb_checkpoints/4.5-Matplotlib-checkpoint.ipynb | ###Markdown
Matplotlib Tutorial

Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+.

Some of the major pros of Matplotlib are:
* Generally easy to get started for simple plots
* Support for custom labels and texts
* Great control of every element in a figure
* High-quality output in many formats
* Very customizable in general
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
## Simple Examples
x=np.arange(0,10)
y=np.arange(11,21)
a=np.arange(40,50)
b=np.arange(50,60)
##plotting using matplotlib
##plt scatter
plt.scatter(x,y,c='g')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.title('Graph in 2D')
plt.savefig('Test.png')
y=x*x
## plt plot
plt.plot(x,y,'r*',linestyle='dashed',linewidth=2, markersize=12)
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.title('2d Diagram')
## Creating Subplots
plt.subplot(2,2,1)
plt.plot(x,y,'r--')
plt.subplot(2,2,2)
plt.plot(x,y,'g*--')
plt.subplot(2,2,3)
plt.plot(x,y,'bo')
plt.subplot(2,2,4)
plt.plot(x,y,'go')
x = np.arange(1,11)
y = 3 * x + 5
plt.title("Matplotlib demo")
plt.xlabel("x axis caption")
plt.ylabel("y axis caption")
plt.plot(x,y)
plt.show()
np.pi
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 4 * np.pi, 0.1)
y = np.sin(x)
plt.title("sine wave form")
# Plot the points using matplotlib
plt.plot(x, y)
plt.show()
#Subplot()
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 5 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin,'r--')
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos,'g--')
plt.title('Cosine')
# Show the figure.
plt.show()
## Bar plot
x = [2,8,10]
y = [11,16,9]
x2 = [3,9,11]
y2 = [6,15,7]
plt.bar(x, y)
plt.bar(x2, y2, color = 'g')
plt.title('Bar graph')
plt.ylabel('Y axis')
plt.xlabel('X axis')
plt.show()
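# (Added sketch) the two bar series above can also be labelled and
# distinguished with a legend; `label=` and plt.legend() are standard
# matplotlib features:
plt.bar(x, y, label='first series')
plt.bar(x2, y2, color='g', label='second series')
plt.legend()
plt.title('Bar graph with legend')
plt.show()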
###Output
_____no_output_____
###Markdown
Histograms
###Code
a = np.array([22,87,5,43,56,73,55,54,11,20,51,5,79,31,27])
plt.hist(a)
plt.title("histogram")
plt.show()
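# (Added sketch) the number of bins can be controlled explicitly:
plt.hist(a, bins=5, color='g')
plt.title("histogram with 5 bins")
plt.show()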
###Output
_____no_output_____
###Markdown
Box Plot using Matplotlib
###Code
data = [np.random.normal(0, std, 100) for std in range(1, 4)]
# rectangular box plot
plt.boxplot(data,vert=True,patch_artist=False);
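# (Added sketch) `labels` names each box, and patch_artist=True lets the
# boxes be filled with color:
plt.boxplot(data, vert=True, patch_artist=True, labels=['std 1', 'std 2', 'std 3'])
plt.show()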
data
###Output
_____no_output_____
###Markdown
Pie Chart
###Code
# Data to plot
labels = 'Python', 'C++', 'Ruby', 'Java'
sizes = [215, 130, 245, 210]
colors = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue']
explode = (0.4, 0, 0, 0) # explode 1st slice
# Plot
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=False)
plt.axis('equal')
plt.show()
###Output
_____no_output_____ |
notebooks/202-vision-superresolution/202-vision-superresolution-video.ipynb | ###Markdown
Video Super Resolution with OpenVINO

Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below.

Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub.

Preparation

Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
    :param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
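    # Note: the clipping and casting above are equivalent to
    # result = np.clip(result, 0, 255).astype(np.uint8)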
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`
###Code
ie = IECore()
net = ie.read_network(model=model_xml_path)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` of the N,C,H,W shape
# returns the height and width. Note that OpenCV's resize function expects
# the size as (width, height), so the order is swapped at the cv2.resize
# call sites below.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on Video

Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this.

**Note:**
- The resulting video does not contain audio.
- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model.

Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(name=str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(name=VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced video upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video shows the bicubic video and the superresolution video side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference

Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file.

The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1920,1080). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as total time to process each frame, which includes inference time as well as the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Super Resolution with OpenVINO

Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below.

Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub.

Preparation

Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
    :param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(text, font, font_scale, font_thickness)
result_im = cv2.rectangle(image, org, (x + text_w, y + text_h), text_color_bg, -1)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`
###Code
ie = IECore()
net = ie.read_network(str(model_xml_path), str(model_xml_path.with_suffix(".bin")))
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` of the N,C,H,W shape
# returns the height and width. Note that OpenCV's resize function expects
# the size as (width, height), so the order is swapped at the cv2.resize
# call sites below.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on Video

Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this.

**Note:**
- The resulting video does not contain audio.
- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model.

Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result video's
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced video upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video shows the bicubic video and the superresolution video side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference

Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file.

The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1920,1080). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as total time to process each frame, which includes inference time as well as the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show side-by-side video of bicubic and superresolution version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINO

Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://docs.openvino.ai/latest/omz_models_model_single_image_super_resolution_1032.html) which is available from the Open Model Zoo. It is based on the research paper cited below.

Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video.

Preparation

Imports
###Code
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.runtime import Core
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = Path(MODEL_FILE).name
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model

Load the model in Inference Engine with `ie.read_model` and compile it for the specified device with `ie.compile_model`.
###Code
ie = Core()
model = ie.read_model(model=model_xml_path)
compiled_model = ie.compile_model(model=model, device_name=DEVICE)
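# Note (assumption): "CPU" is the portable default here; on systems with the
# corresponding plugins installed, device_name can also be set to e.g. "GPU",
# or "AUTO" to let OpenVINO pick a device automatically.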
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# `compiled_model.inputs` and `compiled_model.outputs` are lists of input
# and output nodes; unpack them in order.
original_image_key, bicubic_image_key = compiled_model.inputs
output_key = next(iter(compiled_model.outputs))
# Get the expected input and target shape. Slicing the N,C,H,W shape with
# `[2:]` returns the height and width. Note that OpenCV's resize function
# expects the size as (width, height), so the order is swapped at the
# cv2.resize call sites below.
input_height, input_width = list(original_image_key.shape)[2:]
target_height, target_width = list(bicubic_image_key.shape)[2:]
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on Video

Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this.

**Note:**
- The resulting video does not contain audio.
- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model.

Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
Path(OUTPUT_DIR).mkdir(exist_ok=True)
# Maximum number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos. vp09 is slow, but widely available.
# If you have FFMPEG installed, you can change FOURCC to `*"THEO"` to improve video writing speed.
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
Path(VIDEO_DIR).mkdir(exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
if NUM_FRAMES == 0:
total_frames = frame_count
else:
total_frames = min(frame_count, NUM_FRAMES)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced video upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video shows the bicubic video and the superresolution video side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference

Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file.

The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of `(1,3,1920,1080)`. This array is converted to an 8-bit image with shape `(1080,1920,3)` and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as total time to process each frame, which includes inference time as well as the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if frame_nr >= total_frames:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
request = compiled_model.create_infer_request()
request.infer(
inputs={
original_image_key.any_name: input_image_original,
bicubic_image_key.any_name: input_image_bicubic,
}
)
result = request.get_output_tensor(output_key.index).data
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0 or frame_nr == total_frames:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINO

Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://docs.openvino.ai/latest/omz_models_model_single_image_super_resolution_1032.html) which is available from the Open Model Zoo. It is based on the research paper cited below.

Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.

**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub.

Preparation

Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`
###Code
ie = IECore()
net = ie.read_network(model=model_xml_path)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` of the N,C,H,W shape
# returns the height and width. Note that OpenCV's resize function expects
# the size as (width, height), so the order is swapped at the cv2.resize
# call sites below.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(name=str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos.
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
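# A commonly available alternative if VP9 writing is slow on your system
# (sketch; "mp4v" is a standard OpenCV FOURCC for MPEG-4):
# FOURCC = cv2.VideoWriter_fourcc(*"mp4v")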
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the OUTPUT_DIR directory.
# You can also place a local video in the VIDEO_DIR directory; it is used as
# a fallback if the download fails.
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(name=VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and the comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video places the bicubic and superresolution videos side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as the total time to process each frame, which includes both inference and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
# Reopen the video before querying the frame count: `cap` was released above,
# and a released capture returns 0 for CAP_PROP_FRAME_COUNT.
cap = cv2.VideoCapture(filename=str(video_path))
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Super Resolution with OpenVINOSuper Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import socket
import time
import urllib.error
import urllib.request
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "models/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text, font, font_scale, font_thickness
)
result_im = cv2.rectangle(
image, org, (x + text_w, y + text_h), text_color_bg, -1
)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(
path, headers={"User-Agent": "Mozilla/5.0"}
)
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
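# Shape sanity check (illustrative only): an N,C,H,W float result becomes an
# H,W,C uint8 image.
# dummy = np.zeros((1, 3, 4, 6), dtype=np.float32)
# assert convert_result_to_image(dummy).shape == (4, 6, 3)
# assert convert_result_to_image(dummy).dtype == np.uint8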
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it onto the specified device with `ie.load_network`.
###Code
ie = IECore()
net = ie.read_network(
str(model_xml_path), str(model_xml_path.with_suffix(".bin"))
)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width, which we unpack into separate height and width variables.
# OpenCV's resize function expects the size as (width, height).
input_height, input_width = tuple(
exec_net.input_info[original_image_key].tensor_desc.dims[2:]
)
target_height, target_width = tuple(
exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:]
)
upsample_factor = int(target_height / input_height)
print(
f"The network expects inputs with a width of {input_width}, "
f"height of {input_height}"
)
print(
f"The network returns images with a width of {target_width}, "
f"height of {target_height}"
)
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "videos"
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos.
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(
stream.default_filename.encode("ascii", "ignore").decode("ascii")
).stem
stream.download(VIDEO_DIR, filename=filename)
print(f"Video {filename} downloaded to {VIDEO_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, VIDEO_DIR))
except (socket.timeout, TimeoutError, urllib.error.HTTPError):
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = video_path.with_name(f"{video_path.stem}_superres.mp4")
bicubic_video_path = video_path.with_name(f"{video_path.stem}_bicubic.mp4")
comparison_video_path = video_path.with_name(
f"{video_path.stem}_superres_comparison.mp4"
)
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and the comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video places the bicubic and superresolution videos side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as the total time to process each frame, which includes both inference and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
# Reopen the video before querying the frame count: `cap` was released above,
# and a released capture returns 0 for CAP_PROP_FRAME_COUNT.
cap = cv2.VideoCapture(str(video_path))
total_frames = (
    cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
)
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(
resized_image.transpose(2, 0, 1), axis=0
)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(
bicubic_image.transpose(2, 0, 1), axis=0
)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show side-by-side video of bicubic and superresolution version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINOSuper Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://docs.openvino.ai/latest/omz_models_model_single_image_super_resolution_1032.html) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. Preparation Imports
###Code
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.runtime import Core
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = Path(MODEL_FILE).name
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
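# Illustrative only: values in [0, 1] map onto the full 8-bit range,
# e.g. 1.0 -> 255 (hypothetical input):
# ones = np.ones((1, 3, 2, 2), dtype=np.float32)
# assert convert_result_to_image(result=ones).max() == 255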
###Output
_____no_output_____
###Markdown
Load the Superresolution ModelLoad the model with OpenVINO Runtime's `ie.read_model` and compile it for the specified device with `ie.compile_model`.
###Code
ie = Core()
model = ie.read_model(model=model_xml_path)
compiled_model = ie.compile_model(model=model, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# The compiled model exposes its inputs as a list of ports and its output
# through `output()`. Unpack the two input ports and get the output port.
original_image_key, bicubic_image_key = compiled_model.inputs
output_key = compiled_model.output(0)
# Get the expected input and target shape. `.shape[2:]` returns the height
# and width, which we unpack into separate height and width variables.
# OpenCV's resize function expects the size as (width, height).
input_height, input_width = list(original_image_key.shape)[2:]
target_height, target_width = list(bicubic_image_key.shape)[2:]
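# For reference, each entry in `compiled_model.inputs` is a model port; a
# short sketch of how to inspect all ports (not needed for the steps below):
# for port in compiled_model.inputs:
#     print(port.any_name, list(port.shape))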
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
Path(OUTPUT_DIR).mkdir(exist_ok=True)
# Maximum number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos. vp09 is slow, but widely available.
# If you have FFMPEG installed, you can change FOURCC to `*"THEO"` to improve video writing speed.
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the OUTPUT_DIR directory.
# You can also place a local video in the VIDEO_DIR directory; it is used as
# a fallback if the download fails.
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
Path(VIDEO_DIR).mkdir(exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
if NUM_FRAMES == 0:
total_frames = frame_count
else:
total_frames = min(frame_count, NUM_FRAMES)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and the comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video places the bicubic and superresolution videos side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of `(1,3,1080,1920)`. This array is converted to an 8-bit image with shape `(1080,1920,3)` and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as the total time to process each frame, which includes both inference and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if frame_nr >= total_frames:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = compiled_model(
inputs={
original_image_key.any_name: input_image_original,
bicubic_image_key.any_name: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0 or frame_nr == total_frames:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Super Resolution with OpenVINOSuper Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "models/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text, font, font_scale, font_thickness
)
result_im = cv2.rectangle(
image, org, (x + text_w, y + text_h), text_color_bg, -1
)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
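# Hypothetical example: label a frame before writing it to the comparison
# video (sketch; `bicubic_image` is defined later, in the inference loop):
# labeled = write_text_on_image(bicubic_image, "Bicubic")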
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(
path, headers={"User-Agent": "Mozilla/5.0"}
)
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it onto the specified device with `ie.load_network`.
###Code
ie = IECore()
net = ie.read_network(
str(model_xml_path), str(model_xml_path.with_suffix(".bin"))
)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
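# Illustrative sketch: the full N,C,H,W dims of each input can be inspected
# through `tensor_desc` (commented out, shown for reference only):
# for key in exec_net.input_info:
#     print(key, exec_net.input_info[key].tensor_desc.dims)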
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width, which we unpack into separate height and width variables.
# OpenCV's resize function expects the size as (width, height).
input_height, input_width = tuple(
exec_net.input_info[original_image_key].tensor_desc.dims[2:]
)
target_height, target_width = tuple(
exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:]
)
upsample_factor = int(target_height / input_height)
print(
f"The network expects inputs with a width of {input_width}, "
f"height of {input_height}"
)
print(
f"The network returns images with a width of {target_width}, "
f"height of {target_height}"
)
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "videos"
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos.
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(
stream.default_filename.encode("ascii", "ignore").decode("ascii")
).stem
stream.download(VIDEO_DIR, filename=filename)
print(f"Video {filename} downloaded to {VIDEO_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, VIDEO_DIR))
superres_video_path = video_path.with_name(f"{video_path.stem}_superres.mp4")
bicubic_video_path = video_path.with_name(f"{video_path.stem}_bicubic.mp4")
comparison_video_path = video_path.with_name(
f"{video_path.stem}_superres_comparison.mp4"
)
###Output
_____no_output_____
###Markdown
Open the video and get the dimensions and the FPS
###Code
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and the comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video places the bicubic and superresolution videos side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as the total time to process each frame, which includes both inference and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
# Reopen the video before querying the frame count: `cap` was released above,
# and a released capture returns 0 for CAP_PROP_FRAME_COUNT.
cap = cv2.VideoCapture(str(video_path))
total_frames = (
    cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
)
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(
resized_image.transpose(2, 0, 1), axis=0
)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(
bicubic_image.transpose(2, 0, 1), axis=0
)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show side-by-side video of bicubic and superresolution version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Super Resolution with OpenVINOSuper Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import socket
import time
import urllib.error
import urllib.request
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "models/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text, font, font_scale, font_thickness
)
result_im = cv2.rectangle(
image, org, (x + text_w, y + text_h), text_color_bg, -1
)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(
path, headers={"User-Agent": "Mozilla/5.0"}
)
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
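# Hypothetical usage examples for load_image (paths and URLs are
# placeholders, not files shipped with this notebook):
# local_frame = load_image("videos/frame.png")
# remote_frame = load_image("https://example.com/frame.png")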
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it onto the specified device with `ie.load_network`.
###Code
ie = IECore()
net = ie.read_network(
str(model_xml_path), str(model_xml_path.with_suffix(".bin"))
)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width, which we unpack into separate height and width variables.
# OpenCV's resize function expects the size as (width, height).
input_height, input_width = tuple(
exec_net.input_info[original_image_key].tensor_desc.dims[2:]
)
target_height, target_width = tuple(
exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:]
)
upsample_factor = int(target_height / input_height)
print(
f"The network expects inputs with a width of {input_width}, "
f"height of {input_height}"
)
print(
f"The network returns images with a width of {target_width}, "
f"height of {target_height}"
)
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "videos"
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos.
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(
stream.default_filename.encode("ascii", "ignore").decode("ascii")
).stem
stream.download(VIDEO_DIR, filename=filename)
print(f"Video {filename} downloaded to {VIDEO_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, VIDEO_DIR))
except (socket.timeout, TimeoutError, urllib.error.HTTPError):
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = video_path.with_name(f"{video_path.stem}_superres.mp4")
bicubic_video_path = video_path.with_name(f"{video_path.stem}_bicubic.mp4")
comparison_video_path = video_path.with_name(
f"{video_path.stem}_superres_comparison.mp4"
)
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and the comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; the comparison video places the bicubic and superresolution videos side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as the total time to process each frame, which includes both inference and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
# Reopen the video before querying the frame count: `cap` was released above,
# and a released capture returns 0 for CAP_PROP_FRAME_COUNT.
cap = cv2.VideoCapture(str(video_path))
total_frames = (
    cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
)
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(
resized_image.transpose(2, 0, 1), axis=0
)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(
bicubic_image.transpose(2, 0, 1), axis=0
)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
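###Markdown
As an optional follow-up check (a small sketch, assuming the writers above were released), the generated result files can be listed with their sizes:
###Code
# Optional: show the generated result files and their sizes in MB
for path in [superres_video_path, bicubic_video_path, comparison_video_path]:
    if path.exists():
        print(path.name, f"{path.stat().st_size / 1e6:.1f} MB")
###Output
_____no_output_____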
###Markdown
Show side-by-side video of bicubic and superresolution version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINO. Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://docs.openvino.ai/latest/omz_models_model_single_image_super_resolution_1032.html) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. **NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. Preparation Imports
###Code
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.runtime import Core
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = Path(MODEL_FILE).name
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
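###Markdown
To see which values are valid for the `DEVICE` setting on the current machine, the Core object can list the available inference devices (an optional check, assuming the OpenVINO runtime is installed):
###Code
# Optional: list the devices OpenVINO can use, for example ['CPU', 'GPU']
from openvino.runtime import Core
print(Core().available_devices)
###Output
_____no_output_____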
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
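###Markdown
As a quick illustration of `convert_result_to_image` (a hypothetical standalone example, not part of the pipeline), a fake network result in N,C,H,W shape converts to an 8-bit H,W,C image:
###Code
# Hypothetical example: a random (1,3,4,4) float "result" becomes a (4,4,3) uint8 image
dummy_result = np.random.rand(1, 3, 4, 4).astype(np.float32)
dummy_image = convert_result_to_image(result=dummy_result)
print(dummy_image.shape, dummy_image.dtype)
###Output
_____no_output_____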
###Markdown
Load the Superresolution Model. Load the model in the Inference Engine with `ie.read_model` and compile it for the specified device with `ie.compile_model`.
###Code
ie = Core()
model = ie.read_model(model=model_xml_path)
compiled_model = ie.compile_model(model=model, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Get the network input and output nodes. In the OpenVINO runtime API,
# `compiled_model.inputs` is a list of nodes rather than a dictionary.
original_image_key, bicubic_image_key = compiled_model.inputs
output_key = compiled_model.output(0)
# Get the expected input and target shapes. The last two dimensions of each
# input shape are the height and width. OpenCV's resize function expects the
# size as (width, height), so the width and height are swapped when calling it.
input_height, input_width = list(original_image_key.shape)[2:]
target_height, target_width = list(bicubic_image_key.shape)[2:]
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
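###Markdown
For the 1032 model, the printed values should correspond to a 4x upsampling; the relation below is an expectation based on the model documentation, while the exact numbers come from the loaded model:
###Code
# Sanity check: the bicubic input should be upsample_factor times the original input
assert target_width == upsample_factor * input_width
assert target_height == upsample_factor * input_height
###Output
_____no_output_____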
###Markdown
Superresolution on Video. Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
Path(OUTPUT_DIR).mkdir(exist_ok=True)
# Maximum number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos. vp09 is slow, but widely available.
# If you have FFMPEG installed, you can change FOURCC to `*"THEO"` to improve video writing speed.
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
Path(VIDEO_DIR).mkdir(exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
if NUM_FRAMES == 0:
total_frames = frame_count
else:
total_frames = min(frame_count, NUM_FRAMES)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
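###Markdown
If no 360p stream is found, `yt.streams` can be inspected to pick another resolution (an optional diagnostic that assumes the `yt` object above was created successfully):
###Code
# Optional: list progressive mp4 streams so a resolution can be chosen manually
for stream_option in yt.streams.filter(progressive=True, file_extension="mp4"):
    print(stream_option.itag, stream_option.resolution, stream_option.fps)
###Output
_____no_output_____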
###Markdown
Create the superresolution video, the bicubic video and a comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; and the comparison video places the bicubic and superresolution versions side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference. Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of `(1,3,1920,1080)`. This array is converted to an 8-bit image with shape `(1080,1920,3)` and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress. Inference time is measured, as well as the total time to process each frame, which includes inference time and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if frame_nr >= total_frames:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = compiled_model(
{
original_image_key.any_name: input_image_original,
bicubic_image_key.any_name: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0 or frame_nr == total_frames:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINO. Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://docs.openvino.ai/latest/omz_models_model_single_image_super_resolution_1032.html) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. **NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. Preparation Imports
###Code
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = Path(MODEL_FILE).name
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
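###Markdown
With the Inference Engine API used in this version, the available devices can be listed on the IECore object (an optional check, assuming OpenVINO is installed):
###Code
# Optional: list devices known to the Inference Engine, for example ['CPU', 'GPU']
from openvino.inference_engine import IECore
print(IECore().available_devices)
###Output
_____no_output_____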
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
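###Markdown
`load_image` is a general helper that is not used in the video pipeline below; as a hypothetical example, a small image written to disk round-trips through it:
###Code
# Hypothetical usage: write a tiny test image and read it back with load_image
tmp_path = "load_image_demo.png"
cv2.imwrite(tmp_path, np.full((8, 8, 3), 128, dtype=np.uint8))
demo_image = load_image(path=tmp_path)
print(demo_image.shape, demo_image.dtype)  # (8, 8, 3) uint8
###Output
_____no_output_____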
###Markdown
Load the Superresolution Model. Load the model in the Inference Engine with `ie.read_network` and load it onto the specified device with `ie.load_network`.
###Code
ie = IECore()
net = ie.read_network(model=model_xml_path)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width. OpenCV's resize function expects the size as (width, height),
# so the width and height are swapped when calling resize.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on Video. Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
Path(OUTPUT_DIR).mkdir(exist_ok=True)
# Maximum number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos. vp09 is slow, but widely available.
# If you have FFMPEG installed, you can change FOURCC to `*"THEO"` to improve video writing speed.
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
Path(VIDEO_DIR).mkdir(exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
if NUM_FRAMES == 0:
total_frames = frame_count
else:
total_frames = min(frame_count, NUM_FRAMES)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and a comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; and the comparison video places the bicubic and superresolution versions side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference. Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1920,1080). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress. Inference time is measured, as well as the total time to process each frame, which includes inference time and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if frame_nr >= total_frames:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0 or frame_nr == total_frames:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINO. Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. **NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
    :param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(text, font, font_scale, font_thickness)
result_im = cv2.rectangle(image, org, (x + text_w, y + text_h), text_color_bg, -1)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
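###Markdown
`write_text_on_image` is likewise unused in the pipeline; as a hypothetical example, it can caption a frame before the frame is written to a video:
###Code
# Hypothetical example: draw a label on a blank 360p frame
blank_frame = np.zeros((360, 640, 3), dtype=np.uint8)
labeled_frame = write_text_on_image(image=blank_frame, text="BICUBIC")
print(labeled_frame.shape)  # (360, 640, 3)
###Output
_____no_output_____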
###Markdown
Load the Superresolution Model. Load the model in the Inference Engine with `ie.read_network` and load it onto the specified device with `ie.load_network`.
###Code
ie = IECore()
net = ie.read_network(str(model_xml_path), str(model_xml_path.with_suffix(".bin")))
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width. OpenCV's resize function expects the size as (width, height),
# so the width and height are swapped when calling resize.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on Video. Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos.
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and a comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; and the comparison video places the bicubic and superresolution versions side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference. Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1920,1080). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress. Inference time is measured, as well as the total time to process each frame, which includes inference time and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
# Reopen the video before querying the frame count; the earlier capture was released
cap = cv2.VideoCapture(str(video_path))
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Super Resolution with OpenVINO. Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. **NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
    :param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(text, font, font_scale, font_thickness)
result_im = cv2.rectangle(image, org, (x + text_w, y + text_h), text_color_bg, -1)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model. Load the model in the Inference Engine with `ie.read_network` and load it onto the specified device with `ie.load_network`.
###Code
ie = IECore()
net = ie.read_network(str(model_xml_path), str(model_xml_path.with_suffix(".bin")))
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width. OpenCV's resize function expects the size as (width, height),
# so the width and height are swapped when calling resize.
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on Video. Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The format for saving the result videos.
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the videos subdirectory.
# You can also place a local video there and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create the superresolution video, the bicubic video and a comparison video. The superresolution video contains the enhanced video, upsampled with superresolution; the bicubic video is the input video upsampled with bicubic interpolation; and the comparison video places the bicubic and superresolution versions side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do Inference. Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1920,1080). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress. Inference time is measured, as well as the total time to process each frame, which includes inference time and the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 0
total_inference_duration = 0
# Reopen the video before querying the frame count; the earlier capture was released
cap = cv2.VideoCapture(str(video_path))
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show side-by-side video of bicubic and superresolution version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINO. Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480×360) video. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. **NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import (
HTML,
FileLink,
Pretty,
ProgressBar,
Video,
clear_output,
display,
)
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(
text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness
)
result_im = cv2.rectangle(
img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1
)
textim = cv2.putText(
img=result_im,
text=text,
org=(x, y + text_h + font_scale - 1),
fontFace=font,
fontScale=font_scale,
color=font_color,
thickness=font_thickness,
lineType=line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(url=request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR
else:
image = cv2.imread(filename=path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`
###Code
ie = IECore()
net = ie.read_network(model=model_xml_path)
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width. OpenCV's resize function expects the size as (width, height),
# so the width and height are passed in reversed order when resizing
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(name=str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The FOURCC format for saving the result videos
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the OUTPUT_DIR directory.
# You can also place a local video in the VIDEO_DIR directory and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(name=VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(output_path=OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(filename=str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
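As an illustrative aside (not part of the original notebook), a quick sanity check could compare the frame size printed above with the network input size obtained earlier, since frames are resized before inference:
```python
# Hypothetical check using variables defined in the cells above
if (original_frame_width, original_frame_height) != (input_width, input_height):
    print(
        f"Note: frames are {original_frame_width}x{original_frame_height} and "
        f"will be resized to {input_width}x{input_height} before inference"
    )
```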
###Markdown
Create superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced video, upsampled with superresolution, the bicubic video is the input video upsampled with bicubic interpolation, the combination video sets the bicubic video and the superresolution side by side.
###Code
superres_video = cv2.VideoWriter(
filename=str(superres_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
filename=str(bicubic_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
filename=str(comparison_video_path),
fourcc=FOURCC,
fps=fps,
frameSize=(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array of floating point values with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress. Inference time is measured, as well as the total time to process each frame, which includes inference plus the time it takes to process and write the video.
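To see the shape bookkeeping in isolation, here is a tiny standalone sketch with dummy data (illustrative only; the notebook's `convert_result_to_image` above does the real conversion):
```python
import numpy as np

dummy_result = np.zeros((1, 3, 1080, 1920), dtype=np.float32)  # network output, (N,C,H,W)
frame = dummy_result.squeeze(0).transpose(1, 2, 0)             # image, (H,W,C)
print(frame.shape)  # (1080, 1920, 3)
```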
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(filename=str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(src=image, dsize=(input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result=result)
# Write resulting image and bicubic image to video
superres_video.write(image=result_frame)
bicubic_video.write(image=bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(image=stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show Side-by-Side Video of Bicubic and Superresolution Version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
video_link.html_link_str = "<a href='%s' download>%s</a>"
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____
###Markdown
Video Super Resolution with OpenVINOSuper Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480ร360) video in 360p resolution. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.**NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. Preparation Imports
###Code
import os
import time
import urllib
from pathlib import Path
import cv2
import numpy as np
from IPython.display import HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display
from openvino.inference_engine import IECore
from pytube import YouTube
###Output
_____no_output_____
###Markdown
Settings
###Code
# Device to use for inference. For example, "CPU", or "GPU"
DEVICE = "CPU"
# 1032: 4x superresolution, 1033: 3x superresolution
MODEL_FILE = "model/single-image-super-resolution-1032.xml"
model_name = os.path.basename(MODEL_FILE)
model_xml_path = Path(MODEL_FILE).with_suffix(".xml")
###Output
_____no_output_____
###Markdown
Functions
###Code
def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray:
"""
Write the specified text in the top left corner of the image
as white text with a black border.
:param image: image as numpy array with HWC shape, RGB or BGR
:param text: text to write
:return: image with written text, as numpy array
"""
font = cv2.FONT_HERSHEY_PLAIN
org = (20, 20)
font_scale = 4
font_color = (255, 255, 255)
line_type = 1
font_thickness = 2
text_color_bg = (0, 0, 0)
x, y = org
image = cv2.UMat(image)
(text_w, text_h), _ = cv2.getTextSize(text, font, font_scale, font_thickness)
result_im = cv2.rectangle(image, org, (x + text_w, y + text_h), text_color_bg, -1)
textim = cv2.putText(
result_im,
text,
(x, y + text_h + font_scale - 1),
font,
font_scale,
font_color,
font_thickness,
line_type,
)
return textim.get()
def load_image(path: str) -> np.ndarray:
"""
Loads an image from `path` and returns it as BGR numpy array.
:param path: path to an image filename or url
:return: image as numpy array, with BGR channel order
"""
if path.startswith("http"):
# Set User-Agent to Mozilla because some websites block requests
# with User-Agent Python
request = urllib.request.Request(path, headers={"User-Agent": "Mozilla/5.0"})
response = urllib.request.urlopen(request)
array = np.asarray(bytearray(response.read()), dtype="uint8")
image = cv2.imdecode(array, -1) # Loads the image as BGR
else:
image = cv2.imread(path)
return image
def convert_result_to_image(result) -> np.ndarray:
"""
Convert network result of floating point numbers to image with integer
values from 0-255. Values outside this range are clipped to 0 and 255.
:param result: a single superresolution network result in N,C,H,W shape
"""
result = result.squeeze(0).transpose(1, 2, 0)
result *= 255
result[result < 0] = 0
result[result > 255] = 255
result = result.astype(np.uint8)
return result
###Output
_____no_output_____
###Markdown
Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network`
###Code
ie = IECore()
net = ie.read_network(str(model_xml_path), str(model_xml_path.with_suffix(".bin")))
exec_net = ie.load_network(network=net, device_name=DEVICE)
###Output
_____no_output_____
###Markdown
Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1080.
###Code
# Network inputs and outputs are dictionaries. Get the keys for the
# dictionaries.
original_image_key = list(exec_net.input_info)[0]
bicubic_image_key = list(exec_net.input_info)[1]
output_key = list(exec_net.outputs.keys())[0]
# Get the expected input and target shape. `.dims[2:]` returns the height
# and width. OpenCV's resize function expects the size as (width, height),
# so the width and height are passed in reversed order when resizing
input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:])
target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:])
upsample_factor = int(target_height / input_height)
print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}")
print(f"The network returns images with a width of {target_width}, " f"height of {target_height}")
print(
f"The image sides are upsampled by a factor {upsample_factor}. "
f"The new image is {upsample_factor**2} times as large as the "
"original image"
)
###Output
_____no_output_____
###Markdown
Superresolution on VideoDownload a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:**- The resulting video does not contain audio.- The input video should be a landscape video and have an input resolution of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. Settings
###Code
VIDEO_DIR = "data"
OUTPUT_DIR = "output"
os.makedirs(str(OUTPUT_DIR), exist_ok=True)
# Number of frames to read from the input video. Set to 0 to read all frames.
NUM_FRAMES = 100
# The FOURCC format for saving the result videos
# vp09 is slow, but widely available. If you have FFMPEG installed, you can
# change the FOURCC to `*"THEO"` to improve video writing speed
FOURCC = cv2.VideoWriter_fourcc(*"vp09")
###Output
_____no_output_____
###Markdown
Download and Prepare Video
###Code
# Use pytube to download a video. It downloads to the OUTPUT_DIR directory.
# You can also place a local video in the VIDEO_DIR directory and comment out the following lines
VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA"
yt = YouTube(VIDEO_URL)
# Use `yt.streams` to see all available streams. See the PyTube documentation
# https://python-pytube.readthedocs.io/en/latest/api.html for advanced
# filtering options
try:
os.makedirs(VIDEO_DIR, exist_ok=True)
stream = yt.streams.filter(resolution="360p").first()
filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem
stream.download(OUTPUT_DIR, filename=filename)
print(f"Video {filename} downloaded to {OUTPUT_DIR}")
# Create Path objects for the input video and the resulting videos
video_path = Path(stream.get_file_path(filename, OUTPUT_DIR))
except Exception:
# If PyTube fails, use a local video stored in the VIDEO_DIR directory
video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4")
# Path names for the result videos
superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4")
bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4")
comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4")
# Open the video and get the dimensions and the FPS
cap = cv2.VideoCapture(str(video_path))
ret, image = cap.read()
if not ret:
raise ValueError(f"The video at '{video_path}' cannot be read.")
fps = cap.get(cv2.CAP_PROP_FPS)
original_frame_height, original_frame_width = image.shape[:2]
cap.release()
print(
f"The input video has a frame width of {original_frame_width}, "
f"frame height of {original_frame_height} and runs at {fps:.2f} fps"
)
###Output
_____no_output_____
###Markdown
Create superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced video, upsampled with superresolution, the bicubic video is the input video upsampled with bicubic interpolation, the combination video sets the bicubic video and the superresolution side by side.
###Code
superres_video = cv2.VideoWriter(
str(superres_video_path),
FOURCC,
fps,
(target_width, target_height),
)
bicubic_video = cv2.VideoWriter(
str(bicubic_video_path),
FOURCC,
fps,
(target_width, target_height),
)
comparison_video = cv2.VideoWriter(
str(comparison_video_path),
FOURCC,
fps,
(target_width * 2, target_height),
)
###Output
_____no_output_____
###Markdown
Do InferenceRead video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to the network input shape and upsampled with bicubic interpolation to the target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array of floating point values with a shape of (1,3,1080,1920). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress. Inference time is measured, as well as the total time to process each frame, which includes inference plus the time it takes to process and write the video.
###Code
start_time = time.perf_counter()
frame_nr = 1
total_inference_duration = 0
total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES
progress_bar = ProgressBar(total=total_frames)
progress_bar.display()
cap = cv2.VideoCapture(str(video_path))
try:
while cap.isOpened():
ret, image = cap.read()
if not ret:
cap.release()
break
if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES:
break
# Resize the input image to network shape and convert from (H,W,C) to
# (N,C,H,W)
resized_image = cv2.resize(image, (input_width, input_height))
input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0)
# Resize and reshape the image to the target shape with bicubic
# interpolation
bicubic_image = cv2.resize(
image, (target_width, target_height), interpolation=cv2.INTER_CUBIC
)
input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0)
# Do inference
inference_start_time = time.perf_counter()
result = exec_net.infer(
inputs={
original_image_key: input_image_original,
bicubic_image_key: input_image_bicubic,
}
)[output_key]
inference_stop_time = time.perf_counter()
inference_duration = inference_stop_time - inference_start_time
total_inference_duration += inference_duration
# Transform inference result into an image
result_frame = convert_result_to_image(result)
# Write resulting image and bicubic image to video
superres_video.write(result_frame)
bicubic_video.write(bicubic_image)
stacked_frame = np.hstack((bicubic_image, result_frame))
comparison_video.write(stacked_frame)
frame_nr = frame_nr + 1
# Update progress bar and status message
progress_bar.progress = frame_nr
progress_bar.update()
if frame_nr % 10 == 0:
clear_output(wait=True)
progress_bar.display()
display(
Pretty(
f"Processed frame {frame_nr}. Inference time: "
f"{inference_duration:.2f} seconds "
f"({1/inference_duration:.2f} FPS)"
)
)
except KeyboardInterrupt:
print("Processing interrupted.")
finally:
superres_video.release()
bicubic_video.release()
comparison_video.release()
end_time = time.perf_counter()
duration = end_time - start_time
print(f"Video's saved to {comparison_video_path.parent} directory.")
print(
f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS "
f"(including video processing): {frame_nr/duration:.2f}. "
f"Inference FPS: {frame_nr/total_inference_duration:.2f}."
)
###Output
_____no_output_____
###Markdown
Show side-by-side video of bicubic and superresolution version
###Code
if not comparison_video_path.exists():
raise ValueError("The comparison video does not exist.")
else:
video_link = FileLink(comparison_video_path)
display(
HTML(
f"Showing side by side comparison. If you cannot see the video in "
"your browser, please click on the following link to download "
f"the video<br>{video_link._repr_html_()}"
)
)
display(Video(comparison_video_path, width=800, embed=True))
###Output
_____no_output_____ |
Notebooks/Model Building.ipynb | ###Markdown
Importing libraries and utils
###Code
import utils
import pandas as pd
###Output
_____no_output_____
###Markdown
Loading the data
###Code
offense, defense = utils.get_data("stats")
salary = utils.get_data("salary")
AFC, NFC = utils.get_data("wins")
###Output
_____no_output_____
###Markdown
Verifying the data loaded correctly
###Code
offense[2]
defense[3]
salary
AFC[0]
NFC[2]
###Output
_____no_output_____
###Markdown
Cleaning the data
###Code
Salary = utils.clean_data("salary", test = salary)
Stats = utils.clean_data("stats", offense = offense, defense = defense)
Wins = utils.clean_data("wins", AFCl = AFC, NFCl = NFC)
###Output
_____no_output_____
###Markdown
Verifying the data cleaned correctly
###Code
Salary
Stats
Wins
###Output
_____no_output_____
###Markdown
Beginning cluster analysis
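The clustering helpers live in the author's `utils` module, which is not shown here. A minimal sketch of what a cluster-count search like `utils.find_clusters` might do, assuming KMeans with silhouette scoring, is:
```python
# Hypothetical stand-in for utils.find_clusters (the real helper is not shown)
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def find_clusters_sketch(X, k_range=range(2, 11)):
    for k in k_range:
        labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
        print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
```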
###Code
CSalary = Salary.drop(["YEAR", "TEAM"], axis = 1)
utils.find_clusters(CSalary)
SCSalary = utils.scale_data(CSalary)
utils.find_clusters(SCSalary)
#The scores after scaling are significantly worse. Considering that all of the salary numbers are in the same unit (% of cap), it may be best not to scale here after all
###Output
_____no_output_____
###Markdown
Using PCA
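As with `find_clusters`, the PCA helpers are defined in `utils`; rough scikit-learn equivalents (assumptions on my part, not the author's exact code) could be:
```python
import numpy as np
from sklearn.decomposition import PCA

def pca_exp_var_sketch(X):
    # Cumulative explained variance by number of components
    print(np.cumsum(PCA().fit(X).explained_variance_ratio_))

def pca_sketch(X, var_threshold):
    # A float n_components keeps enough components to reach that variance share
    return PCA(n_components=var_threshold).fit_transform(X)
```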
###Code
utils.pca_exp_var(CSalary)
PCSalary = utils.pca(CSalary, .99)
utils.find_clusters(PCSalary)
# 6 clusters appears to be a good choice based on the silhouette score, so I choose this number
###Output
_____no_output_____
###Markdown
Clustering the data using KMeans
###Code
clusters = utils.cluster_data(PCSalary, 6)
clusters
###Output
_____no_output_____
###Markdown
Adding the cluster assignments to the unscaled data for easier interpretation
###Code
SalaryClustered = utils.add_clusters(Salary, clusters)
SalaryClustered
###Output
_____no_output_____
###Markdown
Graphing components from PCA
###Code
pcadf = pd.DataFrame(PCSalary)
pcadf.columns = ("PC1", "PC2","PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "PC10", "PC11", "PC12", "PC13", "PC14", "PC15", "PC16", "PC17")
pcadf = utils.add_clusters(pcadf, clusters)
cluster0, cluster1, cluster2, cluster3, cluster4, cluster5 = utils.break_clusters(pcadf)
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "PC1", "PC2", "Component 1", "Component 2", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "PC2", "PC3", "Component 2", "Component 3", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
###Output
_____no_output_____
###Markdown
Examining the clustered salary data
###Code
SalaryClustered.groupby(["cluster"]).count()
SalaryClustered.groupby(["cluster"]).mean()
SalaryClustered.groupby(["cluster"]).std()
SalaryClustered["Offense"] = SalaryClustered["QB"] + SalaryClustered["RB"] + SalaryClustered["FB"] + SalaryClustered["WR"] + SalaryClustered["TE"] + SalaryClustered["T"] + SalaryClustered["RT"] + SalaryClustered["LT"] + SalaryClustered["G"] + SalaryClustered["C"]
SalaryClustered["Defense"] = SalaryClustered["DE"] + SalaryClustered["DT"] + SalaryClustered["OLB"] + SalaryClustered["ILB"] + SalaryClustered["LB"] + SalaryClustered["CB"] + SalaryClustered["SS"] + SalaryClustered["FS"] + SalaryClustered["S"]
SalaryClustered["Special Teams"] = SalaryClustered["K"] + SalaryClustered["P"] + SalaryClustered["LS"]
SalaryClustered.groupby(["cluster"]).mean()
cluster0, cluster1, cluster2, cluster3, cluster4, cluster5 = utils.break_clusters(SalaryClustered)
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Offense", "Defense", "% Of Cap Spent on Offense", "% Of Cap Spent on Defense", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
###Output
_____no_output_____
###Markdown
Adding win % and stats to check spending value
###Code
WSalaryClustered = (SalaryClustered.merge(Wins, how='inner', on=["YEAR","TEAM"]))
SWSalaryClustered = (WSalaryClustered.merge(Stats, how='inner', on=["YEAR","TEAM"]))
SWSalaryClustered.groupby(["cluster"]).mean()
cluster0, cluster1, cluster2, cluster3, cluster4, cluster5 = utils.break_clusters(SWSalaryClustered)
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds_x", "Offense", "Offensive Yards Per Game", "% Of Cap Spent on Offense", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds_y", "Defense", "Defensive Yards Per Game", "% Of Cap Spent on Defense", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.1_x", "QB", "Offensive Passing Yards Per Game", "% Of Cap Spent on QB", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.1_x", "WR", "Offensive Passing Yards Per Game", "% Of Cap Spent on WR", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.2_x", "RB", "Offensive Rushing Yards Per Game", "% Of Cap Spent on RB", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.1_y", "CB", "Defensive Passing Yards Per Game", "% Of Cap Spent on CB", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Offense", "W%", "% Of Cap Spent on Offense", "Win Percentage", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Defense", "W%", "% Of Cap Spent on Defense", "Win Percentage", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "QB", "W%", "% Of Cap Spent on QB", "Win Percentage", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5")
###Output
_____no_output_____
###Markdown
Importing libraries and utils
###Code
import utils
offense, defense = utils.get_data()
###Output
_____no_output_____
###Markdown
Verifying the data is loaded correctly
###Code
offense[4]
defense[7]
###Output
_____no_output_____
###Markdown
Cleaning and merging the data into a single frame
###Code
Teams, ClusterTeams = utils.clean_data(offense, defense)
###Output
_____no_output_____
###Markdown
Verifying the data has loaded correctly
###Code
Teams
ClusterTeams
###Output
_____no_output_____
###Markdown
Scaling the data
###Code
ScaledClusterTeams = utils.scale_data(ClusterTeams)
###Output
_____no_output_____
###Markdown
Verifying the data scaled correctly
###Code
ScaledClusterTeams
###Output
_____no_output_____
###Markdown
Finding the number of clusters
###Code
import sys
!conda install --yes --prefix {sys.prefix} -c districtdatalabs yellowbrick
utils.find_clusters(ScaledClusterTeams)
###Output
C:\Users\Michael\anaconda3\lib\site-packages\sklearn\utils\deprecation.py:143: FutureWarning: The sklearn.metrics.classification module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
C:\Users\Michael\anaconda3\lib\site-packages\yellowbrick\utils\kneed.py:182: YellowbrickWarning: No "knee" or "elbow point" detected This could be due to bad clustering, no actual clusters being formed etc.
warnings.warn(warning_message, YellowbrickWarning)
C:\Users\Michael\anaconda3\lib\site-packages\yellowbrick\utils\kneed.py:140: YellowbrickWarning: No 'knee' or 'elbow point' detected This could be due to bad clustering, no actual clusters being formed etc.
warnings.warn(warning_message, YellowbrickWarning)
C:\Users\Michael\anaconda3\lib\site-packages\yellowbrick\cluster\elbow.py:343: YellowbrickWarning: No 'knee' or 'elbow' point detected, pass `locate_elbow=False` to remove the warning
warnings.warn(warning_message, YellowbrickWarning)
###Markdown
Using PCA
###Code
utils.pca_exp_var(ScaledClusterTeams)
#It appears that some reduction can be done here
pcaTeams = utils.pca(ScaledClusterTeams, .90)
utils.find_clusters(pcaTeams)
#Since it appears that 3 clusters is the best on both methods, we will use 3 clusters
###Output
_____no_output_____
###Markdown
Clustering the data using KMeans
###Code
clusters = utils.cluster_data(pcaTeams, 3)
clusters
###Output
_____no_output_____
###Markdown
Adding the cluster assignments to the unscaled data for easier interpretation
###Code
TeamsClustered = utils.add_clusters(Teams, clusters)
TeamsClustered
TeamsClustered.groupby(["cluster"]).count()
TeamsClustered.groupby(["cluster"]).mean()
###Output
_____no_output_____
###Markdown
Graphing components from PCA
###Code
import pandas as pd
pcadf = pd.DataFrame(pcaTeams)
pcadf.columns = ("PC1", "PC2","PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "PC10", "PC11", "PC12", "PC13")
pcadf = utils.add_clusters(pcadf, clusters)
cluster0, cluster1, cluster2 = utils.break_clusters(pcadf)
utils.plot(cluster0, cluster1, cluster2, "PC1", "PC2", "Component 1", "Component 2", "Cluster 0", "Cluster 1", "Cluster 2")
utils.plot(cluster0, cluster1, cluster2, "PC2", "PC3", "Component 2", "Component 3", "Cluster 0", "Cluster 1", "Cluster 2")
###Output
_____no_output_____
###Markdown
Graphing the original data with clusters to find meaningful differences
###Code
cluster0, cluster1, cluster2 = utils.break_clusters(TeamsClustered)
utils.plot(cluster0, cluster1, cluster2, "PF_x", "PF_y", "Points Per Game For", "Points Per Game Against", "Cluster 0", "Cluster 1", "Cluster 2")
utils.plot(cluster0, cluster1, cluster2, "Yds_x", "Yds_y", "Yards Per Game For", "Yards Per Game Against", "Cluster 0", "Cluster 1", "Cluster 2")
utils.plot(cluster0, cluster1, cluster2, "TO_x", "TO_y", "Turnovers Per Game", "Take Aways Per Game", "Cluster 0", "Cluster 1", "Cluster 2")
utils.plot(cluster0, cluster1, cluster2, "Yds.2_x", "Yds.1_x", "Rushing Yards Per Game For", "Passing Yards Per Game For", "Cluster 0", "Cluster 1", "Cluster 2")
utils.plot(cluster0, cluster1, cluster2, "Yds.2_y", "Yds.1_y", "Rushing Yards Per Game Against", "Passing Yards Per Game Against", "Cluster 0", "Cluster 1", "Cluster 2")
utils.plot(cluster0, cluster1, cluster2, "Pen_x", "Pen_y", "Offensive Penalties Committed", "Defensive Penalties Committed", "Cluster 0", "Cluster 1", "Cluster 2")
###Output
_____no_output_____
###Markdown
Loading, cleaning, and merging the data
###Code
pbp = utils.get_data("pbp")
weather = utils.get_data("weather")
pbp = utils.clean_data("pbp", pbp)
weather = utils.clean_data("weather", weather)
FullData = pbp.merge(weather, how = 'inner', on = ('game_date', 'home_team'))
FullData
###Output
_____no_output_____
###Markdown
Finding the amounts of made and missed field goals, and calculating the percentage of made field goals in the data
###Code
FullData.loc[:, "field_goal_result"].value_counts()
4122/(759+4122)
###Output
_____no_output_____
###Markdown
Checking for null values
###Code
FullData.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 4881 entries, 0 to 4880
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 posteam 4881 non-null object
1 game_date 4881 non-null datetime64[ns]
2 half_seconds_remaining 4881 non-null float64
3 game_half 4881 non-null object
4 field_goal_result 4881 non-null object
5 kick_distance 4881 non-null float64
6 score_differential 4881 non-null float64
7 kicker_player_name 4881 non-null object
8 kicker_player_id 4881 non-null object
9 home_team 4881 non-null object
10 stadium 4881 non-null object
11 weather_temperature 4881 non-null float64
12 weather_wind_mph 4881 non-null float64
13 weather_detail 4881 non-null object
dtypes: datetime64[ns](1), float64(5), object(8)
memory usage: 572.0+ KB
###Markdown
Separating the predictors and the target and using LabelEncoder to encode the target
###Code
from sklearn.preprocessing import LabelEncoder
LabEnc = LabelEncoder()
LabEnc.fit(FullData.loc[:, "field_goal_result"])
y = LabEnc.transform(FullData.loc[:, "field_goal_result"])
X = FullData[["posteam", "half_seconds_remaining", "game_half", "kick_distance", "score_differential", "kicker_player_name", "stadium", "weather_temperature", "weather_wind_mph", "weather_detail"]]
#After encoding, a result of made is given the value of 0, while a result of missed is given the value of 1
y
###Output
_____no_output_____
###Markdown
Creating Train and Test Splits
###Code
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.2, random_state = 52594, stratify = y)
###Output
_____no_output_____
###Markdown
Finding the number of made and missed field goals in the training data, as well as the percentage of field goals made
###Code
import numpy as np
np.unique(ytrain, return_counts = True)
3297/(3297 + 607)
###Output
_____no_output_____
###Markdown
Encoding the training data with OneHotEncoding
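`utils.ohe` itself is not shown in this notebook; a rough equivalent (a sketch assuming scikit-learn's OneHotEncoder with the fixed category lists from `utils.get_cat`, so train and test produce identical columns) could be:
```python
# Hypothetical stand-in for utils.ohe
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

def ohe_sketch(cat_cols, num_cols, df, categories):
    enc = OneHotEncoder(categories=categories, handle_unknown="ignore", sparse=False)
    encoded = enc.fit_transform(df[cat_cols])
    out = pd.DataFrame(encoded, columns=enc.get_feature_names(cat_cols), index=df.index)
    return pd.concat([out, df[num_cols]], axis=1)
```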
###Code
CatVar = ['posteam', 'game_half', 'kicker_player_name', 'stadium', 'weather_detail']
OtherVar = ['half_seconds_remaining', "kick_distance", "score_differential", "weather_temperature", "weather_wind_mph"]
cat = utils.get_cat(CatVar, X)  # Gets the possible category values from the full frame so encoding doesn't produce different column counts between train and test
enc_Xtrain = utils.ohe(CatVar, OtherVar, Xtrain, cat)
enc_Xtest = utils.ohe(CatVar, OtherVar, Xtest, cat)
###Output
_____no_output_____
###Markdown
Scaling the data
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(enc_Xtrain)
scaled_enc_Xtrain = scaler.transform(enc_Xtrain)
scaled_enc_Xtest = scaler.transform(enc_Xtest)
scaled_enc_Xtrain = pd.DataFrame(scaled_enc_Xtrain, columns = enc_Xtrain.columns)
scaled_enc_Xtrain
###Output
_____no_output_____
###Markdown
Building a base logistic regression model
###Code
from sklearn.linear_model import LogisticRegression
BaseLogReg = utils.cross_validate(scaled_enc_Xtrain, ytrain, "LogReg", penalty = 'none', score = 'f1')
#The f1 score per cross validation test. The scores are low and highly variable
BaseLogReg
utils.get_average_score(BaseLogReg)
#Repeating for classification accuracy. It looks decent, but it is lower than simply predicting a make on every kick
BaseLogReg = utils.cross_validate(scaled_enc_Xtrain, ytrain, "LogReg", penalty = 'none')
BaseLogReg
utils.get_average_score(BaseLogReg)
#Confusion matrix for this model on the training data
BaseLogRegFit = LogisticRegression(penalty = 'none', max_iter = 10000)
BaseLogRegFit.fit(scaled_enc_Xtrain, ytrain)
ypred = BaseLogRegFit.predict(scaled_enc_Xtrain)
from sklearn.metrics import confusion_matrix
matrix = confusion_matrix(ytrain, ypred)
matrix
from sklearn.metrics import f1_score
f1_score(ytrain, ypred)
BaseLogRegFit.score(scaled_enc_Xtrain, ytrain)
###Output
_____no_output_____
###Markdown
Grid search for LogReg
###Code
from sklearn.model_selection import GridSearchCV
grid = {'C': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100], 'penalty': ['l1', 'l2']}
LogRegGrid = LogisticRegression(solver = 'saga', max_iter=10000)
GridSearch = GridSearchCV(estimator = LogRegGrid, param_grid = grid, cv = 10, return_train_score = True, verbose = 2, scoring = 'f1')
GridSearch.fit(scaled_enc_Xtrain, ytrain)
pd.DataFrame(GridSearch.cv_results_).sort_values('mean_test_score', ascending = False).T
#Confusion matrix for the winning model from the grid search (again evaluated on the training data)
GridLogRegFit = LogisticRegression(C = 10, penalty = 'l2', max_iter = 10000)
GridLogRegFit.fit(scaled_enc_Xtrain, ytrain)
ypred = GridLogRegFit.predict(scaled_enc_Xtrain)
from sklearn.metrics import confusion_matrix
matrix = confusion_matrix(ytrain, ypred)
matrix
#This model is almost identical to the base one, performing slightly worse on the confusion matrix
f1_score(ytrain, ypred)
GridLogRegFit.score(scaled_enc_Xtrain, ytrain)
###Output
_____no_output_____
###Markdown
Installing imbalanced learn for oversampling
###Code
pip install -U imbalanced-learn
###Output
Requirement already up-to-date: imbalanced-learn in c:\users\michael\anaconda3\lib\site-packages (0.7.0)
Requirement already satisfied, skipping upgrade: scipy>=0.19.1 in c:\users\michael\anaconda3\lib\site-packages (from imbalanced-learn) (1.5.0)
Requirement already satisfied, skipping upgrade: numpy>=1.13.3 in c:\users\michael\anaconda3\lib\site-packages (from imbalanced-learn) (1.18.5)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in c:\users\michael\anaconda3\lib\site-packages (from imbalanced-learn) (0.16.0)
Requirement already satisfied, skipping upgrade: scikit-learn>=0.23 in c:\users\michael\anaconda3\lib\site-packages (from imbalanced-learn) (0.23.1)
Requirement already satisfied, skipping upgrade: threadpoolctl>=2.0.0 in c:\users\michael\anaconda3\lib\site-packages (from scikit-learn>=0.23->imbalanced-learn) (2.1.0)
Note: you may need to restart the kernel to use updated packages.
###Markdown
Importing random over sampler
###Code
from imblearn.over_sampling import RandomOverSampler
###Output
_____no_output_____
###Markdown
Using RandomOverSampler to over sample the data to improve the f1 score
###Code
#Using f1 as the scorer. The values used for this were determined by experimentation, looking for the best combination of f1 and classification accuracy across the cross validations
ROSLogReg = utils.cross_validate(scaled_enc_Xtrain, ytrain, "LogReg", penalty = 'l1', solver = 'saga', oversample = 'ros', ss = .6, score = 'f1')
utils.get_average_score(ROSLogReg)
ROSLogReg
#Redoing the above to get the classification accuracy score
ROSLogReg = utils.cross_validate(scaled_enc_Xtrain, ytrain, "LogReg", penalty = 'l1', solver = 'saga', oversample = 'ros', CatVar = CatVar, ss = .6, score = 'acc')
utils.get_average_score(ROSLogReg)
ROSLogReg
#Fitting with random oversampling and evaluating on the training data
ros = RandomOverSampler(random_state = 52594, sampling_strategy = .6)
rosX, rosy = ros.fit_resample(scaled_enc_Xtrain, ytrain)
rosLogReg = LogisticRegression(penalty = 'l1', solver = 'saga', max_iter = 10000).fit(rosX, rosy)
ypred = rosLogReg.predict(scaled_enc_Xtrain)
f1_score(ytrain, ypred)
rosLogReg.score(scaled_enc_Xtrain, ytrain)
matrix = confusion_matrix(ytrain, ypred)
matrix
###Output
_____no_output_____
###Markdown
Making a decision tree model using grid search
###Code
from sklearn.tree import DecisionTreeClassifier
DecisionTree = DecisionTreeClassifier(max_leaf_nodes = 100, min_samples_leaf= 200)
DecisionTree.fit(enc_Xtrain, ytrain)
from sklearn.model_selection import cross_validate
dtcv = cross_validate(DecisionTree, scaled_enc_Xtrain, ytrain, cv= 10, return_train_score = True, return_estimator = True, verbose = 2, scoring = 'f1')
#This suggests the model is simply predicting "make" for every kick, which will not work for the purposes of this project
dtcv['test_score']
###Output
_____no_output_____
###Markdown
Using grid search for Decision Tree
###Code
dtgrid = [{'max_depth':[1,2,3,4,5,6,7,8,9,10,15,20,25,30], 'max_features' :[1,2,3,4,5,6,7,8,9,10,15,20,25,30]}]
gridsearchdt = GridSearchCV(estimator = DecisionTreeClassifier(), param_grid = dtgrid, scoring = 'f1', cv = 10, verbose = 2)
griddt = gridsearchdt.fit(enc_Xtrain, ytrain)
pd.DataFrame(gridsearchdt.cv_results_).sort_values('mean_test_score', ascending = False).T
###Output
_____no_output_____
###Markdown
Using Random Over Sample on Decision Tree Model
###Code
#Using f1 as the scorer. The values used for this were determined by experimentation, looking for the best combination of f1 and classification accuracy across the cross validations
ROSDecisionTree = utils.cross_validate(scaled_enc_Xtrain, ytrain, "Tree", penalty = 'l1', solver = 'saga', oversample = 'ros', ss = .6, score = 'f1')
ROSDecisionTree
utils.get_average_score(ROSDecisionTree)
#Repeating with classification accuracy as the scorer, using the same experimentally determined values
ROSDecisionTree = utils.cross_validate(scaled_enc_Xtrain, ytrain, "Tree", penalty = 'l1', solver = 'saga', oversample = 'ros', ss = .6, score = 'acc')
ROSDecisionTree
utils.get_average_score(ROSDecisionTree)
###Output
_____no_output_____
###Markdown
Building Random Forest Model
###Code
from sklearn.ensemble import RandomForestClassifier
RanFor = RandomForestClassifier(n_estimators = 700, max_features = 'auto', max_depth = 20)
rfcv = cross_validate(RanFor, scaled_enc_Xtrain, ytrain, cv= 10, return_train_score = True, return_estimator = True, verbose = 2, scoring = "f1")
#These scores are very low
rfcv['test_score']
###Output
_____no_output_____
###Markdown
Using a grid search for a random forest model
###Code
rfcgrid = [{'max_depth':[1,2,3,4,5,6,7,8,9,10,15,20,25,30], 'max_features' :[1,2,3,4,5,6,7,8,9,10,15,20,25,30]}]
gridsearchrfc = GridSearchCV(estimator = RandomForestClassifier(), param_grid = rfcgrid, scoring = 'f1', cv = 10, verbose = 2)
gridRFC = gridsearchrfc.fit(enc_Xtrain, ytrain)
pd.DataFrame(gridsearchrfc.cv_results_).sort_values('mean_test_score', ascending = False).T
###Output
_____no_output_____
###Markdown
Using Random Over Sampler for Random Forest
###Code
#Using f1 as the scorer. The values used for this were determined by experimentation, looking for the best combination of f1 and classification accuracy across the cross validations
ROSRandomForest = utils.cross_validate(scaled_enc_Xtrain, ytrain, "RF", penalty = 'l1', solver = 'saga', oversample = 'ros', ss = 1.0, score = 'f1')
ROSRandomForest
utils.get_average_score(ROSRandomForest)
#Again for accuracy
ROSRandomForest = utils.cross_validate(scaled_enc_Xtrain, ytrain, "RF", penalty = 'l1', solver = 'saga', oversample = 'ros', ss = 1.0, score = 'acc')
ROSRandomForest
utils.get_average_score(ROSRandomForest)
###Output
_____no_output_____
###Markdown
The best performing model is the oversampled logistic regression model Fitting the model and using it on the test data
###Code
ros = RandomOverSampler(random_state = 52594, sampling_strategy = .6)
rosX, rosy = ros.fit_resample(scaled_enc_Xtrain, ytrain)
rosLogReg = LogisticRegression(penalty = 'l1', solver = 'saga', max_iter = 10000).fit(rosX, rosy)
ypred = rosLogReg.predict(scaled_enc_Xtest)
f1_score(ytest, ypred)
rosLogReg.score(scaled_enc_Xtest, ytest)
matrix = confusion_matrix(ytest, ypred)
matrix
###Output
_____no_output_____
###Markdown
Predicting 4 field goals from an NFL game (Ravens @ Eagles, 10/18/2020) Prediction 1, actual result: made
###Code
Predict1 = pd.DataFrame([['BAL', 732, 'Half1', 46.0, 14, "J.Tucker", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict1 = utils.ohe(CatVar, OtherVar, Predict1, cat)
scaled_enc_Predict1 = scaler.transform(enc_Predict1)
rosLogReg.predict_proba(scaled_enc_Predict1)
###Output
_____no_output_____
###Markdown
Predicting the same field goal, but if it was the other team and other kicker
###Code
Predict2 = pd.DataFrame([['PHI', 732, 'Half1', 46.0, 14, "J.Elliott", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict2 = utils.ohe(CatVar, OtherVar, Predict2, cat)
scaled_enc_Predict2 = scaler.transform(enc_Predict2)
rosLogReg.predict_proba(scaled_enc_Predict2)
###Output
_____no_output_____
###Markdown
While the percentages are suspect, it is clear that the model strongly accounts for the kicker and the kicking team Prediction 2, actual result: missed
###Code
Predict3 = pd.DataFrame([['PHI', 0, 'Half1', 52.0, -17, "J.Elliott", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict3 = utils.ohe(CatVar, OtherVar, Predict3, cat)
scaled_enc_Predict3 = scaler.transform(enc_Predict3)
rosLogReg.predict_proba(scaled_enc_Predict3)
###Output
_____no_output_____
###Markdown
Predicting the same field goal, but other team other kicker
###Code
Predict4 = pd.DataFrame([['BAL', 0, 'Half1', 52.0, -17, "J.Tucker", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict4 = utils.ohe(CatVar, OtherVar, Predict4, cat)
scaled_enc_Predict4 = scaler.transform(enc_Predict4)
rosLogReg.predict_proba(scaled_enc_Predict4)
###Output
_____no_output_____
###Markdown
Prediction 3, actual result: made
###Code
Predict5 = pd.DataFrame([['BAL', 601, 'Half2', 55.0, 10, "J.Tucker", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict5 = utils.ohe(CatVar, OtherVar, Predict5, cat)
scaled_enc_Predict5 = scaler.transform(enc_Predict5)
rosLogReg.predict_proba(scaled_enc_Predict5)
###Output
_____no_output_____
###Markdown
Predicting the same field goal, but other team, other kicker
###Code
Predict6 = pd.DataFrame([['PHI', 601, 'Half2', 55.0, 10, "J.Elliott", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict6 = utils.ohe(CatVar, OtherVar, Predict6, cat)
scaled_enc_Predict6 = scaler.transform(enc_Predict6)
rosLogReg.predict_proba(scaled_enc_Predict6)
###Output
_____no_output_____
###Markdown
Prediction 4, actual result: made
###Code
Predict7 = pd.DataFrame([['BAL', 437, 'Half2', 46.0, 13, "J.Tucker", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict7 = utils.ohe(CatVar, OtherVar, Predict7, cat)
scaled_enc_Predict7 = scaler.transform(enc_Predict7)
rosLogReg.predict_proba(scaled_enc_Predict7)
###Output
_____no_output_____
###Markdown
Predicting the same field goal, but other team, other kicker
###Code
Predict8 = pd.DataFrame([['PHI', 437, 'Half2', 46.0, 13, "J.Elliott", "Lincoln Financial Field", 65, 7, "Normal"]], columns = X.columns)
enc_Predict8 = utils.ohe(CatVar, OtherVar, Predict8, cat)
scaled_enc_Predict8 = scaler.transform(enc_Predict8)
rosLogReg.predict_proba(scaled_enc_Predict8)
###Output
_____no_output_____ |
Smit/Sem V/CS/Set_3.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/nishi1612/SC374-Computational-and-Numerical-Methods/blob/master/Set_3.ipynb) Set 3--- **Finding roots of a polynomial by the bisection method**
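For reference, bisection halves the bracketing interval $[a,b]$ at every step, so after $n$ iterations the midpoint lies within $\dfrac{b-a}{2^{n+1}}$ of a root; reaching a tolerance $\epsilon$ therefore needs about $n \ge \log_2\dfrac{b-a}{\epsilon} - 1$ iterations (roughly 13 iterations for $b-a=1$ and $\epsilon=0.0001$).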
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
from google.colab import files
def iterations(n, arr , i):
plt.plot(range(n),arr)
plt.xlabel('No. of iterations')
plt.ylabel('Value of c')
plt.grid(True)
plt.savefig("Iterations" + str(i) + ".png")
files.download("Iterations" + str(i) + ".png")
plt.show()
def graph(i):
plt.xlabel('x')
plt.ylabel('y')
plt.grid(True)
plt.legend(loc='upper right')
plt.savefig("Graph" + str(i) + ".png")
files.download("Graph" + str(i) + ".png")
plt.show()
def bissection( a,b,epsilon,k):
table = pd.DataFrame(columns=['a','b','c','b-c','f(a)*f(c)','Assign'])
c = (a+b)/2;
dist = b-c;
i = 0
arr = []
while(dist>epsilon):
ans_a = func(a,k);
ans_b = func(b,k);
ans_c = func(c,k);
ans = ""
if(ans_a*ans_c < 0):
b=c;
ans = "b=c"
else:
a=c;
ans = "a=c";
table.loc[i] = [a,b,c,dist,ans_a*ans_c,ans]
arr.append(c)
i = i+1
c = (a+b) / 2
dist = b-c
return (a+b)/2 ,i , arr , table;
def func(x,k):
if k==1:
return x**6 - x - 1;
elif k==2:
return x**3 - x**2 - x - 1;
elif k==3:
return x - 1 - 0.3*math.cos(x);
elif k==4:
return 0.5 + math.sin(x) - math.cos(x);
elif k==5:
return x - math.e**(-x);
elif k==6:
return math.e**(-x) - math.sin(x);
elif k==7:
return x**3 - 2*x - 2;
elif k==8:
return x**4 - x - 1;
elif k==9:
return math.e**(x) - x - 2;
elif k==10:
return 1- x + math.sin(x);
elif k==11:
return x - math.tan(x);
x = np.arange(-2,3,0.001)
plt.plot(x,x**6,label='$x^6$')
plt.plot(x,x+1,label="x+1")
graph(1)
plt.plot(x**6-x-1,label='$x^6$ - x - 1')
graph(1)
a , n , arr , table = bissection(1,2,0.001,1)
iterations(n,arr,1)
print(str(a) + "\n" + str(func(a,1)))
table
b , n , arr , table = bissection(-1,0,0.001,1)
iterations(n,arr,1)
print(str(b) + "\n" + str(func(b,1)))
table
x = np.arange(-2,3,0.001)
plt.plot(x,x**3,label='$x^3$')
plt.plot(x,x**2 + x + 1,label='$x^2 + x + 1$')
graph(2)
plt.plot(x**3 - (x**2 + x + 1),label='$x^3 - x^2 - x - 1$')
graph(2)
a , n , arr, table = bissection(1,2,0.0001,2)
iterations(n,arr,2)
print(str(a) + "\n" + str(func(a,2)))
table
x = np.arange(-3,5,0.001)
plt.plot(x,x-1,label='$x-1$')
plt.plot(x,0.3*np.cos(x),label='$0.3cos(x)$')
graph(3)
plt.plot(x,x-1-0.3*np.cos(x) , label='$x - 1 - 0.3cos(x)$')
graph(3)
a , n , arr , table = bissection(0,2,0.0001,3)
iterations(n,arr,3)
print(str(a) + "\n" + str(func(a,3)))
table
x = np.arange(-10,10,0.001)
plt.plot(x,0.5 + np.sin(x),label='$0.5 + sin(x)$')
plt.plot(x,np.cos(x),label='$cos(x)$')
graph(4)
plt.plot(x,0.5 + np.sin(x) - np.cos(x),label='$0.5 + sin(x) - cos(x)$')
graph(4)
a , n , arr , table = bissection(0,2,0.0001,4)
iterations(n,arr,4)
print(str(a) + "\n" + str(func(a,4)))
table
x = np.arange(-0,5,0.001)
plt.plot(x,x,label='$x$')
plt.plot(x,np.e**(-x),label='$e^{-x}$')
graph(5)
plt.plot(x,x - np.e**(-x),label='$x - e^{-x}$')
graph(5)
a , n , arr , table = bissection(0,1,0.0001,5)
iterations(n,arr,5)
print(str(a) + "\n" + str(func(a,5)))
table
x = np.arange(0,5,0.001)
plt.plot(x,np.sin(x),label='$sin(x)$')
plt.plot(x,np.e**(-x),label='$e^{-x}$')
graph(6)
plt.plot(x,np.sin(x) - np.e**(-x),label='$sin(x) - e^{-x}$')
graph(6)
a , n , arr , table = bissection(0,1,0.0001,6)
iterations(n,arr,6)
print(str(a) + "\n" + str(func(a,6)))
table
a , n , arr , table = bissection(3,4,0.0001,6)
iterations(n,arr,6)
print(str(a) + "\n" + str(func(a,6)))
table
x = np.arange(-2,4,0.001)
plt.plot(x,x**3,label='$x^3$')
plt.plot(x,2*x+2,label='$2x + 2$')
graph(7)
plt.plot(x,x**3 - 2*x - 2,label='$x^3 - 2x - 2$')
graph(7)
a , n , arr , table = bissection(1,2,0.0001,7)
iterations(n,arr,7)
print(str(a) + "\n" + str(func(a,7)))
table
x = np.arange(-2,4,0.001)
plt.plot(x,x**4,label='$x^4$')
plt.plot(x,x+1,label='$x+1$')
graph(8)
plt.plot(x,x**4 - x - 1,label='$x^4 - x - 1$')
graph(8)
a , n , arr , table = bissection(-1,0,0.0001,8)
iterations(n,arr,8)
print(str(a) + "\n" + str(func(a,8)))
table
a , n , arr , table = bissection(1,2,0.0001,8)
iterations(n,arr,8)
print(str(a) + "\n" + str(func(a,8)))
table
x = np.arange(-5,4,0.001)
plt.plot(x,np.e**(x),label='$e^x$')
plt.plot(x,x+2,label='$x+2$')
graph(9)
plt.plot(x,np.e**(x) - x - 2,label='$e^2 - x - 2$')
graph(9)
a , n , arr , table = bissection(1,2,0.0001,9)
iterations(n,arr,9)
print(str(a) + "\n" + str(func(a,9)))
table
x = np.arange(-5,4,0.001)
plt.plot(x,-np.sin(x),label='$-sin(x)$')
plt.plot(x,1-x,label='$1 - x$')
graph(10)
plt.plot(x,-np.sin(x) - 1 + x,label='$-sin(x) - 1 + x$')
graph(10)
a , n , arr , table = bissection(0,2,0.0001,10)
iterations(n,arr,10)
print(str(a) + "\n" + str(func(a,10)))
table
x = np.arange(-10,10,.001)
plt.plot(np.tan(x),label='$tan(x)$')
plt.plot(x,label='$x$')
graph(11)
plt.plot(np.tan(x) - x,label='$x - tan(x)$')
graph(11)
a , n , arr , table = bissection(4,5,0.0001,11)
iterations(n,arr,11)
print(str(a) + "\n" + str(func(a,11)))
table
a , n , arr , table = bissection(80,120,0.0001,11)
iterations(n,arr,11)
print(str(a) + "\n" + str(func(a,11)))
table
###Output
_____no_output_____ |
concepts/datastore/datastore-api.ipynb | ###Markdown
Azure ML Datastore Python SDKdescription: overview of the AML Datastore Python SDK
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
ws
import git
from pathlib import Path
# get root of git repo
prefix = Path(git.Repo(".", search_parent_directories=True).working_tree_dir)
ds = ws.get_default_datastore()
ds
from azureml.core import Datastore
name = "TuringNLR"
container_name = "public"
account_name = "turingnlr"
# register a new datastore - use public Turing blob container
ds2 = input_ds = Datastore.register_azure_blob_container(
ws, name, container_name, account_name
)
ds2
ws.datastores
# upload files, then create a Dataset from the datastore and path to use
ds.upload(
str(prefix.joinpath("data", "raw", "iris")),
target_path="datasets/iris",
show_progress=True,
)
###Output
_____no_output_____
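A possible next step (a sketch; the `iris-files` dataset name below is my own illustration, not from the original notebook) is to reference the uploaded files as a FileDataset:
```python
from azureml.core import Dataset

# Build a FileDataset from the datastore path uploaded above and register it
iris_ds = Dataset.File.from_files(path=(ds, "datasets/iris"))
iris_ds = iris_ds.register(ws, name="iris-files", create_new_version=True)
```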
###Markdown
Azure ML Datastore Python SDKdescription: overview of the AML Datastore Python SDK
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
ws
import git
from pathlib import Path
# get root of git repo
prefix = Path(git.Repo(".", search_parent_directories=True).working_tree_dir)
ds = ws.get_default_datastore()
ds
from azureml.core import Datastore
name = "TuringNLR"
container_name = "public"
account_name = "turingnlr"
# register a new datastore - use public Turing blob container
ds2 = input_ds = Datastore.register_azure_blob_container(
ws, name, container_name, account_name
)
ds2
ws.datastores
# upload files, then create a Dataset from the datastore and path to use
ds.upload(
str(prefix.joinpath("data", "raw", "iris")),
target_path="datasets/iris",
show_progress=True,
)
###Output
_____no_output_____ |
ไธๅๅคงๅญฆ/D02ๆบๅจๅญฆไน ไธๆทฑๅบฆๅญฆไน /Torchๅบ็ก/02Tensor-07Tensor็ๅฏผๆฐไธ่ชๅจๆฑๅฏผ.ipynb | ###Markdown
Settings related to automatic differentiation
- Tensor attributes:
  - requires_grad=True — whether the tensor takes part in differentiation;
  - is_leaf:
    - a leaf node is not the result of a computation;
    - a Tensor created by the user has is_leaf=True (even when requires_grad=True, is_leaf is still True);
    - a Tensor with requires_grad=False has is_leaf=True;
  - grad_fn:
    - records the function used to compute the derivative;
  - grad:
    - holds the returned derivative;
  - dtype:
    - only tensors of dtype torch.float can be differentiated.

1. An example of differentiation
###Code
import torch
# x่ชๅ้
# x: the independent variable
x = torch.Tensor([5])
x.requires_grad=True
# y: the dependent variable
y = x ** 2
# compute the derivative
y.backward()
# the resulting derivative
###Output
tensor([10.])
###Markdown
2. ๆฑๅฏผ็ๅฏ่งๅ(ๅฏผๆฐๅฝๆฐ็ๆฒ็บฟ)
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import torch
# x่ชๅ้
# x: the independent variable
x = torch.linspace(0, 10, 100)
x.requires_grad=True
# y: the dependent variable
y = (x - 5) ** 2 + 3
z = y.sum()
# compute the derivative
z.backward()
print()
# visualize
plt.plot(x.detach(), x.grad.detach(), color=(1, 0, 1, 1), label='$y=2(x-5)$')
plt.legend()
plt.show()
# print(x.grad)
# print(x)
###Output
###Markdown
3. ๆฑๅฏผ็ธๅ
ณ็ๅฑๆงๅผ
###Code
import torch
# x่ชๅ้
x = torch.Tensor([5])
x.requires_grad=True
# ๆฑๅฏผๅ็ๅฑๆง
print("-------------ๆฑๅฏผๅx")
print("leaf:", x.is_leaf)
print("grad_fn:", x.grad_fn)
print("grad:", x.grad)
# yๅ ๅ้
y = x ** 2
print("-------------ๆฑๅฏผๅy")
print("requires_grad:", y.requires_grad)
print("leaf:", y.is_leaf)
print("grad_fn:", y.grad_fn)
print("grad:", y.grad)
# ๆฑๅฏผ
y.backward() # ๅชๅฏนๆ ้่ฟ็ฎ
print("-------------ๆฑๅฏผๅx")
# ๆฑๅฏผๅ็ๅฑๆง
print("leaf:", x.is_leaf)
print("grad_fn:", x.grad_fn)
print("grad:", x.grad)
print("-------------ๆฑๅฏผๅy")
print("requires_grad:", y.requires_grad)
print("leaf:", y.is_leaf)
print("grad_fn:", y.grad_fn)
print("grad:", y.grad)
###Output
-------------x before backward
leaf: True
grad_fn: None
grad: None
-------------y before backward
requires_grad: True
leaf: False
grad_fn: <PowBackward0 object at 0x11ee90cf8>
grad: None
-------------x after backward
leaf: True
grad_fn: None
grad: tensor([10.])
-------------y after backward
requires_grad: True
leaf: False
grad_fn: <PowBackward0 object at 0x11ee90828>
grad: None
###Markdown
Tensor's backward function - definition: ```python backward(self, gradient=None, retain_graph=None, create_graph=False)``` - Parameters: - gradient=None: the gradient tensor to differentiate against; - retain_graph=None: keep the graph; otherwise the graph built for the pass is freed once the computation finishes; - create_graph=False: build the graph of the derivative, mainly used for higher-order derivatives. The general differentiation pattern - Function: - $z = 2x + 3y$ - By hand: - $\dfrac{\partial{z}}{\partial{x}} = 2$
###Code
import torch
x = torch.Tensor([1, 2, 3])
x.requires_grad=True # this attribute must be set before the graph is built by the z = 2*x + 3*y expression
y = torch.Tensor([4, 5, 6])
z = 2*x + 3*y
z.backward(x) # differentiate: the derivative w.r.t. x is 2, but x.grad is 2 * x (scaled by the gradient tensor)
print(x.grad, y.grad, z.grad) # y has requires_grad=False, so no gradient is computed for it
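# note: the graph is freed after this backward pass; a second z.backward(x)
# here would raise a RuntimeError unless the first call had passed
# retain_graph=True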
###Output
tensor([2., 4., 6.]) None None
###Markdown
Understanding the derivative - Function: - $z = x^2$ - By hand: - $\dfrac{\partial{z}}{\partial{x}} = 2x$ - $\color{red}{\text{How is the result above actually computed?}}$ The case where the result tensor is a scalar - If z is a scalar, the derivative is computed directly: $\dfrac{\partial{z}}{\partial{x}} = 2x$
###Code
import torch
x = torch.Tensor([2])
x.requires_grad=True
z = x**2 # the function to differentiate
z.backward() # differentiate w.r.t. x: 2 * x, so the derivative is 2x = 4
print(x.grad, z.grad)
###Output
tensor([4.]) None
###Markdown
The case where the result tensor is a vector - If z is a vector, first take the inner product of z and x to obtain a scalar result, then differentiate. - $z = x^2$ - $l = z \cdot x$ - $\dfrac{\partial{l}}{\partial{x}} = \dfrac{\partial{l}}{\partial{z}} \dfrac{\partial{z}}{\partial{x}} = x \dfrac{\partial{z}}{\partial{x}} = x \cdot 2x$
###Code
import torch
x = torch.Tensor([2])
x.requires_grad=True
y = x**2 # the function to differentiate
y.backward(x) # x * 2x = 2 * 2 * 2 = 8
print(x.grad, y.grad)
print(x.grad/x) # dividing by the gradient tensor recovers the true derivative 2x = 4
###Output
tensor([8.]) None
tensor([4.], grad_fn=<DivBackward0>)
###Markdown
Using a ones vector as the gradient - The derivation above reveals a few default behaviours of autograd: - 1. Calling z.backward() with no gradient argument differentiates with respect to every leaf tensor in the graph marked requires_grad=True; - when all leaf nodes have requires_grad=False, an exception is raised: - `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn` - 2. Calling z.backward(x) passes the gradient tensor explicitly; - note that this does not select a variable to differentiate: even with x passed, gradients are still computed for every leaf node with requires_grad=True. - The example below illustrates this with multiple leaf nodes: - even though only x is passed, y is differentiated as well.
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([4, 5, 6])
x.requires_grad=True
y.requires_grad=True
z = 3*x + 2*y # the function to differentiate
z.backward(x) # pass x as the gradient tensor
print(x.grad, y.grad) # [3., 6., 9.] (derivative 3, scaled by x) and [2., 4., 6.] (derivative 2, scaled by x)
print(x.grad/x, y.grad/x) # dividing by x recovers the true derivatives: 3 and 2
###Output
tensor([3., 6., 9.]) tensor([2., 4., 6.])
tensor([3., 3., 3.], grad_fn=<DivBackward0>) tensor([2., 2., 2.], grad_fn=<DivBackward0>)
###Markdown
- The example above shows that backward's tensor argument merely turns the vector-valued function into a scalar before differentiating; it carries no meaning of selecting which variable (tensor) the gradient is taken for. - Since the argument only performs this vector-to-scalar conversion, we can simply set it to a tensor of ones. The reasoning: - $z = x^2$ - $l = z \cdot 1$ - $\dfrac{\partial{l}}{\partial{x}} = \dfrac{\partial{l}}{\partial{z}} \dfrac{\partial{z}}{\partial{x}} = \dfrac{\partial{z \cdot 1 }}{\partial{z}} \dfrac{\partial{z}}{\partial{x}} = \dfrac{\partial{z}}{\partial{x}} = 2x$ - Using a ones tensor as the gradient
###Code
import torch
x = torch.Tensor([1, 2, 3])
x.requires_grad=True
z = x**2 # the function to differentiate
z.backward(torch.ones_like(x))
print(x.grad, z.grad)
###Output
tensor([2., 4., 6.]) None
###Markdown
- The operation below rests on exactly the same principle as passing a ones tensor - the user simply performs the inner product (here, the sum) by hand.
###Code
import torch
x = torch.Tensor([1, 2, 3])
x.requires_grad=True
z = (x**2).sum() # sum directly to obtain a scalar
z.backward()
print(x.grad, z.grad)
###Output
tensor([2., 4., 6.]) None
###Markdown
A more complex differentiation example - the cell below follows the computation graph xy = x + y, xy2 = xy ** 2, z3 = z ** 3, xy2z3 = xy2 * z3 (the original diagram is not preserved in this export).
###Code
import torch
# leaf nodes
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
# intermediate nodes
xy = x + y
xy2 = xy ** 2
z3 = z ** 3
xy2z3=xy2 * z3
# differentiate
xy2z3.backward(torch.Tensor([1.0, 1.0, 1.0]))
print(x.grad, y.grad, z.grad)
print(xy.grad, xy2.grad, z3.grad, xy2z3.grad) # no gradients: these are not leaf nodes
print(xy.grad_fn, xy2.grad_fn, z3.grad_fn, xy2z3.grad_fn)
print(xy.requires_grad, xy2.requires_grad, z3.requires_grad, xy2z3.requires_grad)
###Output
tensor([ 8., 96., 432.]) tensor([ 8., 96., 432.]) tensor([ 48., 432., 1728.])
None None None None
<AddBackward0 object at 0x11efe69e8> <PowBackward0 object at 0x11efe6940> <PowBackward0 object at 0x11efe6a90> <MulBackward0 object at 0x11efe6ac8>
True True True True
###Markdown
Intermediate derivatives - With the pattern above, only the derivatives of the input (leaf) variables are computed; the derivatives of intermediate variables cannot be read back directly. To obtain an intermediate variable's derivative, register a callback hook and receive the gradient through it. - An example of retrieving an intermediate derivative
###Code
import torch
# leaf nodes
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
# intermediate nodes
xy = x + y
# xyz = xy * z
# xyz.backward(torch.Tensor([1, 1, 1]))
xyz = torch.dot(xy, z)
# ====================
def get_xy_grad(grad):
    print(F"gradient of xy: { grad }") # could also be saved to a global variable for later use
xy.register_hook(get_xy_grad)
# ====================
xyz.backward()
print(x.grad, y.grad, z.grad)
print(xy.grad, y.grad, z.grad)
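# note: register_hook returns a removable handle, e.g.
# handle = xy.register_hook(get_xy_grad); call handle.remove() once the hook
# is no longer needed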
###Output
gradient of xy: tensor([1., 2., 3.])
tensor([1., 2., 3.]) tensor([1., 2., 3.]) tensor([4., 6., 8.])
None tensor([1., 2., 3.]) tensor([4., 6., 8.])
###Markdown
Higher-order derivatives 1. The create_graph parameter keeps the graph of the derivative itself, which enables computing higher-order derivatives. 2. Because a higher-order derivative is not a leaf node, it has to be retrieved through a callback hook.
###Code
import torch
x = torch.Tensor([1])
x.requires_grad=True
z = x**6 # the function to differentiate
z.backward(create_graph=True) # retain_graph keeps the original graph; create_graph also keeps the graph of the derivative
print(x.grad) # first derivative 6*x**5, which is 6 at x = 1
# ====================
def get_xy_grad(grad):
    print(F"higher-order derivative of x.grad: { grad }") # could also be saved to a global variable
x.register_hook(get_xy_grad)
# ====================
x.grad.backward(create_graph=True)
###Output
tensor([6.], grad_fn=<CloneBackward>)
higher-order derivative of x.grad: tensor([30.], grad_fn=<MulBackward0>)
###Markdown
Autograd for Tensors - With the groundwork above, the automatic differentiation in torch.autograd below is essentially straightforward. - Torch provides the torch.autograd module for automatic differentiation; it exposes the following: - `['Variable', 'Function', 'backward', 'grad_mode']` Using backward - The backward provided by autograd is the static-function version of Tensor.backward; marginally less convenient to call, but it adds one extra option: ```python torch.autograd.backward( tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None)``` - Parameters: - tensors: the tensors to differentiate (they must have a grad_fn); - grad_tensors=None: the gradient tensors; - retain_graph=None: keep the computation graph; - create_graph=False: build a higher-order differentiation graph; higher-order derivatives can then be obtained by hand, or with the grad wrapper function below; - grad_variables=None: kept for compatibility with the old Variable API and no longer used in recent versions. - Example of torch.autograd.backward - the grad_variables parameter can no longer be used in the version I am running.
###Code
import torch
# leaf nodes
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
# intermediate nodes
xy = x + y
# xyz = xy * z
# xyz.backward(torch.Tensor([1, 1, 1]))
xyz = torch.dot(xy, z)
# ====================
def get_xy_grad(grad):
    print(F"gradient of xy: { grad }") # could also be saved to a global variable for later use
xy.register_hook(get_xy_grad)
# ====================
torch.autograd.backward(xyz)
print(x.grad, y.grad, z.grad)
print(xy.grad, y.grad, z.grad)
###Output
gradient of xy: tensor([1., 2., 3.])
tensor([1., 2., 3.]) tensor([1., 2., 3.]) tensor([4., 6., 8.])
None tensor([1., 2., 3.]) tensor([4., 6., 8.])
###Markdown
Using grad - grad computes the sum of gradients of the outputs with respect to the inputs; rather than returning every gradient, it differentiates with respect to particular input variables: $\dfrac{\partial{z}}{\partial{x}}$ - functionally it is similar to the hook mechanism. - Definition: ```python torch.autograd.grad( outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False)``` - Parameters: - outputs: list of output tensors (same role as tensors in backward); - inputs: list of input tensors (the tensors one would otherwise call register_hook on); - grad_outputs: list of gradient tensors (same role as grad_tensors in backward); - retain_graph: boolean; whether the computation graph is kept after the pass; - create_graph: boolean; build the computation graph of the gradient (the gradient of the gradient is the higher-order derivative); - only_inputs: boolean; whether the returned result covers only the tensors named in inputs rather than the derivatives of all leaf nodes. Defaults to True; the parameter is deprecated and no longer has any effect - to compute the derivatives of all leaf nodes, use the backward function instead; - allow_unused: boolean; checks whether every input is used to compute the outputs. With the default False, an error is raised if some input was not used; with True, unused inputs are allowed. If every input is used, True and False give identical results. - Example of grad
###Code
import torch
# leaf nodes
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
# intermediate nodes
xy = x + y
xyz = torch.dot(xy, z)
# ====================
gd = torch.autograd.grad(xyz, x, retain_graph=True)
print(x.grad, y.grad, z.grad)
print(xy.grad, y.grad, z.grad)
print(gd)
print(torch.autograd.grad(xyz, xy,retain_graph=True))
print(torch.autograd.grad(xyz, y,retain_graph=True))
print(torch.autograd.grad(xyz, z,retain_graph=True, allow_unused=True))
# ====================
###Output
None None None
None None None
(tensor([1., 2., 3.]),)
(tensor([1., 2., 3.]),)
(tensor([1., 2., 3.]),)
(tensor([4., 6., 8.]),)
###Markdown
Higher-order differentiation with grad - use create_graph to build the graph of the derivative, then differentiate that derivative again to obtain higher-order derivatives.
###Code
import torch
x = torch.Tensor([1])
x.requires_grad=True
z = x**6 # the function to differentiate
gd_1 = torch.autograd.grad(z, x, create_graph=True)
gd_2 = torch.autograd.grad(gd_1, x)
print(F"ไธ้ถๅฏผๆฐ๏ผ{gd_1},\nไบ้ถๅฏผๆฐ๏ผ {gd_2}")
###Output
first derivative: (tensor([6.], grad_fn=<MulBackward0>),),
second derivative: (tensor([30.]),)
###Markdown
Controlling differentiation - the set_grad_enabled class - the set_grad_enabled function switches gradient computation on and off - it is a context-manager object - Declaration: ```python torch.autograd.set_grad_enabled(mode)``` - Parameter: - mode: boolean; True enables gradient tracking, False disables it. Typical usage example
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
torch.autograd.set_grad_enabled(False) # acts as a global switch until re-enabled
xy = x + y
xyz = torch.dot(xy, z)
torch.autograd.set_grad_enabled(True)
print(xy.requires_grad, xyz.requires_grad, z.requires_grad)
###Output
False False True
###Markdown
Context-manager usage example
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
with torch.autograd.set_grad_enabled(False) as grad_ctx: # a local context
    xy = x + y # when the block ends, its effect ends automatically
xyz = torch.dot(xy, z)
print(xy.requires_grad, xyz.requires_grad, z.requires_grad)
###Output
False True True
###Markdown
The enable_grad class - this class works as a decorator, giving a more concise way to switch gradients on. - It is also a context manager; - as a decorator it is applied to functions: ```python torch.autograd.enable_grad()``` Decorator usage example
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
@ torch.autograd.enable_grad()
def func_xy(x, y):
    return x + y # the effect ends when the function returns
xy = func_xy(x, y)
xyz = torch.dot(xy, z)
print(xy.requires_grad, xyz.requires_grad, z.requires_grad)
###Output
True True True
###Markdown
Context-manager usage example
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
with torch.autograd.enable_grad():
xy = x + y
xyz = torch.dot(xy, z)
print(xy.requires_grad, xyz.requires_grad, z.requires_grad)
###Output
True True True
###Markdown
The no_grad class - used in exactly the same way as enable_grad, but with the opposite effect. - Note: - no_grad and enable_grad are function decorators, not class decorators. Decorator usage - applied to a whole function, which suits a function-oriented style; special cases inside the function can be handled by nesting contexts.
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
@ torch.autograd.no_grad()
def func_xy(x, y):
    return x + y # the effect ends when the function returns
xy = func_xy(x, y)
xyz = torch.dot(xy, z)
print(xy.requires_grad, xyz.requires_grad, z.requires_grad)
###Output
False True True
###Markdown
Context-manager usage - suited to use outside of functions
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
with torch.autograd.no_grad():
xy = x + y
xyz = torch.dot(xy, z)
print(xy.requires_grad, xyz.requires_grad, z.requires_grad)
###Output
False True True
###Markdown
Mixing no_grad and enable_grad - combining the two covers essentially any situation that comes up in development.
###Code
import torch
x = torch.Tensor([1, 2, 3])
y = torch.Tensor([3, 4, 5])
z = torch.Tensor([1, 2, 3])
x.requires_grad=True
y.requires_grad=True
z.requires_grad=True
with torch.autograd.no_grad():
xy = x + y
with torch.autograd.enable_grad():
z3 = z **3
xy2 = xy ** 2 # since xy has requires_grad=False, the whole expression does too
print(xy.requires_grad, z3.requires_grad, xy2.requires_grad)
###Output
False True False
|
notebooks/.ipynb_checkpoints/Intro_07_Xarray_and_plotting_with_cartopy-checkpoint.ipynb | ###Markdown
Plotting with [cartopy](https://scitools.org.uk/cartopy/docs/latest/)From Cartopy website:* Cartopy is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses.* Cartopy makes use of the powerful PROJ.4, NumPy and Shapely libraries and includes a programmatic interface built on top of Matplotlib for the creation of publication quality maps.* Key features of cartopy are its object oriented projection definitions, and its ability to transform points, lines, vectors, polygons and images between those projections.* You will find cartopy especially useful for large area / small scale data, where Cartesian assumptions of spherical data traditionally break down. If youโve ever experienced a singularity at the pole or a cut-off at the dateline, it is likely you will appreciate cartopyโs unique features!
###Code
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
###Output
_____no_output_____
###Markdown
Read in data using xarray- Read in the Saildrone USV file either from a local disc `xr.open_dataset(file)`- change latitude and longitude to lat and lon `.rename({'longitude':'lon','latitude':'lat'})`
###Code
file = '../data/saildrone-gen_5-antarctica_circumnavigation_2019-sd1020-20190119T040000-20190803T043000-1440_minutes-v1.1564857794963.nc'
ds_usv =
###Output
_____no_output_____
###Markdown
Open the dataset, mask land, plot result* `ds_sst = xr.open_dataset(url)`* use `ds_sst = ds_sst.where(ds_sst.mask==1)` to mask values equal to 1
###Code
#If you are offline use the first url
#url = '../data/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
url = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.2deg/v2/2011/305/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
ds_sst = xr.open_dataset(url)
ds_sst = ds_sst.where(ds_sst.mask == 1)
###Output
_____no_output_____
###Markdown
explore the in situ data and quickly plot using cartopy* first set up the axis with the projection you want: https://scitools.org.uk/cartopy/docs/latest/crs/projections.html* plot to that axis and tell it the projection your data is in Run the cell below and see what the image looks like. Then try adding in the lines below, one at a time, and re-run the cell to see what happens* set a background image `ax.stock_img()`* draw coastlines `ax.coastlines(resolution='50m')`* add a colorbar and label it `cax = plt.colorbar(cs1)` `cax.set_label('SST (K)')`
###Code
#for polar data, plot temperature
datamin = 0
datamax = 12
ax = plt.axes(projection=ccrs.SouthPolarStereo()) #here is where you set your axis projection
(ds_sst.analysed_sst-273.15).plot(ax=ax,
transform=ccrs.PlateCarree(), #set data projection
vmin=datamin, #data min
vmax=datamax) #data min
cs1 = ax.scatter(ds_usv.lon, ds_usv.lat,
transform=ccrs.PlateCarree(), #set data projection
s=10.0, #size for scatter point
c=ds_usv.TEMP_CTD_MEAN, #make the color of the scatter point equal to the USV temperature
edgecolor='none', #no edgecolor
cmap='jet', #colormap
vmin=datamin, #data min
vmax=datamax) #data max
ax.set_extent([-180, 180, -90, -45], crs=ccrs.PlateCarree()) #data projection
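# (optional additions from the notes above) uncomment to add context layers
# and a labeled colorbar:
# ax.stock_img()
# ax.coastlines(resolution='50m')
# cax = plt.colorbar(cs1)
# cax.set_label('SST (K)')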
###Output
_____no_output_____
###Markdown
Plot the salinity* Take the code from above but use `c=ds_usv.SAL_MEAN`* Run the code, what looks wrong?* Change `datamin` and `datamax`
###Code
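# A possible solution (hedged sketch): reuse the plot above, colored by
# salinity. Salinity spans a much narrower range than temperature, so the
# original limits of 0-12 look wrong; roughly 33-35 is a typical open-ocean
# range (an assumption).
datamin = 33.5
datamax = 35.0
ax = plt.axes(projection=ccrs.SouthPolarStereo())
cs1 = ax.scatter(ds_usv.lon, ds_usv.lat,
                 transform=ccrs.PlateCarree(),
                 s=10.0,
                 c=ds_usv.SAL_MEAN,  # color points by salinity instead of temperature
                 edgecolor='none',
                 cmap='jet',
                 vmin=datamin,
                 vmax=datamax)
ax.set_extent([-180, 180, -90, -45], crs=ccrs.PlateCarree())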
###Output
_____no_output_____
###Markdown
Let's plot some data off California* Read in data from a cruise along the California / Baja Coast* `ds_usv = xr.open_dataset(url).rename({'longitude':'lon','latitude':'lat'})`
###Code
#use the first URL if you are offline
#url = '../data/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
url = 'https://podaac-opendap.jpl.nasa.gov/opendap/hyrax/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
ds_usv = xr.open_dataset(url).rename({'longitude':'lon','latitude':'lat'})
###Output
_____no_output_____
###Markdown
* Plot the data using the code from above, but change the projection`ax = plt.axes(projection=ccrs.PlateCarree())`
###Code
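# A possible solution (hedged sketch): the same scatter as before, but on a
# PlateCarree projection suited to the California / Baja coast.
ax = plt.axes(projection=ccrs.PlateCarree())
cs1 = ax.scatter(ds_usv.lon, ds_usv.lat,
                 transform=ccrs.PlateCarree(),
                 s=10.0,
                 c=ds_usv.TEMP_CTD_MEAN,
                 edgecolor='none',
                 cmap='jet')
ax.coastlines(resolution='50m')
cax = plt.colorbar(cs1)
cax.set_label('SST (K)')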
###Output
_____no_output_____
###Markdown
Plotting with [cartopy](https://scitools.org.uk/cartopy/docs/latest/)From Cartopy website:* Cartopy is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses.* Cartopy makes use of the powerful PROJ.4, NumPy and Shapely libraries and includes a programmatic interface built on top of Matplotlib for the creation of publication quality maps.* Key features of cartopy are its object oriented projection definitions, and its ability to transform points, lines, vectors, polygons and images between those projections.* You will find cartopy especially useful for large area / small scale data, where Cartesian assumptions of spherical data traditionally break down. If youโve ever experienced a singularity at the pole or a cut-off at the dateline, it is likely you will appreciate cartopyโs unique features!
###Code
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
###Output
_____no_output_____
###Markdown
Read in data using xarray- Read in the Saildrone USV file either from a local disc with `xr.open_dataset()` - change latitude and longitude to lat and lon with `.rename({'longitude':'lon','latitude':'lat'})`
###Code
file = '../data/saildrone-gen_5-antarctica_circumnavigation_2019-sd1020-20190119T040000-20190803T043000-1440_minutes-v1.1564857794963.nc'
ds_usv = xr.open_dataset(file).rename({'longitude':'lon','latitude':'lat'})
###Output
_____no_output_____
###Markdown
Open the dataset, mask land, plot result * `xr.open_dataset` * use `.where` to mask values equal to 1
###Code
#If you are offline use the first url
#url = '../data/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
url = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.2deg/v2/2011/305/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc'
ds_sst = xr.open_dataset(url)
ds_sst = ds_sst.where(ds_sst.mask == 1)
###Output
_____no_output_____
###Markdown
explore the in situ data and quickly plot using cartopy* first set up the axis with the projection you want: https://scitools.org.uk/cartopy/docs/latest/crs/projections.html* plot to that axis and tell it the projection your data is in* set a background image `ax.stock_img()`* draw coastlines `ax.coastlines(resolution='50m')`* add a colorbar and label it `cax = plt.colorbar(cs1)` `cax.set_label('SST (K)')`
###Code
#for polar data, plot temperature
ax = plt.axes(projection=ccrs.SouthPolarStereo())
(ds_sst.analysed_sst-273.15).plot(ax=ax,
transform=ccrs.PlateCarree(),
vmin=0,
vmax=12)
cs1 = ax.scatter(ds_usv.lon, ds_usv.lat,
transform=ccrs.PlateCarree(),
s=10.0,
c=ds_usv.TEMP_CTD_MEAN,
edgecolor='none',
cmap='jet',
vmin=0,vmax=12)
ax.set_extent([-180, 180, -90, -45], crs=ccrs.PlateCarree())
###Output
_____no_output_____
###Markdown
Exercise!
###Code
# now you try to plot salinity: ds_usv.SAL_MEAN
###Output
_____no_output_____
###Markdown
Let's plot some data off of California* `.rename({'longitude':'lon','latitude':'lat'})`
###Code
#use the first URL if you are offline
#url = '../data/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
url = 'https://podaac-opendap.jpl.nasa.gov/opendap/hyrax/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc'
ds_usv = xr.open_dataset(url).rename({'longitude':'lon','latitude':'lat'})
###Output
_____no_output_____
###Markdown
Exercise!* for NON polar ds_usv data, use `ccrs.PlateCarree()` as your projection
###Code
# for the non-polar Baja data, plot temperature (a hedged completion: the axes
# and scatter must exist before set_extent can be called)
ax = plt.axes(projection=ccrs.PlateCarree())
cs1 = ax.scatter(ds_usv.lon, ds_usv.lat, transform=ccrs.PlateCarree(),
                 s=10.0, c=ds_usv.TEMP_CTD_MEAN, edgecolor='none', cmap='jet')
# now add an extent to your figure
lonmin,lonmax = ds_usv.lon.min().data-2,ds_usv.lon.max().data+2
latmin,latmax = ds_usv.lat.min().data-2,ds_usv.lat.max().data+2
ax.set_extent([lonmin,lonmax,latmin,latmax], crs=ccrs.PlateCarree())
###Output
_____no_output_____ |
lecture_05/04_loss_function.ipynb | ###Markdown
Defining the "error". We define an "error" between the network's outputs and the correct targets. There are many ways to define an error; here we cover the sum-of-squares error. Sum-of-squares error: a neural network has multiple outputs, each with a corresponding target, so the sum-of-squares error used in that setting is defined as $$ E = \frac{1}{2} \sum_{k=1}^n(y_k-t_k)^2 $$ where $y_k$ is an output, $t_k$ the corresponding target, and $n$ the number of neurons in the output layer. The factor $\frac{1}{2}$ is included to make the differentiated form easier to handle. Here we plot the individual squared errors before summation: $$E_k = \frac{1}{2}(y_k-t_k)^2$$ The code below shows how the squared error changes with $y$ when $t$ is 0.25, 0.5, and 0.75.
###Code
import numpy as np
import matplotlib.pyplot as plt
def square_error(y, t):
    return (y - t)**2/2 # squared error
y = np.linspace(0, 1)
ts = [0.25, 0.5, 0.75]
for t in ts:
plt.plot(y, square_error(y, t), label="t="+str(t))
plt.legend()
plt.xlabel("y")
plt.ylabel("Error")
plt.show()
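# note: with the 1/2 factor the derivative is dE_k/dy = y - t; the factor
# cancels against the exponent on differentiation, which is exactly why it
# is included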
###Output
_____no_output_____ |
Sentiment_Analysis_RNN.ipynb | ###Markdown
้่ฟRNNไฝฟ็จimdbๆฐๆฎ้ๅฎๆๆ
ๆๅ็ฑปไปปๅก
###Code
from __future__ import absolute_import,print_function,division,unicode_literals
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
import os
tf.__version__
tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_LOG_LEVEL'] = '2'
# hyperparameters
vocab_size = 10000
max_review_length = 80
embedding_dim = 100
units = 64
num_classes = 2
batch_size = 32
epochs = 10
# load the dataset
imdb = keras.datasets.imdb
(train_data,train_labels),(test_data,test_labels) = imdb.load_data(num_words = vocab_size)
train_data[0]
len(train_data)
# build the word index
word_index = imdb.get_word_index()
word_index = {k:(v + 3) for k ,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2
word_index["<UNSED>"] = 3
reversed_word_index = dict([(value,key) for (key,value) in word_index.items()])
def decode_review(text):
return ' '.join([reversed_word_index.get(i,'?') for i in text])
decode_review(train_data[0])
# hold out the last 5000 examples for validation before truncating the training set
val_data = train_data[20000:25000]
val_labels = train_labels[20000:25000]
train_data = train_data[:20000]
train_labels = train_labels[:20000]
# pad all sequences to the same length
train_data = keras.preprocessing.sequence.pad_sequences(train_data,value = word_index["<PAD>"],padding = 'post',maxlen = max_review_length )
test_data = keras.preprocessing.sequence.pad_sequences(test_data,value = word_index["<PAD>"],padding = 'post',maxlen = max_review_length )
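# sanity check: after padding, every review has length max_review_length,
# e.g. train_data.shape == (20000, 80)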
train_data[0]
# build the model
class RNNModel(keras.Model):
def __init__(self,units,num_classes,num_layers):
super(RNNModel,self).__init__()
self.units = units
self.embedding = keras.layers.Embedding(vocab_size,embedding_dim,input_length = max_review_length)
"""
self.lstm = keras.layers.LSTM(units,return_sequences = True)
self.lstm_2 = keras.layers.LSTM(units)
"""
self.lstm = keras.layers.Bidirectional(keras.layers.LSTM(self.units))
self.dense = keras.layers.Dense(1)
def call(self,x,training = None,mask = None):
x = self.embedding(x)
x = self.lstm(x)
x = self.dense(x)
return x
model = RNNModel(units,num_classes,num_layers=2)
model.compile(optimizer = keras.optimizers.Adam(0.001),
loss = keras.losses.BinaryCrossentropy(from_logits = True),
metrics = ['accuracy'])
model.fit(train_data,train_labels,
epochs = epochs,batch_size = batch_size,
validation_data = (test_data,test_labels))
model.summary()
result = model.evaluate(test_data,test_labels)
# output:loss: 0.6751 - accuracy: 0.8002
def GRU_Model():
model = keras.Sequential([
keras.layers.Embedding(input_dim = vocab_size,output_dim = 32,input_length = max_review_length),
keras.layers.GRU(32,return_sequences = True),
keras.layers.GRU(1,activation = 'sigmoid',return_sequences = False)
])
model.compile(optimizer = keras.optimizers.Adam(0.001),
                  loss = keras.losses.BinaryCrossentropy(from_logits = False), # the GRU output layer already applies a sigmoid
metrics = ['accuracy'])
return model
model = GRU_Model()
model.summary()
%%time
history = model.fit(train_data,train_labels,batch_size = batch_size,epochs = epochs,validation_split = 0.1)
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training','validation'], loc = 'upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
###Code
from google.colab import drive
drive.mount('/gdrive')
import numpy as np
# read txt files of reviews and labels
with open('/gdrive/My Drive/Colab Notebooks/sentiment_analysis/data/reviews.txt', 'r') as f:
reviews = f.read()
with open('/gdrive/My Drive/Colab Notebooks/sentiment_analysis/data/labels.txt', 'r') as f:
labels = f.read()
print(reviews[:200])
print()
print(labels[:26])
#preprocess and tokenize text data
#convert to lowercase
#clean data: remove punctuation
from string import punctuation
#string.punctuation python 3.0
print(punctuation)
reviews = reviews.lower()
clean_reviews = ''.join([c for c in reviews if c not in punctuation])
#clean data: remove \n chars that separates reviews from each-other
# split clean reviews by \n and join them again
reviews_split = clean_reviews.split('\n')
clean_reviews = ' '.join(reviews_split)
#create list of all words in cleaned reviews and print some of them
words = clean_reviews.split()
words[:20]
#encode each word and label as int
# create a dict that maps each unique word to int vals
# subclass of dict: counts the hashtable object
#creates a dict that maps obj to the n of times they apear in the input
from collections import Counter
#create dict of words where most frequent words are assigned lowest int vals
w_counts = Counter(words)
w_sorted = sorted(w_counts, key=w_counts.get, reverse=True)
# vocab = sorted(counts, key=counts.get, reverse=True)
#create dict and assign 1 to most frequent word
w_to_int = {word: i for i, word in enumerate(w_sorted, 1)}
# create a list that will contain all int values assigned to each word for each review
reviews_ints = []
# get each review in reviews previously splitted by \n
for review in reviews_split:
#then for each word in this review get the int val from the w_to_int dict
#and append it to the reviews_ints.
#Now each word in each review is stored as int inside reviews_ints
reviews_ints.append([w_to_int[word] for word in review.split()])
###Output
_____no_output_____
###Markdown
Test data preprocessing
###Code
# stats about vocabulary
print('Unique words: ', len((w_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
###Output
Unique words: 74072
Tokenized review:
[[21025, 308, 6, 3, 1050, 207, 8, 2138, 32, 1, 171, 57, 15, 49, 81, 5785, 44, 382, 110, 140, 15, 5194, 60, 154, 9, 1, 4975, 5852, 475, 71, 5, 260, 12, 21025, 308, 13, 1978, 6, 74, 2395, 5, 613, 73, 6, 5194, 1, 24103, 5, 1983, 10166, 1, 5786, 1499, 36, 51, 66, 204, 145, 67, 1199, 5194, 19869, 1, 37442, 4, 1, 221, 883, 31, 2988, 71, 4, 1, 5787, 10, 686, 2, 67, 1499, 54, 10, 216, 1, 383, 9, 62, 3, 1406, 3686, 783, 5, 3483, 180, 1, 382, 10, 1212, 13583, 32, 308, 3, 349, 341, 2913, 10, 143, 127, 5, 7690, 30, 4, 129, 5194, 1406, 2326, 5, 21025, 308, 10, 528, 12, 109, 1448, 4, 60, 543, 102, 12, 21025, 308, 6, 227, 4146, 48, 3, 2211, 12, 8, 215, 23]]
###Markdown
Convert labelsLabels have values positive and negative that should be converted to 1 and 0 respectively
###Code
#convert labels to be all 1 and 0
# 1=positive, 0=negative label conversion
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
###Output
_____no_output_____
###Markdown
Remove Outliers. Some of the reviews are too long or too short. The model requires the length of input data to be consistent. So extremely long or short reviews should be eliminated, and the rest of the reviews should either be truncated or padded with new values to reach the appropriate length.
###Code
# check for outliers in reviews
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
#remove 0-length reviews and respective labels
print('Number of reviews before removing outliers: ', len(reviews_ints))
# get indices of any reviews with length 0
non_zero_idx = [i for i, review in enumerate(reviews_ints) if len(review) != 0]
# remove 0-length reviews and their labels
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
encoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints))
#truncate long reviews or pad the short ones with columns of 0 on the left
def pad_reviews(reviews_ints, r_length):
# create a 0-filled 2D array with num_rows=num_reviews & num_cols=r_length
padded_r = np.zeros((len(reviews_ints), r_length), dtype=int)
# for each review,
for i, review_ints in enumerate(reviews_ints):
# fill each row of the 0-filled 2D array with the encoded int values
# of the review. To conserve the 0 values on the left of each row
# when the review is too short start filling from the end
# if the review is too long, just truncated up to r_length
padded_r[i, -len(review_ints):] = np.array(review_ints)[:r_length]
return padded_r
###Output
_____no_output_____
###Markdown
Test implementation
###Code
# Input size for each review
r_length = 200
features = pad_reviews(reviews_ints, r_length=r_length)
assert len(features)==len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0])==r_length, "Each feature row should contain seq_length values."
# print first 10 word values of the first 20 batches
print(features[:20,:10])
###Output
[[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[22382 42 46418 15 706 17139 3389 47 77 35]
[ 4505 505 15 3 3342 162 8312 1652 6 4819]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 54 10 14 116 60 798 552 71 364 5]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 1 330 578 34 3 162 748 2731 9 325]
[ 9 11 10171 5305 1946 689 444 22 280 673]
[ 0 0 0 0 0 0 0 0 0 0]
[ 1 307 10399 2069 1565 6202 6528 3288 17946 10628]
[ 0 0 0 0 0 0 0 0 0 0]
[ 21 122 2069 1565 515 8181 88 6 1325 1182]
[ 1 20 6 76 40 6 58 81 95 5]]
###Markdown
Split data in training, validation and test set
###Code
# 0.8 train - 0.1 validation - 0.1 test
split_factor = 0.8
split_index = int(len(features) * split_factor)
train_data, rest_of_data = features[:split_index], features[split_index:]
train_y, rest_of_data_y = encoded_labels[:split_index], encoded_labels[split_index:]
test_index = int(len(rest_of_data) * 0.5)
valid_data, test_data = rest_of_data[:test_index], rest_of_data[test_index:]
val_y, test_y = rest_of_data_y[:test_index], rest_of_data_y[test_index:]
print("Train set: \t\t{}".format(train_data.shape),
"\nValidation set: \t{}".format(valid_data.shape),
"\nTest set: \t\t{}".format(test_data.shape))
import torch
from torch.utils.data import TensorDataset, DataLoader
batch_size = 50
# convert to Tensor
train_set = TensorDataset(torch.from_numpy(train_data), torch.from_numpy(train_y))
valid_set = TensorDataset(torch.from_numpy(valid_data), torch.from_numpy(val_y))
test_set = TensorDataset(torch.from_numpy(test_data), torch.from_numpy(test_y))
# load in batches
train_loader = DataLoader(train_set, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_set, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_set, shuffle=True, batch_size=batch_size)
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)
print('Sample input size: ', sample_x.size()) # batch_size, seq_length
print('Sample input: \n', sample_x)
print()
print('Sample label size: ', sample_y.size()) # batch_size
print('Sample label: \n', sample_y)
###Output
Sample input size: torch.Size([50, 200])
Sample input:
tensor([[ 0, 0, 0, ..., 4, 11, 18],
[ 281, 21, 1236, ..., 9, 11, 8],
[ 11, 18, 14, ..., 82, 2, 11],
...,
[ 54, 10, 14, ..., 93, 8, 61],
[ 0, 0, 0, ..., 164, 104, 544],
[7785, 743, 1, ..., 6, 7785, 743]])
Sample label size: torch.Size([50])
Sample label:
tensor([0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0,
1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0,
0, 1])
###Markdown
Create model
###Code
# Check if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
import torch.nn as nn
class SentimentNet(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
super(SentimentNet, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=drop_prob, batch_first=True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, x, hidden):
batch_size = x.size(0)
x = x.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1] # get last batch of labels
return sig_out, hidden
def init_hidden(self, batch_size):
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
# Instantiate the model w/ hyperparams
vocab_size = len(w_to_int) + 1
output_size = 1
embedding_dim = 400
hidden_dim = 256
n_layers = 2
net = SentimentNet(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
###Output
SentimentNet(
(embedding): Embedding(74073, 400)
(lstm): LSTM(400, 256, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.3)
(fc): Linear(in_features=256, out_features=1, bias=True)
(sig): Sigmoid()
)
###Markdown
Training
###Code
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
epochs = 4
counter = 0
print_every = 100
# gradient clipping
clip=5
if(train_on_gpu):
net.cuda()
net.train()
for e in range(epochs):
h = net.init_hidden(batch_size)
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
h = tuple([each.data for each in h])
net.zero_grad()
output, h = net(inputs, h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
if counter % print_every == 0:
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
###Output
Epoch: 1/4... Step: 100... Loss: 0.601795... Val Loss: 0.647134
Epoch: 1/4... Step: 200... Loss: 0.629885... Val Loss: 0.610831
Epoch: 1/4... Step: 300... Loss: 0.617199... Val Loss: 0.701001
Epoch: 1/4... Step: 400... Loss: 0.668857... Val Loss: 0.519369
Epoch: 2/4... Step: 500... Loss: 0.467429... Val Loss: 0.536550
Epoch: 2/4... Step: 600... Loss: 0.265516... Val Loss: 0.495581
Epoch: 2/4... Step: 700... Loss: 0.415150... Val Loss: 0.453255
Epoch: 2/4... Step: 800... Loss: 0.694688... Val Loss: 0.489844
Epoch: 3/4... Step: 900... Loss: 0.303322... Val Loss: 0.454577
Epoch: 3/4... Step: 1000... Loss: 0.285727... Val Loss: 0.570366
Epoch: 3/4... Step: 1100... Loss: 0.252437... Val Loss: 0.454075
Epoch: 3/4... Step: 1200... Loss: 0.148807... Val Loss: 0.414570
Epoch: 4/4... Step: 1300... Loss: 0.201714... Val Loss: 0.465686
Epoch: 4/4... Step: 1400... Loss: 0.129139... Val Loss: 0.484931
Epoch: 4/4... Step: 1500... Loss: 0.211350... Val Loss: 0.526655
Epoch: 4/4... Step: 1600... Loss: 0.112922... Val Loss: 0.533203
###Markdown
Testing
###Code
test_losses = []
num_correct = 0
h = net.init_hidden(batch_size)
net.eval()
for inputs, labels in test_loader:
h = tuple([each.data for each in h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, h = net(inputs, h)
test_loss = criterion(output.squeeze(), labels.float())
test_losses.append(test_loss.item())
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze())
correct_tensor = pred.eq(labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
print("Test loss: {:.3f}".format(np.mean(test_losses)))
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc))
# negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
from string import punctuation
def tokenize_review(test_review):
test_review = test_review.lower()
test_text = ''.join([c for c in test_review if c not in punctuation])
test_words = test_text.split()
test_ints = []
test_ints.append([w_to_int[word] for word in test_words])
return test_ints
test_ints = tokenize_review(test_review_neg)
print(test_ints)
seq_length=200
features = pad_reviews(test_ints, seq_length)
print(features)
feature_tensor = torch.from_numpy(features)
print(feature_tensor.size())
def predict(net, test_review, sequence_length=200):
net.eval()
test_ints = tokenize_review(test_review)
seq_length=sequence_length
features = pad_reviews(test_ints, seq_length)
feature_tensor = torch.from_numpy(features)
batch_size = feature_tensor.size(0)
h = net.init_hidden(batch_size)
if(train_on_gpu):
feature_tensor = feature_tensor.cuda()
output, h = net(feature_tensor, h)
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze())
print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))
if(pred.item()==1):
print("Positive review detected!")
else:
print("Negative review detected.")
# positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'
seq_length=200
predict(net, test_review_neg, seq_length)
###Output
Prediction value, pre-rounding: 0.007705
Negative review detected.
###Markdown
Determine whether a movie review carries positive or negative sentiment. Each (unique) word is encoded by a number representing how common it is in the dataset.
###Code
%tensorflow_version 2.x
from keras.datasets import imdb
from keras.preprocessing import sequence
import tensorflow as tf
import os
import numpy as np
import keras
# set some parameters
VOCAB_SIZE = 88584
MAXLEN = 250 # each movie review has a different length (shorter ones we pad with 0, longer ones we truncate)
BATCH_SIZE = 64
(train_data,train_labels),(test_data,test_label) = imdb.load_data(num_words=VOCAB_SIZE)
###Output
_____no_output_____
###Markdown
* **lets have a look at a single movie review**
###Code
train_data[0]
len(train_data[0])
print(train_labels[0])
print(train_labels[2])
print(train_labels[10])
###Output
1
0
1
###Markdown
more preprocessing: making the lengths uniform* MAXLEN = 250 words* if the length is shorter than 250 we add 0's to the left (padding)* if the length is longer than 250 words we cut off the extras
###Code
train_data = sequence.pad_sequences(train_data,MAXLEN)
test_data = sequence.pad_sequences(test_data,MAXLEN)
print(train_data[1])
###Output
[ 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 1 194 1153 194 8255 78 228 5 6 1463 4369
5012 134 26 4 715 8 118 1634 14 394 20 13
119 954 189 102 5 207 110 3103 21 14 69 188
8 30 23 7 4 249 126 93 4 114 9 2300
1523 5 647 4 116 9 35 8163 4 229 9 340
1322 4 118 9 4 130 4901 19 4 1002 5 89
29 952 46 37 4 455 9 45 43 38 1543 1905
398 4 1649 26 6853 5 163 11 3215 10156 4 1153
9 194 775 7 8255 11596 349 2637 148 605 15358 8003
15 123 125 68 23141 6853 15 349 165 4362 98 5
4 228 9 43 36893 1157 15 299 120 5 120 174
11 220 175 136 50 9 4373 228 8255 5 25249 656
245 2350 5 4 9837 131 152 491 18 46151 32 7464
1212 14 9 6 371 78 22 625 64 1382 9 8
168 145 23 4 1690 15 16 4 1355 5 28 6
52 154 462 33 89 78 285 16 145 95]
###Markdown
CREATING THE MODEL
###Code
model = tf.keras.Sequential([
tf.keras.layers.Embedding(VOCAB_SIZE,32),
tf.keras.layers.LSTM(32),
tf.keras.layers.Dense(1,activation='sigmoid')
])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 32) 2834688
_________________________________________________________________
lstm (LSTM) (None, 32) 8320
_________________________________________________________________
dense (Dense) (None, 1) 33
=================================================================
Total params: 2,843,041
Trainable params: 2,843,041
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training
###Code
model.compile(loss = 'binary_crossentropy',optimizer = 'rmsprop',metrics = ['acc'] )
history = model.fit(train_data,train_labels,epochs=10,validation_split=0.2)
###Output
Epoch 1/10
625/625 [==============================] - 40s 64ms/step - loss: 0.4164 - acc: 0.8101 - val_loss: 0.2961 - val_acc: 0.8776
Epoch 2/10
625/625 [==============================] - 40s 63ms/step - loss: 0.2324 - acc: 0.9110 - val_loss: 0.2978 - val_acc: 0.8824
Epoch 3/10
625/625 [==============================] - 39s 63ms/step - loss: 0.1802 - acc: 0.9330 - val_loss: 0.2694 - val_acc: 0.8922
Epoch 4/10
625/625 [==============================] - 39s 63ms/step - loss: 0.1479 - acc: 0.9488 - val_loss: 0.3084 - val_acc: 0.8874
Epoch 5/10
625/625 [==============================] - 39s 62ms/step - loss: 0.1261 - acc: 0.9570 - val_loss: 0.2888 - val_acc: 0.8800
Epoch 6/10
625/625 [==============================] - 39s 63ms/step - loss: 0.1052 - acc: 0.9641 - val_loss: 0.3012 - val_acc: 0.8834
Epoch 7/10
625/625 [==============================] - 39s 62ms/step - loss: 0.0942 - acc: 0.9689 - val_loss: 0.3348 - val_acc: 0.8888
Epoch 8/10
625/625 [==============================] - 39s 63ms/step - loss: 0.0823 - acc: 0.9724 - val_loss: 0.3839 - val_acc: 0.8718
Epoch 9/10
625/625 [==============================] - 39s 63ms/step - loss: 0.0720 - acc: 0.9770 - val_loss: 0.3737 - val_acc: 0.8852
Epoch 10/10
625/625 [==============================] - 39s 63ms/step - loss: 0.0631 - acc: 0.9791 - val_loss: 0.3957 - val_acc: 0.8836
###Markdown
Evaluating Model on Test dataset
###Code
results = model.evaluate(test_data,test_label)
print(results)
###Output
782/782 [==============================] - 13s 16ms/step - loss: 0.4738 - acc: 0.8568
[0.4737767279148102, 0.8568000197410583]
###Markdown
Predictions * get the imdb Dictionary (with those words)* function that chops the text into words only* loop through the words * check if the word is in the dictionary * take its integer representation in the dictionary and place it into a list called (tokens)* if a word is not in the dictionary then in the tokens list we replace it with 0
###Code
word_index = imdb.get_word_index()
def encode_text(text):
# chop the review(sentence) into tokens(individual words)
tokens = keras.preprocessing.text.text_to_word_sequence(text)
tokens = [word_index[word] if word in word_index else 0 for word in tokens]
return sequence.pad_sequences([tokens],MAXLEN)[0]
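# caveat: imdb.load_data shifts real word indices by 3 (0-3 are reserved for
# padding/start/unknown/unused tokens), so for exact agreement with the
# training encoding each looked-up index would need a +3 offset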
text = 'this is a movie i like'
encoded = encode_text(text)
print(encoded)
###Output
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 11 6 3 17 10 37]
###Markdown
Decode Function
###Code
reverse_word_index = {value:key for (key,value) in word_index.items()}
# loops through all words in vocab get key(integer representation) and value(word)
def decode_integer(integers):
PAD = 0
text = " "
for num in integers:
if num != 0:
text += reverse_word_index[num] + " " # gets the word from the dictionary concats it with a space the concat it with (text) to make a sentence
return text[:-1] # return everything in the sentence except the last character (which is a space)
print(decode_integer(encoded))
###Output
this is a movie i like
###Markdown
Prediction Function
###Code
def predict(text):
encoded_text = encode_text(text)
pred = np.zeros((1,250)) # blank nupy array filled with 0's and of 250 length(what model expects)
pred[0] = encoded_text # pick first array in the pred (array of arrays) and (set it to the array from encoding function)
results = model.predict(pred)
print(results[0])
# Example of positive reviews and negative review
positive_review = "i really really love it its awesome"
negative_review = " i hate that movie it sucks"
def predict(text):
encoded_text = encode_text(text)
pred = np.zeros((1,250)) # blank nupy array filled with 0's and of 250 length(what model expects)
pred[0] = encoded_text # pick first array in the pred (array of arrays) and (set it to the array from encoding function)
results = model.predict(pred)
review_status = results[0][0]
if review_status > 0.5:
print('positive sentiment')
else:
print('negative sentiment')
print(predict(positive_review))
print(predict(negative_review))
###Output
_____no_output_____ |
docs/examples/quickstart.ipynb | ###Markdown
Quickstart Creating an isothermFirst, we need to import the package.
###Code
import pygaps
###Output
_____no_output_____
###Markdown
The backbone of the framework is the PointIsotherm class. This class stores the isotherm data alongside isotherm properties such as the material, adsorbate and temperature, as well as providing easy interaction with the framework calculations. There are several ways to create a PointIsotherm object: - directly from arrays - from a pandas.DataFrame - parsing json, csv files, or excel files - from an sqlite database. See the [isotherm creation](../manual/isotherm.rst) part of the documentation for a more in-depth explanation. For the simplest method, the data can be passed in as arrays of *pressure* and *loading*. There are four other required parameters: the material name, the material batch or ID, the adsorbate used and the temperature (in K) at which the data was recorded.
###Code
isotherm = pygaps.PointIsotherm(
pressure=[0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.35, 0.25, 0.15, 0.05],
loading=[0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.4, 0.3, 0.15, 0.05],
material= 'Carbon X1',
adsorbate = 'N2',
temperature = 77,
)
isotherm.plot()
###Output
WARNING: 'pressure_unit' was not specified, assumed as 'bar'
WARNING: 'pressure_mode' was not specified, assumed as 'absolute'
WARNING: 'adsorbent_unit' was not specified, assumed as 'g'
WARNING: 'adsorbent_basis' was not specified, assumed as 'mass'
WARNING: 'loading_unit' was not specified, assumed as 'mmol'
WARNING: 'loading_basis' was not specified, assumed as 'molar'
###Markdown
Unless specified, the loading is read in *mmol/g* and the pressure is read in *bar*, although these settings can be changed. Read more about it in the [units section](../manual/units.rst) of the manual. The isotherm can also have other properties which are passed in at creation. Alternatively, the data can be passed in the form of a pandas.DataFrame. This allows for other complementary data, such as isosteric enthalpy, XRD peak intensity, or other simultaneous measurements corresponding to each point to be saved. The DataFrame should have at least two columns: the pressures at which each point was recorded, and the loadings for each point. The `loading_key` and `pressure_key` parameters specify which columns in the DataFrame contain the loading and pressure, respectively. The `other_keys` parameter should be a list of other columns to be saved.
###Code
import pandas
data = pandas.DataFrame({
'pressure': [0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.35, 0.25, 0.15, 0.05],
'loading': [0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.4, 0.3, 0.15, 0.05],
'isosteric_enthalpy (kJ/mol)': [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
})
isotherm = pygaps.PointIsotherm(
isotherm_data=data,
pressure_key='pressure',
loading_key='loading',
other_keys=['isosteric_enthalpy (kJ/mol)'],
material= 'Carbon X1',
adsorbate = 'N2',
temperature = 77,
pressure_unit='bar',
pressure_mode='absolute',
loading_unit='mmol',
loading_basis='molar',
adsorbent_unit='g',
adsorbent_basis='mass',
material_batch = 'Batch 1',
iso_type='characterisation'
)
isotherm.plot()
###Output
_____no_output_____
###Markdown
pyGAPS also comes with a variety of parsers. Here we can use the JSON parser to get an isotherm previously saved on disk. For more info on parsing to and from various formats see the [manual](../manual/parsing.rst) and the associated [examples](../examples/parsing.ipynb).
###Code
with open(r'data/carbon_x1_n2.json') as f:
isotherm = pygaps.isotherm_from_json(f.read())
###Output
_____no_output_____
###Markdown
To see a summary of the isotherm as well as a graph, use the included function:
###Code
isotherm.print_info()
###Output
Material: Takeda 5A
Adsorbate: nitrogen
Temperature: 77.355K
iso_type: Isotherme
material_batch: Test
Units:
Uptake in: mmol/g
Pressure in: bar
Other properties:
is_real: True
lab: MADIREL
machine: Triflex
t_act: 200.0
user: PI
###Markdown
Now that the PointIsotherm is created, we are ready to do some analysis.--- Isotherm analysisThe framework has several isotherm analysis tools which are commonly used to characteriseporous materials such as:- BET surface area- the t-plot method / alpha s method- mesoporous PSD (pore size distribution) calculations- microporous PSD calculations- DFT kernel fitting PSD methods- isosteric enthalpy of adsorption calculation- etc.All methods work directly with generated Isotherms. For example, to perform a tplot analysis and get the results in a dictionary use:
###Code
result_dict = pygaps.t_plot(isotherm)
import pprint
pprint.pprint(result_dict)
###Output
{'results': [{'adsorbed_volume': 0.4493471225837101,
'area': 99.54915759758686,
'corr_coef': 0.9996658295304233,
'intercept': 0.012929909242021878,
'section': [84, 85, 86, 87, 88, 89, 90],
'slope': 0.0028645150000192595}],
't_curve': array([0.14381104, 0.14800322, 0.1525095 , 0.15712503, 0.1617626 ,
0.16612841, 0.17033488, 0.17458578, 0.17879119, 0.18306956,
0.18764848, 0.19283516, 0.19881473, 0.2058225 , 0.21395749,
0.2228623 , 0.23213447, 0.2411563 , 0.24949659, 0.25634201,
0.2635719 , 0.27002947, 0.27633547, 0.28229453, 0.28784398,
0.29315681, 0.29819119, 0.30301872, 0.30762151, 0.31210773,
0.31641915, 0.32068381, 0.32481658, 0.32886821, 0.33277497,
0.33761078, 0.34138501, 0.34505614, 0.34870159, 0.35228919,
0.35587619, 0.35917214, 0.36264598, 0.36618179, 0.36956969,
0.37295932, 0.37630582, 0.37957513, 0.38277985, 0.38608229,
0.3892784 , 0.3924393 , 0.39566979, 0.39876923, 0.40194987,
0.40514492, 0.40824114, 0.41138787, 0.41450379, 0.41759906,
0.42072338, 0.42387825, 0.42691471, 0.43000525, 0.44357547,
0.46150731, 0.47647445, 0.49286816, 0.50812087, 0.52341251,
0.53937129, 0.55659203, 0.57281485, 0.5897311 , 0.609567 ,
0.62665975, 0.64822743, 0.66907008, 0.69046915, 0.71246898,
0.73767931, 0.76126425, 0.79092372, 0.82052677, 0.85273827,
0.88701466, 0.92485731, 0.96660227, 1.01333614, 1.06514197,
1.1237298 , 1.19133932, 1.27032012, 1.36103511, 1.45572245,
1.55317729])}
###Markdown
If in an interactive environment, such as iPython or Jupyter, it is useful to see the details of the calculation directly. To do this, increase the verbosity of the method and use matplotlib to display extra information, including graphs:
###Code
import matplotlib.pyplot as plt
result_dict = pygaps.area_BET(isotherm, verbose=True)
plt.show()
###Output
BET surface area: a = 1111 m2/g
Minimum pressure point chosen is 0.01 and maximum is 0.093
The slope of the BET fit: s = 87.602
The intercept of the BET fit: i = 0.238
The BET constant is: C = 368
Amount for a monolayer: n = 0.01138 mol/g
###Markdown
Depending on the method, different parameters can be passed to tweak the way the calculations are performed. For example, if a mesoporous size distribution is desired using the Dollimore-Heal method on the desorption branch of the isotherm, assuming the pores are cylindrical and that adsorbate thickness can be described by a Halsey-type thickness curve, the code will look like:
###Code
result_dict = pygaps.psd_mesoporous(
isotherm,
psd_model='DH',
branch='des',
pore_geometry='cylinder',
thickness_model='Halsey',
verbose=True,
)
plt.show()
###Output
_____no_output_____
###Markdown
For more information on how to use each method, check the [manual](../manual/characterisation.rst) and the associated [examples](../examples/characterisation.rst).--- Isotherm modellingThe framework comes with functionality to fit point isotherm data with commonisotherm models such as Henry, Langmuir, Temkin, Virial etc.The modelling is done through the ModelIsotherm class. The class is similar to thePointIsotherm class, and shares the same ability to store parameters. However, instead ofdata, it stores model coefficients for the model it's describing.To create a ModelIsotherm, the same parameters dictionary / pandas DataFrame procedure canbe used. But, assuming we've already created a PointIsotherm object, we can use it to instantiatethe ModelIsotherm instead. To do this we use the class method:
###Code
model_iso = pygaps.ModelIsotherm.from_pointisotherm(isotherm, model='BET', verbose=True)
###Output
Attempting to model using BET
Model BET success, RMSE is 1.086
###Markdown
A minimisation procedure will then attempt to fit the model's parameters to the isotherm points. If successful, the ModelIsotherm is returned. If the user wants to screen several models at once, the class method can also be passed a parameter which allows the ModelIsotherm to select the best fitting model. Below, we will attempt to fit several simple available models, and the one with the best RMSE will be returned. Depending on the models requested, this method may take significant processing time.
###Code
model_iso = pygaps.ModelIsotherm.from_pointisotherm(isotherm, guess_model='all', verbose=True)
###Output
Attempting to model using Henry
Model Henry success, RMSE is 7.419
Attempting to model using Langmuir
Model Langmuir success, RMSE is 2.120
Attempting to model using DSLangmuir
Model DSLangmuir success, RMSE is 0.846
Attempting to model using DR
Model DR success, RMSE is 1.315
Attempting to model using Freundlich
Model Freundlich success, RMSE is 0.738
Attempting to model using Quadratic
Model Quadratic success, RMSE is 0.848
Attempting to model using BET
Model BET success, RMSE is 1.086
Attempting to model using TemkinApprox
Model TemkinApprox success, RMSE is 2.046
Attempting to model using Toth
Model Toth success, RMSE is 0.757
Attempting to model using Jensen-Seaton
Model Jensen-Seaton success, RMSE is 0.533
Best model fit is Jensen-Seaton
###Markdown
More advanced settings can also be specified, such as the optimisation model to be used in the optimisation routine or the initial parameter guess. For in-depth examples and discussion check the [manual](../manual/modelling.rst) and the associated [examples](../examples/modelling.rst). To print the model parameters use the same print method as before.
###Code
# Prints isotherm parameters and model info
model_iso.print_info()
###Output
Material: Takeda 5A
Adsorbate: nitrogen
Temperature: 77.355K
iso_type: Isotherme
material_batch: Test
Units:
Uptake in: mmol/g
Pressure in: bar
Other properties:
branch: ads
plot_fit: False
is_real: False
lab: MADIREL
machine: Triflex
t_act: 200.0
user: PI
Jensen-Seaton isotherm model.
RMSE = 0.5325
Model parameters:
K = 544353796.820126
a = 16.729588
b = 0.335824
c = 0.180727
Model applicable range:
Pressure range: 0.00 - 0.96
Loading range: 0.51 - 21.59
###Markdown
We can calculate the loading at any pressure using the internal model by using the ``loading_at`` function.
###Code
# Returns the loading at 1 bar calculated with the model
model_iso.loading_at(1.0)
# Returns the loading at three pressure points in the 0-1 bar range, calculated with the model
pressure = [0.1, 0.5, 1]
model_iso.loading_at(pressure)
###Output
_____no_output_____
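###Markdown
The inverse query is also available: ModelIsotherm objects expose a ``pressure_at`` function which returns the pressure required to reach a given loading. A minimal sketch, assuming the requested loading lies inside the model's applicable range:
###Code
# Returns the pressure (in the isotherm's pressure units) required
# to reach a loading of 10 mmol/g, according to the fitted model
model_iso.pressure_at(10.0)
###Output
_____no_output_____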
###Markdown
Plotting pyGAPS makes graphing both PointIsotherm and ModelIsotherm objects easy to facilitate visual observations, inclusion in publications and consistency. Plotting an isotherm is as simple as:
###Code
import matplotlib.pyplot as plt
pygaps.plot_iso([isotherm, model_iso], branch='ads')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Recommenders: Quickstart In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user. Import TFRS First, install and import TFRS:
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
###Output
_____no_output_____
###Markdown
Read the data
###Code
# Ratings data.
ratings = tfds.load('movie_lens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movie_lens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
###Output
_____no_output_____
###Markdown
Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
###Code
user_ids_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup()
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup()
movie_titles_vocabulary.adapt(movies)
###Output
_____no_output_____
###Markdown
Define a model We can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:
###Code
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
###Output
_____no_output_____
###Markdown
Define the two models and the retrieval task.
###Code
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
###Output
_____no_output_____
###Markdown
Fit and evaluate it. Create the model, train it, and generate predictions:
###Code
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.ann.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Recommenders: Quickstart In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user. Import TFRS First, install and import TFRS:
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
###Output
_____no_output_____
###Markdown
Read the data
###Code
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
###Output
_____no_output_____
###Markdown
Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
###Code
user_ids_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
###Output
_____no_output_____
###Markdown
Define a model We can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:
###Code
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
###Output
_____no_output_____
###Markdown
Define the two models and the retrieval task.
###Code
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
###Output
_____no_output_____
###Markdown
Fit and evaluate it. Create the model, train it, and generate predictions:
###Code
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Recommenders: Quickstart In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user. Import TFRS First, install and import TFRS:
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
###Output
_____no_output_____
###Markdown
Read the data
###Code
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
###Output
_____no_output_____
###Markdown
Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
###Code
user_ids_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
###Output
_____no_output_____
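###Markdown
As a quick sanity check (not part of the original quickstart), an adapted `StringLookup` layer can be called directly on string tensors to inspect the integer indices it assigns:
###Code
# Look up the integer index assigned to a raw user id string
user_ids_vocabulary(tf.constant(["42"]))
###Output
_____no_output_____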
###Markdown
Define a model We can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:
###Code
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
###Output
_____no_output_____
###Markdown
Define the two models and the retrieval task.
###Code
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
###Output
_____no_output_____
###Markdown
Fit and evaluate it. Create the model, train it, and generate predictions:
###Code
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
movies.batch(100).map(lambda title: (title, model.movie_model(title))))
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
###Output
_____no_output_____
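###Markdown
For serving, the fitted retrieval index can be exported with the standard SavedModel tooling. A minimal sketch (not part of the original quickstart), assuming a writable temporary directory; the index has already been built by the query above, so it can be saved and reloaded as-is:
###Code
import os
import tempfile

# Export the brute-force index as a SavedModel, then load it back and query it
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "index")
    tf.saved_model.save(index, path)
    loaded = tf.saved_model.load(path)
    _, titles = loaded(np.array(["42"]))
    print(f"Top 3 recommendations after reload: {titles[0, :3]}")
###Output
_____no_output_____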
###Markdown
Quickstart This document should get you started with pyGAPS, providing several common operations like creating/reading isotherms, running characterisation routines, model fitting and data export/plotting. You can download this code as a Jupyter Notebook and run it for yourself! See banner at the top for the link. Creating an isotherm First, we need to import the package.
###Code
import pygaps as pg
###Output
_____no_output_____
###Markdown
The backbone of the framework is the PointIsotherm class. This class stores the isotherm data alongside isotherm properties such as the material, adsorbate and temperature, as well as providing easy interaction with the framework calculations. There are several ways to create a PointIsotherm object: - directly from arrays - from a `pandas.DataFrame` - parsing AIF, json, csv, or excel files - loading manufacturer reports (Micromeritics, Belsorp, 3P, Quantachrome, etc.) - loading from an sqlite database. See the [isotherm creation](../manual/isotherm.rst) part of the documentation for a more in-depth explanation. For the simplest method, the data can be passed in as arrays of `pressure` and `loading`. There are three other required metadata parameters: the adsorbent material name, the adsorbate molecule used, and the temperature at which the data was recorded.
###Code
isotherm = pg.PointIsotherm(
pressure=[0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.35, 0.25, 0.15, 0.05],
loading=[0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.4, 0.3, 0.15, 0.05],
material='Carbon X1',
adsorbate='N2',
temperature=77,
)
###Output
WARNING: 'pressure_mode' was not specified , assumed as 'absolute'
WARNING: 'pressure_unit' was not specified , assumed as 'bar'
WARNING: 'material_basis' was not specified , assumed as 'mass'
WARNING: 'material_unit' was not specified , assumed as 'g'
WARNING: 'loading_basis' was not specified , assumed as 'molar'
WARNING: 'loading_unit' was not specified , assumed as 'mmol'
WARNING: 'temperature_unit' was not specified , assumed as 'K'
###Markdown
We can see that most units are set to defaults and we are being notified about it. Let's assume we don't want to change anything for now. To see a summary of the isotherm metadata, use the `print` function:
###Code
print(isotherm)
###Output
Material: Carbon X1
Adsorbate: nitrogen
Temperature: 77.0K
Units:
Uptake in: mmol/g
Pressure in: bar
###Markdown
Unless specified, the loading is read in *mmol/g* and the pressure is read in *bar*. We can specify (and convert) units to anything from *weight% vs Pa*, *mol/cm3 vs relative pressure* to *cm3/mol vs torr*. Read more about how pyGAPS handles units in this [section](../manual/units.rst) of the manual. The isotherm can also have other properties which are passed in at creation. Alternatively, the data can be passed in the form of a `pandas.DataFrame`. This allows for other complementary data, such as ambient pressure, isosteric enthalpy, or other simultaneous measurements corresponding to each point to be saved. The DataFrame should have at least two columns: the pressures at which each point was recorded, and the loadings for each point. The `loading_key` and `pressure_key` parameters specify which column in the DataFrame contain the loading and pressure, respectively. We will also take the opportunity to set our own units, and a few extra metadata properties.
###Code
import pandas as pd
# create (or import) a DataFrame
data = pd.DataFrame({
'pressure': [0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.35, 0.25, 0.15, 0.05],
'loading': [0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.4, 0.3, 0.15, 0.05],
'Enthalpy [kJ/mol]': [15, 14, 13.5, 13, 12, 11, 10, 10, 10, 10],
})
isotherm = pg.PointIsotherm(
isotherm_data=data,
pressure_key='pressure',
loading_key='loading',
material='Carbon X1',
adsorbate='N2',
temperature=20,
pressure_mode='relative',
pressure_unit=None, # we are in relative mode
loading_basis='mass',
loading_unit='g',
material_basis='volume',
material_unit='cm3',
temperature_unit='ยฐC',
material_batch='Batch 1',
iso_type='characterisation'
)
###Output
_____no_output_____
###Markdown
All passed metadata is stored in the `isotherm.properties` dictionary.
###Code
print(isotherm.properties)
###Output
{'material_batch': 'Batch 1', 'iso_type': 'characterisation'}
###Markdown
A summary and a plot can be generated by using the `print_info` function. Notice how the units are automatically taken into account in the plots.
###Code
isotherm.print_info(y2_range=[0, 20])
###Output
Material: Carbon X1
Adsorbate: nitrogen
Temperature: 350.15K
Units:
Uptake in: g/cm3
Relative pressure
Other properties:
material_batch: Batch 1
iso_type: characterisation
###Markdown
pyGAPS also comes with a variety of parsers that allow isotherms to be saved or loaded. Here we can use the JSON parser to get an isotherm previously saved on disk. For more info on parsing to and from various formats see the [manual](../manual/parsing.rst) and the associated [examples](../examples/parsing.ipynb).
###Code
import pygaps.parsing as pgp
isotherm = pgp.isotherm_from_json(r'data/carbon_x1_n2.json')
###Output
_____no_output_____
###Markdown
We can then inspect the isotherm data using various functions:
###Code
isotherm.data()
isotherm.pressure(branch="des")
###Output
_____no_output_____
###Markdown
To see just a plot of the isotherm, use the `plot` function:
###Code
isotherm.plot()
###Output
_____no_output_____
###Markdown
Isotherms can be plotted in different units/modes, or can be permanently converted. If conversion is desired, find out more in [this section](../manual/isotherm.rst#converting-isotherm-units-modes-and-basis). For example, using the previous isotherm:
###Code
# This just displays the isotherm in a different unit
isotherm.plot(pressure_unit='torr', loading_basis='percent')
# The isotherm is still internally in the same units
print(f"Isotherm is still in {isotherm.pressure_unit} and {isotherm.loading_unit}.")
# While the underlying units can be completely converted
isotherm.convert(pressure_mode='relative')
print(f"Isotherm is now permanently in {isotherm.pressure_mode} pressure.")
isotherm.plot()
###Output
Isotherm is now permanently in relative pressure.
###Markdown
Now that the PointIsotherm is created, we are ready to do some analysis of its properties. --- Isotherm analysis The framework has several isotherm analysis tools which are commonly used to characterise porous materials such as: - BET surface area - the t-plot method / alpha s method - mesoporous PSD (pore size distribution) calculations - microporous PSD calculations - DFT kernel fitting PSD methods - isosteric enthalpy of adsorption calculation - and much more... All methods work directly with generated Isotherms. For example, to perform a t-plot analysis and get the results in a dictionary use:
###Code
import pprint
import pygaps.characterisation as pgc
result_dict = pgc.t_plot(isotherm)
pprint.pprint(result_dict)
###Output
{'results': [{'adsorbed_volume': 0.44934712258371,
'area': 99.54915759758691,
'corr_coef': 0.9996658295304236,
'intercept': 0.012929909242021876,
'section': [84, 85, 86, 87, 88, 89, 90],
'slope': 0.0028645150000192613}],
't_curve': array([0.14381104, 0.14800322, 0.1525095 , 0.15712503, 0.1617626 ,
0.16612841, 0.17033488, 0.17458578, 0.17879119, 0.18306956,
0.18764848, 0.19283516, 0.19881473, 0.2058225 , 0.21395749,
0.2228623 , 0.23213447, 0.2411563 , 0.24949659, 0.25634201,
0.2635719 , 0.27002947, 0.27633547, 0.28229453, 0.28784398,
0.29315681, 0.29819119, 0.30301872, 0.30762151, 0.31210773,
0.31641915, 0.32068381, 0.32481658, 0.32886821, 0.33277497,
0.33761078, 0.34138501, 0.34505614, 0.34870159, 0.35228919,
0.35587619, 0.35917214, 0.36264598, 0.36618179, 0.36956969,
0.37295932, 0.37630582, 0.37957513, 0.38277985, 0.38608229,
0.3892784 , 0.3924393 , 0.39566979, 0.39876923, 0.40194987,
0.40514492, 0.40824114, 0.41138787, 0.41450379, 0.41759906,
0.42072338, 0.42387825, 0.42691471, 0.43000525, 0.44357547,
0.46150731, 0.47647445, 0.49286816, 0.50812087, 0.52341251,
0.53937129, 0.55659203, 0.57281485, 0.5897311 , 0.609567 ,
0.62665975, 0.64822743, 0.66907008, 0.69046915, 0.71246898,
0.73767931, 0.76126425, 0.79092372, 0.82052677, 0.85273827,
0.88701466, 0.92485731, 0.96660227, 1.01333614, 1.06514197,
1.1237298 , 1.19133932, 1.27032012, 1.36103511, 1.45572245,
1.55317729])}
###Markdown
If in an interactive environment, such as iPython or Jupyter, it is useful to see the details of the calculation directly. To do this, increase the verbosity of the method to display extra information, including graphs:
###Code
result_dict = pgc.area_BET(isotherm, verbose=True)
###Output
BET area: a = 1110 m2/g
The BET constant is: C = 372.8
Minimum pressure point is 0.0105 and maximum is 0.0979
Statistical monolayer at: n = 0.0114 mol/g
The slope of the BET fit: s = 87.7
The intercept of the BET fit: i = 0.236
###Markdown
Depending on the method, parameters can be specified to tweak the way the calculations are performed. For example, if a mesoporous size distribution is desired using the Dollimore-Heal method on the desorption branch of the isotherm, assuming the pores are cylindrical and that adsorbate thickness can be described by a Halsey-type thickness curve, the code will look like:
###Code
result_dict = pgc.psd_mesoporous(
isotherm,
psd_model='DH',
branch='des',
pore_geometry='cylinder',
thickness_model='Halsey',
verbose=True,
)
###Output
_____no_output_____
###Markdown
For more information on how to use each method, check the [manual](../manual/characterisation.rst) and the associated [examples](../examples/characterisation.rst). --- Isotherm fitting The framework comes with functionality to fit point isotherm data with common isotherm models such as Henry, Langmuir, Temkin, Virial etc. The model is contained in the ModelIsotherm class. The class is similar to the PointIsotherm class, and shares the parameters and metadata. However, instead of point data, it stores model coefficients for the model it's describing. To create a ModelIsotherm, the same parameters dictionary / `pandas.DataFrame` procedure can be used. But, assuming we've already created a PointIsotherm object, we can just pass it to the `pygaps.model_iso` function.
###Code
import pygaps.modelling as pgm
model_iso = pgm.model_iso(isotherm, model='DSLangmuir', verbose=True)
model_iso
###Output
Attempting to model using DSLangmuir.
Model DSLangmuir success, RMSE is 0.846
###Markdown
A minimisation procedure will then attempt to fit the model's parameters to the isotherm points. If successful, the ModelIsotherm is returned. If the user wants to screen several models at once, the `model` can be given as `guess`, to try to find the best fitting model. Below, we will attempt to fit several simple available models, and the one with the best RMSE will be returned. This method may take significant processing time, and there is no guarantee that the model is physically relevant.
###Code
model_iso = pgm.model_iso(isotherm, model='guess', verbose=True)
###Output
Attempting to model using Henry.
Model Henry success, RMSE is 7.42
Attempting to model using Langmuir.
Model Langmuir success, RMSE is 2.12
Attempting to model using DSLangmuir.
Model DSLangmuir success, RMSE is 0.846
Attempting to model using DR.
Model DR success, RMSE is 1.31
Attempting to model using Freundlich.
Model Freundlich success, RMSE is 0.738
Attempting to model using Quadratic.
Model Quadratic success, RMSE is 0.848
Attempting to model using BET.
Model BET success, RMSE is 1.09
Attempting to model using TemkinApprox.
Model TemkinApprox success, RMSE is 2.05
Attempting to model using Toth.
Model Toth success, RMSE is 0.752
Attempting to model using JensenSeaton.
Model JensenSeaton success, RMSE is 0.533
Best model fit is JensenSeaton.
###Markdown
More advanced settings can also be specified, such as the parameters for the fitting routine or the initial parameter guess. For in-depth examples and discussion check the [manual](../manual/modelling.rst) and the associated [examples](../examples/modelling.rst). To print the model parameters use the same print method as before.
###Code
# Prints isotherm parameters and model info
model_iso.print_info()
###Output
Material: Takeda 5A
Adsorbate: nitrogen
Temperature: 77.355K
Units:
Uptake in: mmol/g
Relative pressure
Other properties:
plot_fit: False
iso_type: Isotherme
lab: MADIREL
instrument: Triflex
material_batch: Test
activation_temperature: 200.0
user: PI
JensenSeaton isotherm model.
RMSE = 0.5325
Model parameters:
K = 5.516e+08
a = 16.73
b = 0.3403
c = 0.1807
Model applicable range:
Pressure range: 1.86e-07 - 0.946
Loading range: 0.51 - 21.6
###Markdown
We can calculate the loading at any pressure using the internal model by using the ``loading_at`` function.
###Code
# Returns the loading at 1 bar calculated with the model
model_iso.loading_at(1.0)
# Returns the loading for three points in the 0-1 bar range
pressure = [0.1,0.5,1]
model_iso.loading_at(pressure)
###Output
_____no_output_____
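###Markdown
If discrete points are needed again (for example for export or plotting against other data), a fitted model can be sampled back into a PointIsotherm. A minimal sketch, assuming the ``PointIsotherm.from_modelisotherm`` class method with a ``pressure_points`` argument, as in recent pyGAPS versions:
###Code
# Evaluate the fitted model at the pressures of the original isotherm,
# producing a new PointIsotherm from the model
new_point_iso = pg.PointIsotherm.from_modelisotherm(
    model_iso, pressure_points=isotherm.pressure()
)
###Output
_____no_output_____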
###Markdown
Plotting pyGAPS makes graphing both PointIsotherm and ModelIsotherm objects easy to facilitate visual observations, inclusion in publications and consistency. Plotting an isotherm is as simple as:
###Code
import pygaps.graphing as pgg
pgg.plot_iso(
[isotherm, model_iso], # Two isotherms
branch='ads', # Plot only the adsorption branch
lgd_keys=['material', 'adsorbate', 'type'], # Text in the legend, as taken from the isotherms
)
###Output
_____no_output_____
###Markdown
Here is a more involved plot, where we create the figure beforehand, and specify many more customisation options.
###Code
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
pgg.plot_iso(
[isotherm, model_iso],
ax=ax1,
branch='all',
pressure_mode="relative%",
x_range=(None, 80),
color=["r", "k"],
lgd_keys=['adsorbate', 'branch', 'type'],
)
model_iso.plot(
ax=ax2,
x_points=isotherm.pressure(),
loading_unit="mol",
y1_range=(None, 0.023),
marker="s",
color="k",
y1_line_style={
"linestyle": "--",
"markersize": 3
},
logx=True,
lgd_pos="upper left",
lgd_keys=['material', 'key', 'type'],
)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Recommenders: Quickstart In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user. Import TFRS First, install and import TFRS:
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
###Output
_____no_output_____
###Markdown
Read the data
###Code
# Ratings data.
ratings = tfds.load('movie_lens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movie_lens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
###Output
_____no_output_____
###Markdown
Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
###Code
user_ids_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
###Output
_____no_output_____
###Markdown
Define a model We can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:
###Code
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
###Output
_____no_output_____
###Markdown
Define the two models and the retrieval task.
###Code
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
###Output
_____no_output_____
###Markdown
Fit and evaluate it. Create the model, train it, and generate predictions:
###Code
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.ann.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
###Output
_____no_output_____
###Markdown
First steps with xmovie
###Code
import warnings
import matplotlib.pyplot as plt
import xarray as xr
from shapely.errors import ShapelyDeprecationWarning
from xmovie import Movie
warnings.filterwarnings(
action='ignore',
category=ShapelyDeprecationWarning, # in cartopy
)
warnings.filterwarnings(
action="ignore",
category=UserWarning,
message=r"No `(vmin|vmax)` provided. Data limits are calculated from input. Depending on the input this can take long. Pass `\1` to avoid this step"
)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Basics
###Code
# Load test dataset
ds = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0, 150))
# Create movie object
mov = Movie(ds.air)
###Output
_____no_output_____
###Markdown
Preview movie frames
###Code
# Preview 10th frame
mov.preview(10)
plt.savefig("movie_preview.png")
! rm -f frame*.png *.mp4 *.gif
###Output
rm: cannot remove 'frame*.png': No such file or directory
rm: cannot remove '*.mp4': No such file or directory
rm: cannot remove '*.gif': No such file or directory
###Markdown
Create movie files
###Code
mov.save('movie.mp4') # Use to save a high quality mp4 movie
mov.save('movie_gif.gif') # Use to save a gif
###Output
Movie created at movie.mp4
Movie created at movie_mp4.mp4
GIF created at movie_gif.gif
###Markdown
In many cases it is useful to have both a high quality movie and a lower resolution gif of the same animation. If that is desired, just deactivate the `remove_movie` option and give a filename with `.gif`. xmovie will first render a high quality movie and then convert it to a gif, without removing the movie afterwards. Optional frame-generation progress bars Display a progress bar with `progress=True` (requires tqdm). This can be helpful for long running animations.
###Code
mov.save('movie_combo.gif', remove_movie=False, progress=True)
###Output
_____no_output_____
###Markdown
Modify the framerate of the output with the keyword arguments `framerate` (for movies) and `gif_framerate` (for gifs).
###Code
mov.save('movie_fast.gif', remove_movie=False, progress=True, framerate=20, gif_framerate=20)
mov.save('movie_slow.gif', remove_movie=False, progress=True, framerate=5, gif_framerate=5)
###Output
_____no_output_____
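###Markdown
The rendered resolution can also be controlled when constructing the `Movie` object. A small sketch, assuming the `pixelwidth`, `pixelheight` and `dpi` keywords available in current xmovie versions:
###Code
# Render frames at roughly 1280x720 pixels; dpi sets the matplotlib figure scaling
mov_hd = Movie(ds.air, pixelwidth=1280, pixelheight=720, dpi=100)
mov_hd.save('movie_hd.mp4')
###Output
_____no_output_____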
###Markdown
  Frame dimension selection By default, the movie passes through the `'time'` dimension of the DataArray, but this can be easily changed with the `framedim` argument:
###Code
mov = Movie(ds.air, framedim='lon')
mov.save('lon_movie.gif')
###Output
Movie created at lon_movie.mp4
GIF created at lon_movie.gif
###Markdown
 Modifying plots Rotating globe (preset)
###Code
from xmovie.presets import rotating_globe
mov = Movie(ds.air, plotfunc=rotating_globe)
mov.save('movie_rotating.gif', progress=True)
###Output
_____no_output_____
###Markdown

###Code
mov = Movie(ds.air, plotfunc=rotating_globe, style='dark')
mov.save('movie_rotating_dark.gif', progress=True)
###Output
_____no_output_____
###Markdown
 Specifying xarray plot method to be used Change the plotting function with the parameter `plotmethod`.
###Code
mov = Movie(ds.air, rotating_globe, plotmethod='contour')
mov.save('movie_cont.gif')
mov = Movie(ds.air, rotating_globe, plotmethod='contourf')
mov.save('movie_contf.gif')
###Output
Movie created at movie_cont.mp4
GIF created at movie_cont.gif
Movie created at movie_contf.mp4
GIF created at movie_contf.gif
###Markdown
 Changing preset settings
###Code
import numpy as np
ds = xr.tutorial.open_dataset('rasm', decode_times=False).Tair # 36 times in total
# Interpolate time for smoother animation
ds['time'].values[:] = np.arange(len(ds['time']))
ds = ds.interp(time=np.linspace(0, 10, 60))
# `Movie` accepts keywords for the xarray plotting interface and provides a set of 'own' keywords like
# `coast`, `land` and `style` to facilitate the styling of plots
mov = Movie(ds, rotating_globe,
# Keyword arguments to the xarray plotting interface
cmap='RdYlBu_r',
x='xc',
y='yc',
shading='auto',
# Custom keyword arguments to `rotating_globe`
lat_start=45,
lat_rotations=0.05,
lon_rotations=0.2,
land=False,
coastline=True,
style='dark')
mov.save('movie_rasm.gif', progress=True)
###Output
_____no_output_____
###Markdown
 User-provided Besides the presets, xmovie is designed to animate any custom plot which can be wrapped in a function acting on a matplotlib figure. This can contain xarray plotting commands, 'pure' matplotlib or a combination of both. This can come in handy when you want to animate a complex static plot.
###Code
ds = xr.tutorial.open_dataset('rasm', decode_times=False).Tair
fig = plt.figure(figsize=[10,5])
tt = 30
station = dict(x=100, y=150)
ds_station = ds.sel(**station)
(ax1, ax2) = fig.subplots(ncols=2)
ds.isel(time=tt).plot(ax=ax1)
ax1.plot(station['x'], station['y'], marker='*', color='k' ,markersize=15)
ax1.text(station['x']+4, station['y']+4, 'Station', color='k' )
ax1.set_aspect(1)
ax1.set_facecolor('0.5')
ax1.set_title('');
# Time series
ds_station.isel(time=slice(0,tt+1)).plot.line(ax=ax2, x='time')
ax2.set_xlim(ds.time.min().data, ds.time.max().data)
ax2.set_ylim(ds_station.min(), ds_station.max())
ax2.set_title('Data at station');
fig.subplots_adjust(wspace=0.6)
fig.savefig("static.png")
###Output
_____no_output_____
###Markdown
All you need to do is wrap your plotting calls into a function `func(ds, fig, tt)`, where `ds` is the xarray dataset you pass to `Movie`, `fig` is a matplotlib figure handle and `tt` is the movie frame index.
###Code
def custom_plotfunc(ds, fig, tt, *args, **kwargs):
# Define station location for timeseries
station = dict(x=100, y=150)
ds_station = ds.sel(**station)
(ax1, ax2) = fig.subplots(ncols=2)
# Map axis
# Colorlimits need to be fixed or your video is going to cause seizures.
# This is the only modification from the code above!
ds.isel(time=tt).plot(ax=ax1, vmin=ds.min(), vmax=ds.max(), cmap='RdBu_r')
ax1.plot(station['x'], station['y'], marker='*', color='k' ,markersize=15)
ax1.text(station['x']+4, station['y']+4, 'Station', color='k' )
ax1.set_aspect(1)
ax1.set_facecolor('0.5')
ax1.set_title('');
# Time series
ds_station.isel(time=slice(0,tt+1)).plot.line(ax=ax2, x='time')
ax2.set_xlim(ds.time.min().data, ds.time.max().data)
ax2.set_ylim(ds_station.min(), ds_station.max())
ax2.set_title('Data at station');
fig.subplots_adjust(wspace=0.6)
return None, None
# ^ This is not strictly necessary, but otherwise a warning will be raised.
mov_custom = Movie(ds, custom_plotfunc)
mov_custom.preview(30)
mov_custom.save('movie_custom.gif', progress=True)
###Output
_____no_output_____
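###Markdown
Note that re-running a save cell can fail if the output file already exists. A hedged sketch, assuming `Movie.save` accepts an `overwrite_existing` flag as in recent xmovie versions:
###Code
# Overwrite the previously rendered gif instead of raising an error
# (`overwrite_existing` is assumed here, not shown in the original notebook)
mov_custom.save('movie_custom.gif', progress=True, overwrite_existing=True)
###Output
_____no_output_____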
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Recommenders: Quickstart In this tutorial, we build a simple matrix factorization model using the [MovieLens 100K dataset](https://grouplens.org/datasets/movielens/100k/) with TFRS. We can use this model to recommend movies for a given user. Import TFRS First, install and import TFRS:
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
###Output
_____no_output_____
###Markdown
Read the data
###Code
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
###Output
_____no_output_____
###Markdown
Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
###Code
user_ids_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.experimental.preprocessing.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
###Output
_____no_output_____
###Markdown
Define a model We can define a TFRS model by inheriting from `tfrs.Model` and implementing the `compute_loss` method:
###Code
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
###Output
_____no_output_____
###Markdown
Define the two models and the retrieval task.
###Code
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocabulary_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocabulary_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
###Output
_____no_output_____
###Markdown
Fit and evaluate it. Create the model, train it, and generate predictions:
###Code
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index(movies.batch(100).map(model.movie_model), movies)
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
###Output
_____no_output_____
###Markdown
Quickstart Creating an isotherm First, we need to import the package.
###Code
import pygaps as pg
###Output
_____no_output_____
###Markdown
The backbone of the framework is the PointIsotherm class. This class stores the isotherm data alongside isotherm properties such as the material, adsorbate and temperature, as well as providing easy interaction with the framework calculations. There are several ways to create a PointIsotherm object: - directly from arrays - from a pandas.DataFrame - parsing json, csv files, or excel files - loading from an sqlite database. See the [isotherm creation](../manual/isotherm.rst) part of the documentation for a more in-depth explanation. For the simplest method, the data can be passed in as arrays of *pressure* and *loading*. There are four other required parameters: the material name, the material batch or ID, the adsorbate used and the temperature (in K) at which the data was recorded.
###Code
isotherm = pg.PointIsotherm(
pressure=[0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.35, 0.25, 0.15, 0.05],
loading=[0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.4, 0.3, 0.15, 0.05],
material='Carbon X1',
adsorbate='N2',
temperature=77,
)
###Output
WARNING: 'pressure_mode' was not specified , assumed as 'absolute'
WARNING: 'pressure_unit' was not specified , assumed as 'bar'
WARNING: 'material_basis' was not specified , assumed as 'mass'
WARNING: 'material_unit' was not specified , assumed as 'g'
WARNING: 'loading_basis' was not specified , assumed as 'molar'
WARNING: 'loading_unit' was not specified , assumed as 'mmol'
###Markdown
To see a summary of the isotherm, use the `print` function:
###Code
print(isotherm)
###Output
Material: Carbon X1
Adsorbate: nitrogen
Temperature: 77.0K
Units:
Uptake in: mmol/g
Pressure in: bar
Other properties:
###Markdown
Unless specified, the loading is read in *mmol/g* and the pressure is read in *bar*. Parameters specified can modify units to anything from *weight% vs Pa*, *mol/cm3 vs relative pressure* to *cm3/mol vs torr*. Read more about how pyGAPS handles units in this [section](../manual/units.rst) of the manual. The isotherm can also have other properties which are passed in at creation. Alternatively, the data can be passed in the form of a pandas.DataFrame. This allows for other complementary data, such as isosteric enthalpy, XRD peak intensity, or other simultaneous measurements corresponding to each point to be saved. The DataFrame should have at least two columns: the pressures at which each point was recorded, and the loadings for each point. The `loading_key` and `pressure_key` parameters specify which column in the DataFrame contain the loading and pressure, respectively. The `other_keys` parameter should be a list of other columns to be saved.
###Code
import pandas as pd
data = pd.DataFrame({
'pressure': [0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.35, 0.25, 0.15, 0.05],
'loading': [0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.4, 0.3, 0.15, 0.05],
'isosteric_enthalpy [kJ/mol]': [15, 14, 13.5, 13, 12, 11, 10, 10, 10, 10],
'unimportant_data': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] # This will not be saved!
})
isotherm = pg.PointIsotherm(
isotherm_data=data,
pressure_key='pressure',
loading_key='loading',
other_keys=['isosteric_enthalpy [kJ/mol]'],
material='Carbon X1',
adsorbate='N2',
temperature=77,
pressure_unit='bar',
pressure_mode='absolute',
loading_unit='mmol',
loading_basis='molar',
material_unit='g',
material_basis='mass',
material_batch='Batch 1',
iso_type='characterisation'
)
###Output
_____no_output_____
###Markdown
A summary and a plot can be generated by using the `print_info` function.
###Code
isotherm.print_info(y2_range=[0, 20])
###Output
Material: Carbon X1
Adsorbate: nitrogen
Temperature: 77.0K
Units:
Uptake in: mmol/g
Pressure in: bar
Other properties:
material_batch: Batch 1
iso_type: characterisation
###Markdown
pyGAPS also comes with a variety of parsers. Here we can use the JSON parser to get an isotherm previously saved on disk. For more info on parsing to and from various formats see the [manual](../manual/parsing.rst) and the associated [examples](../examples/parsing.ipynb).
###Code
isotherm = pg.isotherm_from_json(r'data/carbon_x1_n2.json')
###Output
_____no_output_____
###Markdown
To see just a plot of the isotherm, use the `plot` function:
###Code
isotherm.plot()
###Output
_____no_output_____
###Markdown
Isotherms can be plotted in different units/modes, or can be permanently converted. If conversion is desired, find out more in [this section](../manual/isotherm.rst#converting-isotherm-units-modes-and-basis). For example, using the previous isotherm.
###Code
# This just displays the isotherm in a different unit
isotherm.plot(pressure_unit='torr', loading_basis='percent')
print(f"Isotherm is still in {isotherm.pressure_unit} and {isotherm.loading_unit}.")
# While the underlying units can be completely converted
isotherm.convert(pressure_mode='relative')
print(f"Isotherm is now permanently in {isotherm.pressure_mode} pressure.")
###Output
Isotherm is now permanently in relative pressure.
###Markdown
Now that the PointIsotherm is created, we are ready to do some analysis. --- Isotherm analysis The framework has several isotherm analysis tools which are commonly used to characterise porous materials such as: - BET surface area - the t-plot method / alpha s method - mesoporous PSD (pore size distribution) calculations - microporous PSD calculations - DFT kernel fitting PSD methods - isosteric enthalpy of adsorption calculation - etc. All methods work directly with generated Isotherms. For example, to perform a t-plot analysis and get the results in a dictionary use:
###Code
result_dict = pg.t_plot(isotherm)
import pprint
pprint.pprint(result_dict)
###Output
{'results': [{'adsorbed_volume': 0.4493471225837101,
'area': 99.54915759758687,
'corr_coef': 0.9996658295304233,
'intercept': 0.012929909242021878,
'section': [84, 85, 86, 87, 88, 89, 90],
'slope': 0.0028645150000192604}],
't_curve': array([0.14381104, 0.14800322, 0.1525095 , 0.15712503, 0.1617626 ,
0.16612841, 0.17033488, 0.17458578, 0.17879119, 0.18306956,
0.18764848, 0.19283516, 0.19881473, 0.2058225 , 0.21395749,
0.2228623 , 0.23213447, 0.2411563 , 0.24949659, 0.25634201,
0.2635719 , 0.27002947, 0.27633547, 0.28229453, 0.28784398,
0.29315681, 0.29819119, 0.30301872, 0.30762151, 0.31210773,
0.31641915, 0.32068381, 0.32481658, 0.32886821, 0.33277497,
0.33761078, 0.34138501, 0.34505614, 0.34870159, 0.35228919,
0.35587619, 0.35917214, 0.36264598, 0.36618179, 0.36956969,
0.37295932, 0.37630582, 0.37957513, 0.38277985, 0.38608229,
0.3892784 , 0.3924393 , 0.39566979, 0.39876923, 0.40194987,
0.40514492, 0.40824114, 0.41138787, 0.41450379, 0.41759906,
0.42072338, 0.42387825, 0.42691471, 0.43000525, 0.44357547,
0.46150731, 0.47647445, 0.49286816, 0.50812087, 0.52341251,
0.53937129, 0.55659203, 0.57281485, 0.5897311 , 0.609567 ,
0.62665975, 0.64822743, 0.66907008, 0.69046915, 0.71246898,
0.73767931, 0.76126425, 0.79092372, 0.82052677, 0.85273827,
0.88701466, 0.92485731, 0.96660227, 1.01333614, 1.06514197,
1.1237298 , 1.19133932, 1.27032012, 1.36103511, 1.45572245,
1.55317729])}
###Markdown
If in an interactive environment, such as iPython or Jupyter, it is useful to see the details of the calculation directly. To do this, increase the verbosity of the method to display extra information, including graphs:
###Code
result_dict = pg.area_BET(isotherm, verbose=True)
###Output
BET surface area: a = 1.11e+03 m2/g
Minimum pressure point is 0.010 and maximum is 0.093
The slope of the BET fit: s = 8.76e+01
The intercept of the BET fit: i = 2.37e-01
The BET constant is: C = 369.9
Amount for a monolayer: n = 1.14e-02 mol/g
###Markdown
Depending on the method, different parameters can be passed to tweak the way the calculations are performed. For example, if a mesoporous size distribution is desired using the Dollimore-Heal method on the desorption branch of the isotherm, assuming the pores are cylindrical and that adsorbate thickness can be described by a Halsey-type thickness curve, the code will look like:
###Code
result_dict = pg.psd_mesoporous(
isotherm,
psd_model='DH',
branch='des',
pore_geometry='cylinder',
thickness_model='Halsey',
verbose=True,
)
###Output
_____no_output_____
###Markdown
For more information on how to use each method, check the [manual](../manual/characterisation.rst) and the associated [examples](../examples/characterisation.rst). --- Isotherm modelling The framework comes with functionality to fit point isotherm data with common isotherm models such as Henry, Langmuir, Temkin, Virial etc. The modelling is done through the ModelIsotherm class. The class is similar to the PointIsotherm class, and shares the same ability to store parameters. However, instead of data, it stores model coefficients for the model it's describing. To create a ModelIsotherm, the same parameters dictionary / pandas DataFrame procedure can be used. But, assuming we've already created a PointIsotherm object, we can just pass it to the `pygaps.model_iso` function.
###Code
model_iso = pg.model_iso(isotherm, model='DSLangmuir', verbose=True)
###Output
Attempting to model using DSLangmuir.
Model DSLangmuir success, RMSE is 0.846
###Markdown
A minimisation procedure will then attempt to fit the model's parameters to the isotherm points. If successful, the ModelIsotherm is returned. If the user wants to screen several models at once, the class method can also be passed a parameter which allows the ModelIsotherm to select the best fitting model. Below, we will attempt to fit several simple available models, and the one with the best RMSE will be returned. Depending on the models requested, this method may take significant processing time.
###Code
model_iso = pg.model_iso(isotherm, model='guess', verbose=True)
###Output
Attempting to model using Henry.
Model Henry success, RMSE is 7.419
Attempting to model using Langmuir.
Model Langmuir success, RMSE is 2.120
Attempting to model using DSLangmuir.
Model DSLangmuir success, RMSE is 0.846
Attempting to model using DR.
Model DR success, RMSE is 1.312
Attempting to model using Freundlich.
Model Freundlich success, RMSE is 0.738
Attempting to model using Quadratic.
Model Quadratic success, RMSE is 0.848
Attempting to model using BET.
Model BET success, RMSE is 1.086
Attempting to model using TemkinApprox.
Model TemkinApprox success, RMSE is 2.046
Attempting to model using Toth.
Model Toth success, RMSE is 0.755
Attempting to model using JensenSeaton.
Model JensenSeaton success, RMSE is 0.533
Best model fit is JensenSeaton.
###Markdown
More advanced settings can also be specified, such as the optimisation model to be used in the optimisation routine or the initial parameter guess. For in-depth examples and discussion check the [manual](../manual/modelling.rst) and the associated [examples](../examples/modelling.rst). To print the model parameters use the same print method as before.
###Code
# Prints isotherm parameters and model info
model_iso.print_info()
###Output
Material: Takeda 5A
Adsorbate: nitrogen
Temperature: 77.355K
Units:
Uptake in: mmol/g
Relative pressure
Other properties:
branch: ads
plot_fit: False
iso_type: Isotherme
lab: MADIREL
machine: Triflex
material_batch: Test
t_act: 200.0
user: PI
JensenSeaton isotherm model.
RMSE = 0.5325
Model parameters:
K = 551556682.14
a = 16.73
b = 0.34
c = 0.18
Model applicable range:
Pressure range: 0.00 - 0.95
Loading range: 0.51 - 21.59
###Markdown
We can calculate the loading at any pressure using the internal model by using the ``loading_at`` function.
###Code
# Returns the loading at 1 bar calculated with the model
model_iso.loading_at(1.0)
# Returns the loading at three pressure points in the 0-1 bar range, calculated with the model
pressure = [0.1, 0.5, 1]
model_iso.loading_at(pressure)
###Output
_____no_output_____
###Markdown
Plotting pyGAPS makes graphing both PointIsotherm and ModelIsotherm objects easy to facilitate visual observations, inclusion in publications and consistency. Plotting an isotherm is as simple as:
###Code
pg.plot_iso([isotherm, model_iso], branch='ads')
###Output
_____no_output_____ |
notebooks/nlp/bert_sequence_classification.ipynb | ###Markdown
Environment setup 1. Download the libraries. 2. Download the Naver movie review sentiment dataset (check via the file browser after refreshing). * To use your own dataset, save a file whose text and category go into content and label columns respectively (see the example below) as dataset.xlsx, then overwrite the existing file. * The order of the label and content columns in the Excel file does not matter, but it is best to enter labels as numbers starting from 0. For example, if there are 4 categories, mark the labels as 0, 1, 2, 3. ```label content 1 The movie is fun. 1 I recommend this movie. 0 It was a boring movie.... ```
###Code
!pip3 install -q transformers
!git clone https://github.com/kiyoungkim1/ReadyToUseAI
from ReadyToUseAI.src.nlp import make_sample_dataset, bert_sequence_classification
make_sample_dataset.nsmc(mode='test', text_only=False) # mode: which datasets? 'train' or 'test'
###Output
_____no_output_____
###Markdown
[Training] * The attached sample takes about 40 min (Tesla T4 GPU). * Only sentences longer than min_sentence_length are used. * MAX_LEN is the token length the model processes; sentences whose total length is longer than about twice MAX_LEN have their ends truncated (for example, with MAX_LEN = 128, the tail of sentences longer than roughly 256 characters is ignored). * batch_size is how many samples are processed at once; for a fixed amount of memory, reducing MAX_LEN allows a larger batch_size, and increasing MAX_LEN requires a smaller batch_size. * epochs is how many times the dataset is iterated over during training, and dataset_split is the fraction (%) of the data used as the validation set.
###Code
CLS = bert_sequence_classification.Classification(model_name='kykim/bert-kor-base', min_sentence_length=10, MAX_LEN=128, batch_size=32, use_bert_tokenizer=True)
CLS.dataset(data_path='dataset.xlsx')
CLS.load_model(mode='train')
CLS.train(epochs=3, dataset_split=0.1)
###Output
_____no_output_____
###Markdown
[Inference] * Put the sentences you want to classify into `sentences` in the format below, and the corresponding category is returned. * saved_model_path is the name of the folder where the trained model is saved.
###Code
sentences = ['영화 재밌어요', '영화 재미없어요', '그냥 시간때우기용', '완전 추천작']  # Korean samples: 'the movie is fun', 'the movie is boring', 'just for killing time', 'totally recommended'
saved_model_path='model/saved/3'
CLS = bert_sequence_classification.Classification(model_name='kykim/bert-kor-base', min_sentence_length=10, MAX_LEN=128, batch_size=64, use_bert_tokenizer=True)
CLS.load_model(mode='inference', saved_model_path=saved_model_path)
logit = CLS.inference(sentences=sentences)
print(logit)  # For the Naver movie review data, 0 is the negative category and 1 the positive one
###Output
_____no_output_____ |
Coursera/Applied of Machine Learning/atividades_week1/Assignment+1.ipynb | ###Markdown
---_You are currently looking at **version 1.3** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._--- Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print(cancer.DESCR) # Print the data set description
###Output
Breast Cancer Wisconsin (Diagnostic) Database
=============================================
Notes
-----
Data Set Characteristics:
:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
- class:
- WDBC-Malignant
- WDBC-Benign
:Summary Statistics:
===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======
:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995
This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2
Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.
Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.
The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
References
----------
- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
163-171.
###Markdown
The object returned by `load_breast_cancer()` is a scikit-learn Bunch object, which is similar to a dictionary.
###Code
cancer.keys()
###Output
_____no_output_____
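###Markdown
Like a dictionary, a Bunch supports key lookup, but the same entries can also be read as attributes:
###Code
# Both access styles return the same underlying array
assert (cancer.data == cancer['data']).all()
###Output
_____no_output_____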
###Markdown
Question 0 (Example)How many features does the breast cancer dataset have?*This function should return an integer.*
###Code
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the number of features of the breast cancer dataset, which is an integer.
# The assignment question description will tell you the general format the autograder is expecting
return len(cancer['feature_names'])
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset `cancer` to a DataFrame. *This function should return a `(569, 31)` DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target']*and index = * RangeIndex(start=0, stop=569, step=1)
###Code
def answer_one():
df=pd.DataFrame(cancer['data'], columns=cancer['feature_names'])
df.insert(len(cancer['feature_names']),'target' , cancer['target'], allow_duplicates=False)
# Your code here
return df;
answer_one()
###Output
_____no_output_____
###Markdown
Question 2What is the class distribution? (i.e. how many instances of `malignant` (encoded 0) and how many `benign` (encoded 1)?)*This function should return a Series named `target` of length 2 with integer values and index =* `['malignant', 'benign']`
###Code
def answer_two():
cancerdf = answer_one()
ben=cancerdf['target'].sum() #return the benign(encoded 1) sum
mal=len(cancerdf['target'])-ben #malignant sum
target={'malignant': mal, 'benign': ben}
# Your code here
return pd.Series(data=target, index=['malignant', 'benign']) # Return your answer
answer_two()
###Output
_____no_output_____
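###Markdown
As a quick cross-check (not part of the graded answer), pandas can compute the same distribution directly:
###Code
# value_counts on the target column: 1 = benign, 0 = malignant
answer_one()['target'].value_counts()
###Output
_____no_output_____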
###Markdown
Question 3Split the DataFrame into `X` (the data) and `y` (the labels).*This function should return a tuple of length 2:* `(X, y)`*, where* * `X`*, a pandas DataFrame, has shape* `(569, 30)`* `y`*, a pandas Series, has shape* `(569,)`.
###Code
def answer_three():
cancerdf = answer_one()
X=cancerdf.drop('target',axis=1)
y=cancerdf['target']
return X, y
X, y=answer_three()
X.shape, y.shape
###Output
_____no_output_____
###Markdown
Question 4Using `train_test_split`, split `X` and `y` into training and test sets `(X_train, X_test, y_train, and y_test)`.**Set the random number generator state to 0 using `random_state=0` to make sure your results match the autograder!***This function should return a tuple of length 4:* `(X_train, X_test, y_train, y_test)`*, where* * `X_train` *has shape* `(426, 30)`* `X_test` *has shape* `(143, 30)`* `y_train` *has shape* `(426,)`* `y_test` *has shape* `(143,)`
###Code
from sklearn.model_selection import train_test_split
def answer_four():
X, y = answer_three()
# Your code here
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
return X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = answer_four()
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Question 5Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with `X_train`, `y_train` and using one nearest neighbor (`n_neighbors = 1`).*This function should return a * `sklearn.neighbors.classification.KNeighborsClassifier`.
###Code
from sklearn.neighbors import KNeighborsClassifier
def answer_five():
X_train, X_test, y_train, y_test = answer_four()
knn = KNeighborsClassifier(n_neighbors = 1)
knn.fit(X_train, y_train)
# Your code here
return knn# Return your answer
knn=answer_five()
knn
###Output
_____no_output_____
###Markdown
Question 6Using your knn classifier, predict the class label using the mean value for each feature. Hint: You can use `cancerdf.mean()[:-1].values.reshape(1, -1)` which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the predict method of KNeighborsClassifier).*This function should return a numpy array either `array([ 0.])` or `array([ 1.])`*
###Code
def answer_six():
    cancerdf = answer_one()
    means = cancerdf.mean()[:-1].values.reshape(1, -1)
    knn = answer_five()  # use the trained classifier rather than relying on a global variable
    return knn.predict(means)  # Return your answer
predicts=answer_six()
predicts[0] #is benign (1)
###Output
_____no_output_____
###Markdown
Question 7Using your knn classifier, predict the class labels for the test set `X_test`.*This function should return a numpy array with shape `(143,)` and values either `0.0` or `1.0`.*
###Code
def answer_seven():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
return knn.predict(X_test)# Return your answer
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8Find the score (mean accuracy) of your knn classifier using `X_test` and `y_test`.*This function should return a float between 0 and 1*
###Code
def answer_eight():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
# Your code here
return knn.score(X_test, y_test)# Return your answer
answer_eight()
###Output
_____no_output_____
###Markdown
Optional plotTry using the plotting function below to visualize the different prediction scores between training and test sets, as well as malignant and benign cells.
###Code
def accuracy_plot():
import matplotlib.pyplot as plt
%matplotlib notebook
X_train, X_test, y_train, y_test = answer_four()
# Find the training and testing accuracies by target value (i.e. malignant, benign)
mal_train_X = X_train[y_train==0]
mal_train_y = y_train[y_train==0]
ben_train_X = X_train[y_train==1]
ben_train_y = y_train[y_train==1]
mal_test_X = X_test[y_test==0]
mal_test_y = y_test[y_test==0]
ben_test_X = X_test[y_test==1]
ben_test_y = y_test[y_test==1]
knn = answer_five()
scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y),
knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)]
plt.figure()
# Plot the scores as a bar chart
bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868'])
# directly label the score onto the bars
for bar in bars:
height = bar.get_height()
plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2),
ha='center', color='w', fontsize=11)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8);
plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8)
###Output
_____no_output_____
###Markdown
Uncomment the plotting function to see the visualization. **Comment out** the plotting function when submitting your notebook for grading.
###Code
accuracy_plot()
###Output
_____no_output_____ |
Python/AbsoluteAndOtherAlgorithms/3Activity/SPEC_50.ipynb | ###Markdown
1. Import libraries
###Code
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import scipy.io
from keras.utils import to_categorical
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import scipy.io
from skfeature.function.similarity_based import SPEC
import time
import pandas as pd
#--------------------------------------------------------------------------------------------------------------------------------
def ETree(p_train_feature,p_train_label,p_test_feature,p_test_label,p_seed):
clf = ExtraTreesClassifier(n_estimators=50, random_state=p_seed)
# Training
clf.fit(p_train_feature, p_train_label)
# Training accuracy
    print('Training accuracy:', clf.score(p_train_feature, np.array(p_train_label)))
    print('Training accuracy:', accuracy_score(np.array(p_train_label), clf.predict(p_train_feature)))
    #print('Training accuracy:', np.sum(clf.predict(p_train_feature)==np.array(p_train_label))/p_train_label.shape[0])
    # Testing accuracy
    print('Testing accuracy:', clf.score(p_test_feature, np.array(p_test_label)))
    print('Testing accuracy:', accuracy_score(np.array(p_test_label), clf.predict(p_test_feature)))
    #print('Testing accuracy:', np.sum(clf.predict(p_test_feature)==np.array(p_test_label))/p_test_label.shape[0])
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data,p_path):
dataframe = pd.DataFrame(p_data)
dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')
###Output
_____no_output_____
###Markdown
2. Loading data
###Code
train_data_arr=np.array(pd.read_csv('./Dataset/Activity/final_X_train.txt',header=None))
test_data_arr=np.array(pd.read_csv('./Dataset/Activity/final_X_test.txt',header=None))
train_label_arr=(np.array(pd.read_csv('./Dataset/Activity/final_y_train.txt',header=None))-1)
test_label_arr=(np.array(pd.read_csv('./Dataset/Activity/final_y_test.txt',header=None))-1)
data_arr=np.r_[train_data_arr,test_data_arr]
label_arr=np.r_[train_label_arr,test_label_arr]
label_arr_onehot=label_arr
print(data_arr.shape)
print(label_arr_onehot.shape)
data_arr=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr)
C_train_x,C_test_x,C_train_y,C_test_y= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed)
x_train,x_validate,y_train_onehot,y_validate_onehot= train_test_split(C_train_x,C_train_y,test_size=0.1,random_state=seed)
x_test=C_test_x
y_test_onehot=C_test_y
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_validate: ' + str(x_validate.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_validate: ' + str(y_validate_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
key_feture_number=50
###Output
_____no_output_____
###Markdown
3. Classifying 1 Extra Trees
###Code
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
Shape of train_feature: (4595, 561)
Shape of train_label: (4595, 1)
Shape of test_feature: (1149, 561)
Shape of test_label: (1149, 1)
###Markdown
4. Model
###Code
start = time.perf_counter()  # time.clock() was removed in Python 3.8
# construct affinity matrix
kwargs = {'style': 0}  # 'style' picks the SPEC ranking variant used by skfeature
# obtain the scores of features, and sort the feature scores in an ascending order according to the feature scores
train_score = SPEC.spec(train_feature, **kwargs)
train_idx = SPEC.feature_ranking(train_score, **kwargs)
# obtain the dataset on the selected features
train_selected_x = train_feature[:, train_idx[0:key_feture_number]]
print("train_selected_x",train_selected_x.shape)
# obtain the scores of features, and sort the feature scores in an ascending order according to the feature scores
# Note: ranking the test set separately can select different feature indices than
# the training set; reusing train_idx on the test data is the more usual practice.
test_score = SPEC.spec(test_feature, **kwargs)
test_idx = SPEC.feature_ranking(test_score, **kwargs)
# obtain the dataset on the selected features
test_selected_x = test_feature[:, test_idx[0:key_feture_number]]
print("test_selected_x",test_selected_x.shape)
time_cost = time.perf_counter() - start
write_to_csv(np.array([time_cost]),"./log/SPEC_time"+str(key_feture_number)+".csv")
C_train_selected_x=train_selected_x
C_test_selected_x=test_selected_x
C_train_selected_y=C_train_y
C_test_selected_y=C_test_y
print('Shape of C_train_selected_x: ' + str(C_train_selected_x.shape))
print('Shape of C_test_selected_x: ' + str(C_test_selected_x.shape))
print('Shape of C_train_selected_y: ' + str(C_train_selected_y.shape))
print('Shape of C_test_selected_y: ' + str(C_test_selected_y.shape))
###Output
Shape of C_train_selected_x: (4595, 50)
Shape of C_test_selected_x: (1149, 50)
Shape of C_train_selected_y: (4595, 1)
Shape of C_test_selected_y: (1149, 1)
###Markdown
5. Classifying 2 Extra Trees
###Code
train_feature=C_train_selected_x
train_label=C_train_y
test_feature=C_test_selected_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
Shape of train_feature: (4595, 50)
Shape of train_label: (4595, 1)
Shape of test_feature: (1149, 50)
Shape of test_label: (1149, 1)
###Markdown
6. Reconstruction loss How well can the 50 selected features linearly reconstruct the original 561 features? A lower mean squared error means the selection retains more of the original information.
###Code
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
return MSELR
train_feature_tuple=(C_train_selected_x,C_train_x)
test_feature_tuple=(C_test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
###Output
0.12698834866363995
|