hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1-191k, nullable) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24, nullable) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24, nullable) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1-67k, nullable) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24, nullable) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24, nullable) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1-105k, nullable) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24, nullable) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24, nullable) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
e728c0c240027e6b4b3c806bf5fb8fe2e8dea0fd | 1,272 | ipynb | Jupyter Notebook | examples/gallery/demos/bokeh/autompg_histogram.ipynb | jonmmease/holoviews | 27407e1a5d8020c39c135fa3f8c4fdeb11fea5c0 | [
"BSD-3-Clause"
] | 1 | 2019-01-02T20:20:09.000Z | 2019-01-02T20:20:09.000Z | examples/gallery/demos/bokeh/autompg_histogram.ipynb | jonmmease/holoviews | 27407e1a5d8020c39c135fa3f8c4fdeb11fea5c0 | [
"BSD-3-Clause"
] | null | null | null | examples/gallery/demos/bokeh/autompg_histogram.ipynb | jonmmease/holoviews | 27407e1a5d8020c39c135fa3f8c4fdeb11fea5c0 | [
"BSD-3-Clause"
] | null | null | null | 19.272727 | 105 | 0.543239 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e728c45a8a9ad99f1dca274a59f2399fe8661482 | 28,616 | ipynb | Jupyter Notebook | 4 jigsaw/how-to-preprocessing-for-glove-part2-usage.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | 2 | 2020-01-25T08:31:14.000Z | 2022-03-23T18:24:03.000Z | 4 jigsaw/how-to-preprocessing-for-glove-part2-usage.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | null | null | null | 4 jigsaw/how-to-preprocessing-for-glove-part2-usage.ipynb | MLVPRASAD/KaggleProjects | 379e062cf58d83ff57a456552bb956df68381fdd | [
"MIT"
] | null | null | null | 35.725343 | 1,731 | 0.572756 | [
[
[
"# Preface",
"_____no_output_____"
],
[
"In this notebook I continue the work of https://www.kaggle.com/christofhenkel/how-to-preprocessing-for-glove-part1-eda unfortunatly I had to split the kernel into two due to memory issues.\n\nSince I am rather lazy, I forked Benjamins https://www.kaggle.com/bminixhofer/speed-up-your-rnn-with-sequence-bucketing to have a solid starting point. In the following I want to share 3 tricks that not only speed up the preprocessing a bit, but also improve a models accuracy.\n\nThe 3 main contributions of the two kernels kernel are the following:\n\n- loading embedding from pickles \n- aimed preprocessing for GloVe and fasttext vectors (the main content of this notebook)\n- fixing some unknown words by trying their lower/ uppercase versions\n\nIn this kernel I copy list of in-vocabulary and oov symbols and run a publlic kernel as a benchmark\n\nWhat I will not cover are word-specific preprocessing steps like handling contractions, or mispellings (again, since I am rather lazy and do not want to hardcode dictionaries).\n\nThe neural network architecture is taken from the best scoring public kernel at the time of writing: [Simple LSTM with Identity Parameters - Fast AI](https://www.kaggle.com/kunwar31/simple-lstm-with-identity-parameters-fastai).",
"_____no_output_____"
]
],
[
[
"# Put these at the top of every notebook, to get automatic reloading and inline plotting\n%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport fastai\nfrom fastai.train import Learner\nfrom fastai.train import DataBunch\nfrom fastai.callbacks import *\nfrom fastai.basic_data import DatasetType\nimport fastprogress\nfrom fastprogress import force_console_behavior\nimport numpy as np\nfrom pprint import pprint\nimport pandas as pd\nimport os\nimport time\n\nimport gc\nimport random\nfrom tqdm._tqdm_notebook import tqdm_notebook as tqdm\nfrom keras.preprocessing import text, sequence\nimport torch\nfrom torch import nn\nfrom torch.utils import data\nfrom torch.nn import functional as F\n",
"Using TensorFlow backend.\n"
],
[
"tqdm.pandas()",
"_____no_output_____"
],
[
"# disable progress bars when submitting\ndef is_interactive():\n return 'SHLVL' not in os.environ\n\nif not is_interactive():\n def nop(it, *a, **k):\n return it\n\n tqdm = nop\n\n fastprogress.fastprogress.NO_BAR = True\n master_bar, progress_bar = force_console_behavior()\n fastai.basic_train.master_bar, fastai.basic_train.progress_bar = master_bar, progress_bar",
"_____no_output_____"
],
[
"def seed_everything(seed=123):\n random.seed(seed)\n os.environ['PYTHONHASHSEED'] = str(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed(seed)\n torch.backends.cudnn.deterministic = True\nseed_everything()",
"_____no_output_____"
]
],
[
[
"Here, compared to most other public kernels I replace the pretrained embedding files with their pickle corresponds. Loading a pickled version extremly improves timing ;)",
"_____no_output_____"
]
],
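[
[
"# Sketch (my addition, not from the original kernel) of how such an embedding pickle\n# can be created from the raw .txt/.vec file in the first place. The input path below\n# is an assumption; point it at wherever the raw GloVe/fastText file lives.\nimport pickle\n\ndef pickle_embeddings(txt_path, pkl_path):\n    embeddings = {}\n    with open(txt_path, encoding='utf-8') as f:\n        for line in f:\n            word, *vec = line.rstrip().rsplit(' ', 300)\n            if len(vec) == 300:  # skip the fastText header line and malformed rows\n                embeddings[word] = np.asarray(vec, dtype='float32')\n    with open(pkl_path, 'wb') as f:\n        pickle.dump(embeddings, f, protocol=pickle.HIGHEST_PROTOCOL)\n\n# pickle_embeddings('glove.840B.300d.txt', 'glove.840B.300d.pkl')",
"_____no_output_____"
]
],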
[
[
"CRAWL_EMBEDDING_PATH = '../input/pickled-crawl300d2m-for-kernel-competitions/crawl-300d-2M.pkl'\nGLOVE_EMBEDDING_PATH = '../input/pickled-glove840b300d-for-10sec-loading/glove.840B.300d.pkl'",
"_____no_output_____"
]
],
[
[
"Of course we also need to adjust the load_embeddings function, to now handle the pickled dict.",
"_____no_output_____"
]
],
[
[
"NUM_MODELS = 2\nLSTM_UNITS = 128\nDENSE_HIDDEN_UNITS = 4 * LSTM_UNITS\nMAX_LEN = 220\n\ndef get_coefs(word, *arr):\n return word, np.asarray(arr, dtype='float32')\n\n\ndef load_embeddings(path):\n with open(path,'rb') as f:\n emb_arr = pickle.load(f)\n return emb_arr\n\n\n",
"_____no_output_____"
]
],
[
[
"The next function is really important. Although we put a lot of effort in making the preprocessing right there are stil some out of vocabulary words we could easily fix. One example I implement here is to try a \"lower/upper case version of a\" word if an embedding is not found, which sometimes gives us an embedding. Sorry for the bad coding style in the loop",
"_____no_output_____"
]
],
[
[
"def build_matrix(word_index, path):\n embedding_index = load_embeddings(path)\n embedding_matrix = np.zeros((max_features + 1, 300))\n unknown_words = []\n \n for word, i in word_index.items():\n if i <= max_features:\n try:\n embedding_matrix[i] = embedding_index[word]\n except KeyError:\n try:\n embedding_matrix[i] = embedding_index[word.lower()]\n except KeyError:\n try:\n embedding_matrix[i] = embedding_index[word.title()]\n except KeyError:\n unknown_words.append(word)\n return embedding_matrix, unknown_words",
"_____no_output_____"
],
[
"\n\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\n\nclass SpatialDropout(nn.Dropout2d):\n def forward(self, x):\n x = x.unsqueeze(2) # (N, T, 1, K)\n x = x.permute(0, 3, 2, 1) # (N, K, 1, T)\n x = super(SpatialDropout, self).forward(x) # (N, K, 1, T), some features are masked\n x = x.permute(0, 3, 2, 1) # (N, T, 1, K)\n x = x.squeeze(2) # (N, T, K)\n return x\n\ndef train_model(learn,test,output_dim,lr=0.001,\n batch_size=512, n_epochs=4,\n enable_checkpoint_ensemble=True):\n \n all_test_preds = []\n checkpoint_weights = [2 ** epoch for epoch in range(n_epochs)]\n test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)\n n = len(learn.data.train_dl)\n phases = [(TrainingPhase(n).schedule_hp('lr', lr * (0.6**(i)))) for i in range(n_epochs)]\n sched = GeneralScheduler(learn, phases)\n learn.callbacks.append(sched)\n for epoch in range(n_epochs):\n learn.fit(1)\n test_preds = np.zeros((len(test), output_dim)) \n for i, x_batch in enumerate(test_loader):\n X = x_batch[0].cuda()\n y_pred = sigmoid(learn.model(X).detach().cpu().numpy())\n test_preds[i * batch_size:(i+1) * batch_size, :] = y_pred\n\n all_test_preds.append(test_preds)\n\n\n if enable_checkpoint_ensemble:\n test_preds = np.average(all_test_preds, weights=checkpoint_weights, axis=0) \n else:\n test_preds = all_test_preds[-1]\n \n return test_preds",
"_____no_output_____"
]
],
[
[
"Let's discuss the function, which is most popular in most public kernels.",
"_____no_output_____"
],
[
"In principle this functions just deletes some special characters. Which is not optimal and I will explain why in a bit. What is additionally inefficient is that later the keras tokenizer with its default parameters is used which has its own with the above function redundant behavior.",
"_____no_output_____"
]
],
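[
[
"# For reference, a representative sketch (assumed, not copied from a specific kernel)\n# of the kind of cleaning function critiqued above: it blindly blanks out a fixed\n# list of special characters, even ones the pretrained embeddings actually cover.\npuncts = '!\"#$%&()*+,-./:;<=>?@[]^_`{|}~'\n\ndef clean_special_chars(text, puncts=puncts):\n    for p in puncts:\n        text = text.replace(p, ' ')\n    return text",
"_____no_output_____"
]
],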
[
[
"train = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv')\ntest = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv')",
"_____no_output_____"
]
],
[
[
"## Preprocessing",
"_____no_output_____"
],
[
"See part1 for an explanation how I came to the list of symbols and contraction function. I copied them from that kernel.",
"_____no_output_____"
]
],
[
[
"symbols_to_isolate = '.,?!-;*\"…:—()%#$&_/@\・ω+=”“[]^–>\\\\°<~•≠™ˈʊɒ∞§{}·τα❤☺ɡ|¢→̶`❥━┣┫┗O►★©―ɪ✔®\\x96\\x92●£♥➤´¹☕≈÷♡◐║▬′ɔː€۩۞†μ✒➥═☆ˌ◄½ʻπδηλσερνʃ✬SUPERIT☻±♍µº¾✓◾؟.⬅℅»Вав❣⋅¿¬♫CMβ█▓▒░⇒⭐›¡₂₃❧▰▔◞▀▂▃▄▅▆▇↙γ̄″☹➡«φ⅓„✋:¥̲̅́∙‛◇✏▷❓❗¶˚˙)сиʿ✨。ɑ\\x80◕!%¯−flfi₁²ʌ¼⁴⁄₄⌠♭✘╪▶☭✭♪☔☠♂☃☎✈✌✰❆☙○‣⚓年∎ℒ▪▙☏⅛casǀ℮¸w‚∼‖ℳ❄←☼⋆ʒ⊂、⅔¨͡๏⚾⚽Φ×θ₩?(℃⏩☮⚠月✊❌⭕▸■⇌☐☑⚡☄ǫ╭∩╮,例>ʕɐ̣Δ₀✞┈╱╲▏▕┃╰▊▋╯┳┊≥☒↑☝ɹ✅☛♩☞AJB◔◡↓♀⬆̱ℏ\\x91⠀ˤ╚↺⇤∏✾◦♬³の|/∵∴√Ω¤☜▲↳▫‿⬇✧ovm-208'‰≤∕ˆ⚜☁'\nsymbols_to_delete = '\\n🍕\\r🐵😑\\xa0\\ue014\\t\\uf818\\uf04a\\xad😢🐶️\\uf0e0😜😎👊\\u200b\\u200e😁عدويهصقأناخلىبمغر😍💖💵Е👎😀😂\\u202a\\u202c🔥😄🏻💥ᴍʏʀᴇɴᴅᴏᴀᴋʜᴜʟᴛᴄᴘʙғᴊᴡɢ😋👏שלוםבי😱‼\\x81エンジ故障\\u2009🚌ᴵ͞🌟😊😳😧🙀😐😕\\u200f👍😮😃😘אעכח💩💯⛽🚄🏼ஜ😖ᴠ🚲‐😟😈💪🙏🎯🌹😇💔😡\\x7f👌ἐὶήιὲκἀίῃἴξ🙄H😠\\ufeff\\u2028😉😤⛺🙂\\u3000تحكسة👮💙فزط😏🍾🎉😞\\u2008🏾😅😭👻😥😔😓🏽🎆🍻🍽🎶🌺🤔😪\\x08‑🐰🐇🐱🙆😨🙃💕𝘊𝘦𝘳𝘢𝘵𝘰𝘤𝘺𝘴𝘪𝘧𝘮𝘣💗💚地獄谷улкнПоАН🐾🐕😆ה🔗🚽歌舞伎🙈😴🏿🤗🇺🇸мυтѕ⤵🏆🎃😩\\u200a🌠🐟💫💰💎эпрд\\x95🖐🙅⛲🍰🤐👆🙌\\u2002💛🙁👀🙊🙉\\u2004ˢᵒʳʸᴼᴷᴺʷᵗʰᵉᵘ\\x13🚬🤓\\ue602😵άοόςέὸתמדףנרךצט😒͝🆕👅👥👄🔄🔤👉👤👶👲🔛🎓\\uf0b7\\uf04c\\x9f\\x10成都😣⏺😌🤑🌏😯ех😲Ἰᾶὁ💞🚓🔔📚🏀👐\\u202d💤🍇\\ue613小土豆🏡❔⁉\\u202f👠》कर्मा🇹🇼🌸蔡英文🌞🎲レクサス😛外国人关系Сб💋💀🎄💜🤢َِьыгя不是\\x9c\\x9d🗑\\u2005💃📣👿༼つ༽😰ḷЗз▱ц🤣卖温哥华议会下降你失去所有的钱加拿大坏税骗子🐝ツ🎅\\x85🍺آإشء🎵🌎͟ἔ油别克🤡🤥😬🤧й\\u2003🚀🤴ʲшчИОРФДЯМюж😝🖑ὐύύ特殊作戦群щ💨圆明园קℐ🏈😺🌍⏏ệ🍔🐮🍁🍆🍑🌮🌯🤦\\u200d𝓒𝓲𝓿𝓵안영하세요ЖљКћ🍀😫🤤ῦ我出生在了可以说普通话汉语好极🎼🕺🍸🥂🗽🎇🎊🆘🤠👩🖒🚪天一家⚲\\u2006⚭⚆⬭⬯⏖新✀╌🇫🇷🇩🇪🇮🇬🇧😷🇨🇦ХШ🌐\\x1f杀鸡给猴看ʁ𝗪𝗵𝗲𝗻𝘆𝗼𝘂𝗿𝗮𝗹𝗶𝘇𝗯𝘁𝗰𝘀𝘅𝗽𝘄𝗱📺ϖ\\u2000үսᴦᎥһͺ\\u2007հ\\u2001ɩye൦lƽh𝐓𝐡𝐞𝐫𝐮𝐝𝐚𝐃𝐜𝐩𝐭𝐢𝐨𝐧Ƅᴨןᑯ໐ΤᏧ௦Іᴑ܁𝐬𝐰𝐲𝐛𝐦𝐯𝐑𝐙𝐣𝐇𝐂𝐘𝟎ԜТᗞ౦〔Ꭻ𝐳𝐔𝐱𝟔𝟓𝐅🐋ffi💘💓ё𝘥𝘯𝘶💐🌋🌄🌅𝙬𝙖𝙨𝙤𝙣𝙡𝙮𝙘𝙠𝙚𝙙𝙜𝙧𝙥𝙩𝙪𝙗𝙞𝙝𝙛👺🐷ℋ𝐀𝐥𝐪🚶𝙢Ἱ🤘ͦ💸ج패티W𝙇ᵻ👂👃ɜ🎫\\uf0a7БУі🚢🚂ગુજરાતીῆ🏃𝓬𝓻𝓴𝓮𝓽𝓼☘﴾̯﴿₽\\ue807𝑻𝒆𝒍𝒕𝒉𝒓𝒖𝒂𝒏𝒅𝒔𝒎𝒗𝒊👽😙\\u200cЛ‒🎾👹⎌🏒⛸公寓养宠物吗🏄🐀🚑🤷操美𝒑𝒚𝒐𝑴🤙🐒欢迎来到阿拉斯ספ𝙫🐈𝒌𝙊𝙭𝙆𝙋𝙍𝘼𝙅ﷻ🦄巨收赢得白鬼愤怒要买额ẽ🚗🐳𝟏𝐟𝟖𝟑𝟕𝒄𝟗𝐠𝙄𝙃👇锟斤拷𝗢𝟳𝟱𝟬⦁マルハニチロ株式社⛷한국어ㄸㅓ니͜ʖ𝘿𝙔₵𝒩ℯ𝒾𝓁𝒶𝓉𝓇𝓊𝓃𝓈𝓅ℴ𝒻𝒽𝓀𝓌𝒸𝓎𝙏ζ𝙟𝘃𝗺𝟮𝟭𝟯𝟲👋🦊多伦🐽🎻🎹⛓🏹🍷🦆为和中友谊祝贺与其想象对法如直接问用自己猜本传教士没积唯认识基督徒曾经让相信耶稣复活死怪他但当们聊些政治题时候战胜因圣把全堂结婚孩恐惧且栗谓这样还♾🎸🤕🤒⛑🎁批判检讨🏝🦁🙋😶쥐스탱트뤼도석유가격인상이경제황을렵게만들지않록잘관리해야합다캐나에서대마초와화약금의품런성분갈때는반드시허된사용🔫👁凸ὰ💲🗯𝙈Ἄ𝒇𝒈𝒘𝒃𝑬𝑶𝕾𝖙𝖗𝖆𝖎𝖌𝖍𝖕𝖊𝖔𝖑𝖉𝖓𝖐𝖜𝖞𝖚𝖇𝕿𝖘𝖄𝖛𝖒𝖋𝖂𝕴𝖟𝖈𝕸👑🚿💡知彼百\\uf005𝙀𝒛𝑲𝑳𝑾𝒋𝟒😦𝙒𝘾𝘽🏐𝘩𝘨ὼṑ𝑱𝑹𝑫𝑵𝑪🇰🇵👾ᓇᒧᔭᐃᐧᐦᑳᐨᓃᓂᑲᐸᑭᑎᓀᐣ🐄🎈🔨🐎🤞🐸💟🎰🌝🛳点击查版🍭𝑥𝑦𝑧NG👣\\uf020っ🏉ф💭🎥Ξ🐴👨🤳🦍\\x0b🍩𝑯𝒒😗𝟐🏂👳🍗🕉🐲چی𝑮𝗕𝗴🍒ꜥⲣⲏ🐑⏰鉄リ事件ї💊「」\\uf203\\uf09a\\uf222\\ue608\\uf202\\uf099\\uf469\\ue607\\uf410\\ue600燻製シ虚偽屁理屈Г𝑩𝑰𝒀𝑺🌤𝗳𝗜𝗙𝗦𝗧🍊ὺἈἡχῖΛ⤏🇳𝒙ψՁմեռայինրւդձ冬至ὀ𝒁🔹🤚🍎𝑷🐂💅𝘬𝘱𝘸𝘷𝘐𝘭𝘓𝘖𝘹𝘲𝘫کΒώ💢ΜΟΝΑΕ🇱♲𝝈↴💒⊘Ȼ🚴🖕🖤🥘📍👈➕🚫🎨🌑🐻𝐎𝐍𝐊𝑭🤖🎎😼🕷grntidufbk𝟰🇴🇭🇻🇲𝗞𝗭𝗘𝗤👼📉🍟🍦🌈🔭《🐊🐍\\uf10aლڡ🐦\\U0001f92f\\U0001f92a🐡💳ἱ🙇𝗸𝗟𝗠𝗷🥜さようなら🔼'",
"_____no_output_____"
],
[
"from nltk.tokenize.treebank import TreebankWordTokenizer\ntokenizer = TreebankWordTokenizer()\n\n\nisolate_dict = {ord(c):f' {c} ' for c in symbols_to_isolate}\nremove_dict = {ord(c):f'' for c in symbols_to_delete}\n\n\ndef handle_punctuation(x):\n x = x.translate(remove_dict)\n x = x.translate(isolate_dict)\n return x\n\ndef handle_contractions(x):\n x = tokenizer.tokenize(x)\n return x\n\ndef fix_quote(x):\n x = [x_[1:] if x_.startswith(\"'\") else x_ for x_ in x]\n x = ' '.join(x)\n return x\n\ndef preprocess(x):\n x = handle_punctuation(x)\n x = handle_contractions(x)\n x = fix_quote(x)\n return x",
"_____no_output_____"
]
],
[
[
"So lets apply that preprocess function to our text",
"_____no_output_____"
]
],
[
[
"#train['comment_text'] = train['comment_text'].progress_apply(lambda x:preprocess(x))\n#test['comment_text'] = test['comment_text'].progress_apply(lambda x:preprocess(x))",
"_____no_output_____"
],
[
"train['comment_text'].head()",
"_____no_output_____"
],
[
"x_train = train['comment_text'].progress_apply(lambda x:preprocess(x))\ny_aux_train = train[['target', 'severe_toxicity', 'obscene', 'identity_attack', 'insult', 'threat']]\nx_test = test['comment_text'].progress_apply(lambda x:preprocess(x))\n\nidentity_columns = [\n 'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',\n 'muslim', 'black', 'white', 'psychiatric_or_mental_illness']\n# Overall\nweights = np.ones((len(x_train),)) / 4\n# Subgroup\nweights += (train[identity_columns].fillna(0).values>=0.5).sum(axis=1).astype(bool).astype(np.int) / 4\n# Background Positive, Subgroup Negative\nweights += (( (train['target'].values>=0.5).astype(bool).astype(np.int) +\n (train[identity_columns].fillna(0).values<0.5).sum(axis=1).astype(bool).astype(np.int) ) > 1 ).astype(bool).astype(np.int) / 4\n# Background Negative, Subgroup Positive\nweights += (( (train['target'].values<0.5).astype(bool).astype(np.int) +\n (train[identity_columns].fillna(0).values>=0.5).sum(axis=1).astype(bool).astype(np.int) ) > 1 ).astype(bool).astype(np.int) / 4\nloss_weight = 1.0 / weights.mean()\n\ny_train = np.vstack([(train['target'].values>=0.5).astype(np.int),weights]).T\n\nmax_features = 400000\n\n",
"_____no_output_____"
]
],
[
[
"Its really important that you intitialize the keras tokenizer correctly. Per default it does lower case and removes a lot of symbols. We want neither of that!",
"_____no_output_____"
]
],
[
[
"tokenizer = text.Tokenizer(num_words = max_features, filters='',lower=False)",
"_____no_output_____"
],
[
"\ntokenizer.fit_on_texts(list(x_train) + list(x_test))\n\ncrawl_matrix, unknown_words_crawl = build_matrix(tokenizer.word_index, CRAWL_EMBEDDING_PATH)\nprint('n unknown words (crawl): ', len(unknown_words_crawl))\n\nglove_matrix, unknown_words_glove = build_matrix(tokenizer.word_index, GLOVE_EMBEDDING_PATH)\nprint('n unknown words (glove): ', len(unknown_words_glove))\n\nmax_features = max_features or len(tokenizer.word_index) + 1\nmax_features\n\nembedding_matrix = np.concatenate([crawl_matrix, glove_matrix], axis=-1)\nembedding_matrix.shape\n\ndel crawl_matrix\ndel glove_matrix\ngc.collect()\n\n# x_train_torch = torch.tensor(x_train, dtype=torch.long)\ny_train_torch = torch.tensor(np.hstack([y_train, y_aux_train]), dtype=torch.float32)",
"n unknown words (crawl): 148783\nn unknown words (glove): 152316\n"
]
],
[
[
"# Sequence Bucketing",
"_____no_output_____"
]
],
[
[
"x_train = tokenizer.texts_to_sequences(x_train)\nx_test = tokenizer.texts_to_sequences(x_test)",
"_____no_output_____"
],
[
"lengths = torch.from_numpy(np.array([len(x) for x in x_train]))\n\n#maxlen = lengths.max() \nmaxlen = 300\nx_train_padded = torch.from_numpy(sequence.pad_sequences(x_train, maxlen=maxlen))\nx_train_padded.shape",
"_____no_output_____"
],
[
"class SequenceBucketCollator():\n def __init__(self, choose_length, sequence_index, length_index, label_index=None):\n self.choose_length = choose_length\n self.sequence_index = sequence_index\n self.length_index = length_index\n self.label_index = label_index\n \n def __call__(self, batch):\n batch = [torch.stack(x) for x in list(zip(*batch))]\n \n sequences = batch[self.sequence_index]\n lengths = batch[self.length_index]\n \n length = self.choose_length(lengths)\n mask = torch.arange(start=maxlen, end=0, step=-1) < length\n padded_sequences = sequences[:, mask]\n \n batch[self.sequence_index] = padded_sequences\n \n if self.label_index is not None:\n return [x for i, x in enumerate(batch) if i != self.label_index], batch[self.label_index]\n \n return batch",
"_____no_output_____"
]
],
[
[
"Method 1 is quite a bit solower than the rest, but method 2 and 3 are pretty close to each other (keep in mind that the majority of the time it takes to train the NN is spent in the actual computation anyway, not while loading). I am going to use method 3 because it is much more elegant and can be used as a drop-in replacement to static padding.",
"_____no_output_____"
],
[
"The `train_model` function is exactly the same. The NN itself is also only slightly different. It also accepts an optional `lengths` parameter because lengths are part of the dataset now.",
"_____no_output_____"
]
],
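[
[
"# Tiny sanity check of the collator (my addition; assumes the cells above have run):\n# a batch should be trimmed to the longest sequence in that batch, not to maxlen.\ndemo_collator = SequenceBucketCollator(lambda lengths: lengths.max(), sequence_index=0, length_index=1, label_index=2)\ndemo_batch = [(x_train_padded[i], lengths[i], y_train_torch[i]) for i in range(4)]\n(demo_x, demo_lengths), demo_y = demo_collator(demo_batch)\nprint(demo_x.shape, 'vs static maxlen', maxlen)",
"_____no_output_____"
]
],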
[
[
"class NeuralNet(nn.Module):\n def __init__(self, embedding_matrix, num_aux_targets):\n super(NeuralNet, self).__init__()\n embed_size = embedding_matrix.shape[1]\n \n self.embedding = nn.Embedding(max_features, embed_size)\n self.embedding.weight = nn.Parameter(torch.tensor(embedding_matrix, dtype=torch.float32))\n self.embedding.weight.requires_grad = False\n self.embedding_dropout = SpatialDropout(0.3)\n \n self.lstm1 = nn.LSTM(embed_size, LSTM_UNITS, bidirectional=True, batch_first=True)\n self.lstm2 = nn.LSTM(LSTM_UNITS * 2, LSTM_UNITS, bidirectional=True, batch_first=True)\n \n self.linear1 = nn.Linear(DENSE_HIDDEN_UNITS, DENSE_HIDDEN_UNITS)\n self.linear2 = nn.Linear(DENSE_HIDDEN_UNITS, DENSE_HIDDEN_UNITS)\n \n self.linear_out = nn.Linear(DENSE_HIDDEN_UNITS, 1)\n self.linear_aux_out = nn.Linear(DENSE_HIDDEN_UNITS, num_aux_targets)\n \n def forward(self, x, lengths=None):\n h_embedding = self.embedding(x.long())\n h_embedding = self.embedding_dropout(h_embedding)\n \n h_lstm1, _ = self.lstm1(h_embedding)\n h_lstm2, _ = self.lstm2(h_lstm1)\n \n # global average pooling\n avg_pool = torch.mean(h_lstm2, 1)\n # global max pooling\n max_pool, _ = torch.max(h_lstm2, 1)\n \n h_conc = torch.cat((max_pool, avg_pool), 1)\n h_conc_linear1 = F.relu(self.linear1(h_conc))\n h_conc_linear2 = F.relu(self.linear2(h_conc))\n \n hidden = h_conc + h_conc_linear1 + h_conc_linear2\n \n result = self.linear_out(hidden)\n aux_result = self.linear_aux_out(hidden)\n out = torch.cat([result, aux_result], 1)\n \n return out",
"_____no_output_____"
]
],
[
[
"# Training",
"_____no_output_____"
],
[
"For training in this kernel, I will use sequence bucketing with maximum length.",
"_____no_output_____"
],
[
"Now we can instantiate a test, train and valid dataset and train the network. The validation dataset is only added so that the fast.ai DataBunch works as expected and it consists of only 2 samples.",
"_____no_output_____"
]
],
[
[
"# lengths = torch.from_numpy(np.array([len(x) for x in x_train]))\ntest_lengths = torch.from_numpy(np.array([len(x) for x in x_test]))\n# maxlen = 299\n\n# x_train_padded = torch.from_numpy(sequence.pad_sequences(x_train, maxlen=maxlen))\nx_test_padded = torch.from_numpy(sequence.pad_sequences(x_test, maxlen=maxlen))",
"_____no_output_____"
],
[
"batch_size = 512\ntest_dataset = data.TensorDataset(x_test_padded, test_lengths)\ntrain_dataset = data.TensorDataset(x_train_padded, lengths, y_train_torch)\nvalid_dataset = data.Subset(train_dataset, indices=[0, 1])\n\ntrain_collator = SequenceBucketCollator(lambda lenghts: lenghts.max(), \n sequence_index=0, \n length_index=1, \n label_index=2)\ntest_collator = SequenceBucketCollator(lambda lenghts: lenghts.max(), sequence_index=0, length_index=1)\n\ntrain_loader = data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=train_collator)\nvalid_loader = data.DataLoader(valid_dataset, batch_size=batch_size, shuffle=False, collate_fn=train_collator)\ntest_loader = data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, collate_fn=test_collator)\n\ndatabunch = DataBunch(train_dl=train_loader, valid_dl=valid_loader, collate_fn=train_collator)",
"_____no_output_____"
],
[
"def custom_loss(data, targets):\n ''' Define custom loss function for weighted BCE on 'target' column '''\n bce_loss_1 = nn.BCEWithLogitsLoss(weight=targets[:,1:2])(data[:,:1],targets[:,:1])\n bce_loss_2 = nn.BCEWithLogitsLoss()(data[:,1:],targets[:,2:])\n return (bce_loss_1 * loss_weight) + bce_loss_2",
"_____no_output_____"
]
],
[
[
"Now, train the model and see that it is faster than before!\n\nOn my local machine, one epoch with statically padded sequences takes 7:25 to train (445 seconds). With sequence bucketing, one batch takes 6:26 (386 seconds). So the version with sequence bucketing is 1.15x faster.",
"_____no_output_____"
]
],
[
[
"all_test_preds = []\n\nfor model_idx in range(NUM_MODELS):\n print('Model ', model_idx)\n seed_everything(1 + model_idx)\n model = NeuralNet(embedding_matrix, y_aux_train.shape[-1])\n learn = Learner(databunch, model, loss_func=custom_loss)\n test_preds = train_model(learn,test_dataset,output_dim=7) \n all_test_preds.append(test_preds)",
"Model 0\nepoch train_loss valid_loss time \n0 0.270631 0.014791 10:32 \nepoch train_loss valid_loss time \n0 0.277120 0.014462 10:32 \nepoch train_loss valid_loss time \n0 0.268385 0.013938 10:34 \nepoch train_loss valid_loss time \n0 0.254095 0.011693 10:35 \nModel 1\nepoch train_loss valid_loss time \n0 0.280602 0.005935 10:35 \nepoch train_loss valid_loss time \n0 0.267733 0.009191 10:31 \nepoch train_loss valid_loss time \n0 0.262522 0.009077 10:30 \nepoch train_loss valid_loss time \n0 0.256765 0.010041 10:29 \n"
],
[
"submission = pd.DataFrame.from_dict({\n 'id': test['id'],\n 'prediction': np.mean(all_test_preds, axis=0)[:, 0]\n})\n\nsubmission.to_csv('submission.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
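[
"code"
],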
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
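[
"code"
],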
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
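[
"code"
],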
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e728c4b94034224edd55165d565c87065ac0e6b9 | 5,525 | ipynb | Jupyter Notebook | notebooks/low_frequency/detect_ieq.ipynb | rice-solar-physics/synthetic-observables-paper-models | 80f68bceb7ecbcd238c196e3cc07d19e88617720 | [
"MIT"
] | null | null | null | notebooks/low_frequency/detect_ieq.ipynb | rice-solar-physics/synthetic-observables-paper-models | 80f68bceb7ecbcd238c196e3cc07d19e88617720 | [
"MIT"
] | 8 | 2019-06-11T10:32:49.000Z | 2021-10-19T19:51:00.000Z | notebooks/low_frequency/detect_ieq.ipynb | rice-solar-physics/synthetic-observables-paper-models | 80f68bceb7ecbcd238c196e3cc07d19e88617720 | [
"MIT"
] | null | null | null | 26.309524 | 210 | 0.575747 | [
[
[
"# Calculate Detector Counts: Low-frequency Nanoflares and Ionization Equilibrium",
"_____no_output_____"
]
],
[
[
"import os\n\nimport numpy as np\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\nimport matplotlib.pyplot as plt\nimport matplotlib.colors\nimport dask\nimport distributed\n\nimport synthesizAR\nfrom synthesizAR.instruments import InstrumentSDOAIA\nfrom synthesizAR.atomic import EmissionModel\nimport synthesizAR.maps\n\n%matplotlib inline",
"_____no_output_____"
],
[
"client = distributed.Client()\nclient",
"_____no_output_____"
],
[
"field = synthesizAR.Field.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/low_frequency/field_checkpoint/')",
"_____no_output_____"
],
[
"em_model = EmissionModel.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/base_emission_model.json')",
"_____no_output_____"
],
[
"em_model.calculate_ionization_fraction(field, '/storage-home/w/wtb2/data/timelag_synthesis_v2/low_frequency/ieq/ionization_fractions.h5')",
"_____no_output_____"
],
[
"em_model.save('/storage-home/w/wtb2/data/timelag_synthesis_v2/low_frequency/ieq/emission_model.json')",
"_____no_output_____"
],
[
"em_model = EmissionModel.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/low_frequency/ieq/emission_model.json')",
"_____no_output_____"
],
[
"aia = InstrumentSDOAIA([0, 30000]*u.s,)",
"_____no_output_____"
],
[
"observer = synthesizAR.Observer(field, [aia], parallel=True)",
"_____no_output_____"
],
[
"observer.build_detector_files('/storage-home/w/wtb2/data/timelag_synthesis_v2/low_frequency/ieq', ds=0.5*u.Mm)",
"/storage-home/w/wtb2/anaconda3/envs/synthesizar/lib/python3.6/site-packages/scipy/interpolate/_fitpack_impl.py:299: RuntimeWarning: The maximal number of iterations (20) allowed for finding smoothing\nspline with fp=s has been reached. Probable cause: s too small.\n(abs(fp-s)/s>0.001)\n warnings.warn(RuntimeWarning(_iermess[ier][0]))\n/storage-home/w/wtb2/anaconda3/envs/synthesizar/lib/python3.6/site-packages/scipy/interpolate/_fitpack_impl.py:299: RuntimeWarning: A theoretically impossible result when finding a smoothing spline\nwith fp = s. Probable cause: s too small. (abs(fp-s)/s>0.001)\n warnings.warn(RuntimeWarning(_iermess[ier][0]))\n"
],
[
"futures = observer.flatten_detector_counts(emission_model=em_model)",
"_____no_output_____"
],
[
"bin_futures = observer.bin_detector_counts('/storage-home/w/wtb2/data/timelag_synthesis_v2/low_frequency/ieq')",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e728c6dccfe6b12f2a71c5b3b98623b97c0c5e5b | 12,089 | ipynb | Jupyter Notebook | Python/04_Image_Display.ipynb | Ivana144/SimpleITK-Notebooks | 7ae207c1240c90babab761010865f94468cd8828 | [
"Apache-2.0"
] | null | null | null | Python/04_Image_Display.ipynb | Ivana144/SimpleITK-Notebooks | 7ae207c1240c90babab761010865f94468cd8828 | [
"Apache-2.0"
] | null | null | null | Python/04_Image_Display.ipynb | Ivana144/SimpleITK-Notebooks | 7ae207c1240c90babab761010865f94468cd8828 | [
"Apache-2.0"
] | null | null | null | 36.086567 | 368 | 0.638597 | [
[
[
"<h1 align=\"center\">Image Display</h1>\n\nThe native SimpleITK approach to displaying images is to use an external viewing program. In the notebook environment it is convenient to use matplotlib to display inline images and if the need arises we can implement some reasonably rich inline graphical user interfaces, combining control components from the ipywidgets package and matplotlib based display.\n\nIn this notebook we cover the usage of external programs and matplotlib for viewing images. We also instantiate a more involved inline interface that uses ipywidgets to control display. For the latter type of moderately complex display, used in many of the notebooks, take a look at the [gui.py](gui.py) file.",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function\n\nimport SimpleITK as sitk\n\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport gui\n\n# Utility method that either downloads data from the Girder repository or\n# if already downloaded returns the file name for reading from disk (cached data).\n%run update_path_to_download_script\nfrom downloaddata import fetch_data as fdata",
"_____no_output_____"
]
],
[
[
"## Image Display with An External Viewer\n\nSimpleITK provides two options for invoking an external viewer, use a procedural interface or an object oriented one. \n\n### Procedural interface\nSimpleITK provides a built in ``Show`` method. This function writes the image out to disk and than launches a program for visualization. By default it is configured to use the Fiji program, because it readily supports many medical image formats and loads quickly. However, the ``Show`` visualization program is easily customizable via environment variables:\n\n<ul>\n<li>SITK_SHOW_COMMAND: Viewer to use (<a href=\"http://www.itksnap.org\">ITK-SNAP</a>, <a href=\"http://www.slicer.org\">3D Slicer</a>...) </li>\n<li>SITK_SHOW_COLOR_COMMAND: Viewer to use when displaying color images.</li>\n<li>SITK_SHOW_3D_COMMAND: Viewer to use for 3D images.</li>\n</ul>\n\nIn general, the Show command accepts three parameters: (1) image to display; (2) window title; (3) boolean specifying whether to print the invoked command and additional debugging information.",
"_____no_output_____"
]
],
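[
[
"# Programmatic alternative (my addition) to the %env magic used below: the same\n# environment variables can be set via os.environ before calling Show. The viewer\n# path here is an assumption; adjust it for your system.\nimport os\n\nos.environ['SITK_SHOW_COMMAND'] = '/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx'\n# sitk.Show(mr_image)  # would now launch the viewer configured above",
"_____no_output_____"
]
],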
[
[
"mr_image = sitk.ReadImage(fdata('training_001_mr_T1.mha'))",
"_____no_output_____"
],
[
"sitk.Show?",
"_____no_output_____"
],
[
"try:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print('SimpleITK Show method could not find the viewer (ImageJ not installed or ' +\n 'environment variable pointing to non existant viewer).')",
"_____no_output_____"
]
],
[
[
"Use a different viewer by setting environment variable(s). Do this from within your Jupyter notebook using 'magic' functions, or set in a more permanent manner using your OS specific convention. ",
"_____no_output_____"
]
],
[
[
"%env SITK_SHOW_COMMAND /Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP \n\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print('SimpleITK Show method could not find the viewer (ITK-SNAP not installed or ' +\n 'environment variable pointing to non existant viewer).')",
"_____no_output_____"
],
[
"%env SITK_SHOW_COMMAND '/Applications/ImageJ/ImageJ.app/Contents/MacOS/JavaApplicationStub'\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print('SimpleITK Show method could not find the viewer (ImageJ not installed or ' +\n 'environment variable pointing to non existant viewer).')",
"_____no_output_____"
],
[
"%env SITK_SHOW_COMMAND '/Applications/Slicer.app/Contents/MacOS/Slicer'\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print('SimpleITK Show method could not find the viewer (Slicer not installed or ' +\n 'environment variable pointing to non existant viewer).')",
"_____no_output_____"
]
],
[
[
"### Object Oriented interface\n\nThe [Image Viewer](https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1ImageViewer.html) class provides a more standard approach to controlling image viewing by setting various instance variable values. Also, it ensures that all of your viewing settings are documented, as they are part of the code and not external environment variables.\n\nA caveat to this is that if you have set various environment variables to control SimpleITK settings, the image viewer will use these settings as the default ones and not the standard defaults (Fiji as viewer etc.).",
"_____no_output_____"
]
],
[
[
"# Which external viewer will the image_viewer use if we don't specify the external viewing application? \n# (see caveat above)\nimage_viewer = sitk.ImageViewer()\nimage_viewer.SetApplication('/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx')\nimage_viewer.SetTitle('MR image')\n\n# Use the default image viewer.\nimage_viewer.Execute(mr_image)\n\n# Change viewer, and display again.\nimage_viewer.SetApplication('/Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP')\nimage_viewer.Execute(mr_image)\n\n# Change the viewer command, (use ITK-SNAP -z option to open the image in zoomed mode)\nimage_viewer.SetCommand('/Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP -z 3')\nimage_viewer.Execute(mr_image)\n\nprint('Default format for saved file used in display: ' + image_viewer.GetFileExtension())\n\n# Change the file format (possibly to make it compatible with your viewer of choice)\nimage_viewer.SetFileExtension('.nrrd')\nimage_viewer.Execute(mr_image)",
"_____no_output_____"
]
],
[
[
"## Inline display with matplotlib",
"_____no_output_____"
]
],
[
[
"mr_image = sitk.ReadImage(fdata('training_001_mr_T1.mha'))\nnpa = sitk.GetArrayViewFromImage(mr_image)\n\n# Display the image slice from the middle of the stack, z axis\nz = int(mr_image.GetDepth()/2)\nnpa_zslice = sitk.GetArrayViewFromImage(mr_image)[z,:,:]\n\n# Three plots displaying the same data, how do we deal with the high dynamic range?\nfig = plt.figure(figsize=(10,3))\n\nfig.add_subplot(1,3,1)\nplt.imshow(npa_zslice)\nplt.title('default colormap', fontsize=10)\nplt.axis('off')\n\nfig.add_subplot(1,3,2)\nplt.imshow(npa_zslice,cmap=plt.cm.Greys_r);\nplt.title('grey colormap', fontsize=10)\nplt.axis('off')\n\nfig.add_subplot(1,3,3)\nplt.title('grey colormap,\\n scaling based on volumetric min and max values', fontsize=10)\nplt.imshow(npa_zslice,cmap=plt.cm.Greys_r, vmin=npa.min(), vmax=npa.max())\nplt.axis('off');",
"_____no_output_____"
],
[
"# Display the image slice in the middle of the stack, x axis\n \nx = int(mr_image.GetWidth()/2)\n\nnpa_xslice = npa[:,:,x]\nplt.figure(figsize=(10,2))\nplt.imshow(npa_xslice, cmap=plt.cm.Greys_r)\nplt.axis('off')\n\nprint('Image spacing: {0}'.format(mr_image.GetSpacing()))",
"_____no_output_____"
],
[
"# Collapse along the x axis\nextractSliceFilter = sitk.ExtractImageFilter() \nsize = list(mr_image.GetSize())\nsize[0] = 0\nextractSliceFilter.SetSize( size )\n \nindex = (x, 0, 0)\nextractSliceFilter.SetIndex(index)\nsitk_xslice = extractSliceFilter.Execute(mr_image)\n\n# Resample slice to isotropic\noriginal_spacing = sitk_xslice.GetSpacing()\noriginal_size = sitk_xslice.GetSize()\n\nmin_spacing = min(sitk_xslice.GetSpacing())\nnew_spacing = [min_spacing, min_spacing]\nnew_size = [int(round(original_size[0]*(original_spacing[0]/min_spacing))), \n int(round(original_size[1]*(original_spacing[1]/min_spacing)))]\nresampleSliceFilter = sitk.ResampleImageFilter()\nresampleSliceFilter.SetSize(new_size)\nresampleSliceFilter.SetTransform(sitk.Transform())\nresampleSliceFilter.SetInterpolator(sitk.sitkNearestNeighbor)\nresampleSliceFilter.SetOutputOrigin(sitk_xslice.GetOrigin())\nresampleSliceFilter.SetOutputSpacing(new_spacing)\nresampleSliceFilter.SetOutputDirection(sitk_xslice.GetDirection())\nresampleSliceFilter.SetDefaultPixelValue(0)\nresampleSliceFilter.SetOutputPixelType(sitk_xslice.GetPixelID())\n\n# Why is the image pixelated?\nsitk_isotropic_xslice = resampleSliceFilter.Execute(sitk_xslice)\nplt.figure(figsize=(10,2))\nplt.imshow(sitk.GetArrayViewFromImage(sitk_isotropic_xslice), cmap=plt.cm.Greys_r)\nplt.axis('off')\nprint('Image spacing: {0}'.format(sitk_isotropic_xslice.GetSpacing()))",
"_____no_output_____"
]
],
[
[
"## Inline display with matplotlib and ipywidgets\n\nDisplay two volumes side by side, with sliders to control the displayed slice. The menu on the bottom left allows you to home (return to original view), back and forward between views, pan, zoom and save a view. \n\nA variety of interfaces combining matplotlib display and ipywidgets can be found in the [gui.py](gui.py) file.\n",
"_____no_output_____"
]
],
[
[
"ct_image = sitk.ReadImage(fdata('training_001_ct.mha'))\nct_window_level = [720,80]\nmr_window_level = [790,395]",
"_____no_output_____"
],
[
"gui.MultiImageDisplay([mr_image, ct_image], figure_size=(10,3), window_level_list=[mr_window_level,ct_window_level]);",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
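[
"code"
],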
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e728cb81e0c2cfc26cd8666ecfeb14886ef440b7 | 37,347 | ipynb | Jupyter Notebook | ufabc/img_with_text.ipynb | BrunoASNascimento/image-processing-study | f7fc770851552190a22430162785201a299a1bc4 | [
"MIT"
] | null | null | null | ufabc/img_with_text.ipynb | BrunoASNascimento/image-processing-study | f7fc770851552190a22430162785201a299a1bc4 | [
"MIT"
] | null | null | null | ufabc/img_with_text.ipynb | BrunoASNascimento/image-processing-study | f7fc770851552190a22430162785201a299a1bc4 | [
"MIT"
] | null | null | null | 247.331126 | 32,643 | 0.904865 | [
[
[
"<a href=\"https://colab.research.google.com/github/BrunoASNascimento/image-processing-study/blob/master/img_with_text.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from PIL import ImageFont, ImageDraw, Image\nfrom google.colab.patches import cv2_imshow\nimport numpy as np\nimport cv2",
"_____no_output_____"
],
[
"text= \"TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST\"",
"_____no_output_____"
],
[
"len(text)",
"_____no_output_____"
],
[
"def fix_text(text,step):\n text_list = text.split(' ')\n dinamic_control =0\n text_finaly =[]\n for text_str in text_list:\n dinamic_control +=len(text_str)\n if dinamic_control <=step:\n text_finaly.append(text_str)\n else:\n text_finaly.append(''.join(['\\n', text_str]))\n dinamic_control=0\n return ' '.join(text_finaly)",
"_____no_output_____"
],
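[
"# Quick check of fix_text (my addition): a line break is inserted once the running\n# character count of the words exceeds the given step.\nprint(fix_text('TEST TEST TEST TEST TEST', 10))",
"_____no_output_____"
],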
[
"image = cv2.imread(\"/content/img_test.png\")\n\n# Convert to PIL Image\ncv2_im_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\npil_im = Image.fromarray(cv2_im_rgb)\n\ndraw = ImageDraw.Draw(pil_im)\n\n# use a bitmap font\nfont = ImageFont.truetype(font='/content/arial.ttf', size=15, index=0)\n\n# Draw the text\ndraw.text((330, 200), fix_text(text,30), font=font,fill=(255,0,0))\n\n# Save the image\ncv2_im_processed = cv2.cvtColor(np.array(pil_im), cv2.COLOR_RGB2BGR)\ncv2.imwrite(\"result.png\", cv2_im_processed)\n\ncv2_imshow(cv2_im_processed)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e728df11599c6b17f0880da73d2965efcdcc44cf | 43,158 | ipynb | Jupyter Notebook | notebooks/analysis/validate_biomass_training_data.ipynb | carbonplan/trace | 5cf113891bdefa29c2afd4478dff099e0458c82c | [
"MIT"
] | 14 | 2021-02-15T22:40:52.000Z | 2022-02-24T15:25:28.000Z | notebooks/analysis/validate_biomass_training_data.ipynb | carbonplan/trace | 5cf113891bdefa29c2afd4478dff099e0458c82c | [
"MIT"
] | 75 | 2021-02-11T17:57:42.000Z | 2022-03-22T00:47:57.000Z | notebooks/analysis/validate_biomass_training_data.ipynb | carbonplan/trace | 5cf113891bdefa29c2afd4478dff099e0458c82c | [
"MIT"
] | 2 | 2021-09-28T01:51:19.000Z | 2021-11-22T21:32:35.000Z | 32.745068 | 286 | 0.500487 | [
[
[
"%load_ext autoreload\n%autoreload 2\nimport fsspec\nimport numpy as np\nimport pandas as pd\nimport xarray as xr\nfrom carbonplan.data import cat\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings('ignore')\nimport matplotlib\nfrom carbonplan_data import utils\nfrom matplotlib import cm\nfrom sklearn.metrics import r2_score, mean_absolute_error, mean_absolute_percentage_error\nimport rioxarray\n\nfrom carbonplan_trace.v1 import utils as trace_utils\nfrom carbonplan_trace.v1 import glas_allometric_eq as allo\n\nimport os\nfrom carbonplan_trace.v0.data import cat as trace_cat\n\nmatplotlib.rc('font', family='sans-serif') \nmatplotlib.rc('font', serif='Helvetica Neue') \nmatplotlib.rc('text', usetex='false') \nmatplotlib.rcParams.update({'font.size': 14, \"svg.fonttype\": \"none\"})\n\nfrom gcsfs import GCSFileSystem\nfs = GCSFileSystem(cache_timeout=0)\n\nimport seaborn as sns\nsns.set_theme()\n",
"_____no_output_____"
]
],
[
[
"# Open data\n\n- from this study\n- from Lidar validation data sets (Margolis et al 2015 and Neigh et al 2015)\n- Spawn et al 2020 (for year 2010)\n- Hansen et al 2007 (for year 2000)\n\nTo do:\n\n- Jon Wang\n- FIA\n",
"_____no_output_____"
]
],
[
[
"def open_biomass_data(tiles=None, min_lat=-90, max_lat=90, min_lon=-180, max_lon=180):\n folder = \"s3://carbonplan-climatetrace/v1/data/intermediates/biomass/\"\n if not tiles:\n s3fs = fsspec.get_filesystem_class(\"s3\")()\n tiles = [\n os.path.splitext(os.path.split(path)[-1])[0]\n for path in s3fs.ls(folder)\n if not path.endswith(\"/\")\n ]\n uris = [f\"{folder}{tile}.zarr\" for tile in tiles]\n ds_list = []\n for uri in uris:\n try:\n mapper = fsspec.get_mapper(uri)\n ds = xr.open_zarr(mapper, consolidated=True)\n ds = ds.stack(unique_index=(\"record_index\", \"shot_number\")).dropna(\n dim=\"unique_index\", how=\"any\", subset=[\"lat\", \"lon\", \"biomass\"]\n )\n ds = ds.drop_vars(\"spatial_ref\")\n ds.attrs[\"crs\"] = \"EPSG:4326\"\n ds = trace_utils.subset_data_for_bounding_box(ds, min_lat, max_lat, min_lon, max_lon)\n ds_list.append(\n ds[\n [\n \"biomass\",\n \"lat\",\n \"lon\",\n \"allometric_eq\",\n \"ecoregion\",\n \"igbp\",\n ]\n ].where(ds.igbp.isin([1, 2, 3, 4, 5, 8, 9]), drop=True)\n )\n except KeyError:\n print(f\"did not find {uri}\")\n except ZeroDivisionError:\n print(f\"{uri} do not have any data\")\n\n ds = xr.concat(ds_list, dim=\"unique_index\", data_vars=\"minimal\").chunk({\"unique_index\": 2000})\n for k in ds:\n _ = ds[k].encoding.pop(\"chunks\", None)\n\n return ds\n\n\ndef turn_point_cloud_to_grid(ds, format_grid):\n gridded_df = format_grid.sel(lat=ds.lat, lon=ds.lon, method=\"nearest\")\n gridded_df[\"biomass\"] = ds[\"biomass\"]\n gridded_df = (\n gridded_df.to_dataframe()\n .reset_index(drop=True)\n .groupby([\"lat\", \"lon\"])\n .biomass.mean()\n .reset_index()\n .dropna()\n )\n pivot = gridded_df.pivot(columns=\"lon\", index=\"lat\", values=\"biomass\").reindex(\n index=format_grid.lat.values, columns=format_grid.lon.values\n )\n ds_grid = xr.DataArray(\n data=pivot.values,\n dims=[\"lat\", \"lon\"],\n coords=[format_grid.coords[\"lat\"], format_grid.coords[\"lon\"]],\n )\n ds_grid = ds_grid.to_dataset(name=\"biomass\", promote_attrs=True)\n\n return ds_grid\n\n\ndef open_spawn_data(\n min_lat=-90,\n max_lat=90,\n min_lon=-180,\n max_lon=180,\n coarsen=10,\n to_coarsen=False,\n preprocessed=True,\n):\n # for year 2010\n # https://daac.ornl.gov/VEGETATION/guides/Global_Maps_C_Density_2010.html\n # in a different crs, units Mg C/ha\n\n if preprocessed:\n spawn = trace_utils.open_zarr_file(\n \"s3://carbonplan-climatetrace/intermediate/spawn_biomass_3km.zarr\"\n )\n spawn = spawn.sel(lat=slice(max_lat, min_lat), lon=slice(min_lon, max_lon))\n\n else:\n spawn = xr.open_rasterio(\n \"gs://carbonplan-data/raw/2010-harmonized-biomass/global/300m/aboveground.tif\"\n ).chunk({\"x\": 5120, \"y\": 5120})\n\n spawn = 0.2 * (spawn.rename({\"x\": \"lon\", \"y\": \"lat\"}).squeeze(drop=True))\n spawn = spawn.sel(lat=slice(max_lat, min_lat), lon=slice(min_lon, max_lon))\n if to_coarsen:\n spawn = spawn.coarsen(lat=coarsen, lon=coarsen, boundary=\"trim\").mean().compute()\n spawn = spawn.to_dataset(name=\"biomass\", promote_attrs=True)\n\n return spawn\n\n\ndef open_hansen_data(\n tiles=None,\n min_lat=-90,\n max_lat=90,\n min_lon=-180,\n max_lon=180,\n coarsen=100,\n to_coarsen=False,\n preprocessed=True,\n):\n # https://data.globalforestwatch.org/datasets/8f93a6f94a414f9588ce4657a39c59ff_1?geometry=-146.250%2C-66.901%2C146.250%2C82.366\n # units in Mg biomass/ha\n\n if preprocessed:\n hansen = trace_utils.open_zarr_file(\n \"s3://carbonplan-climatetrace/intermediate/hansen_biomass_3km.zarr\"\n )\n hansen = hansen.sel(lat=slice(min_lat, max_lat), lon=slice(min_lon, 
max_lon))\n\n    else:\n        hansen = []\n        failed = []\n        for tile in tiles:\n            try:\n                lat, lon = trace_utils.get_lat_lon_tags_from_tile_path(tile)\n                # get Hansen data\n                hansen_tile = trace_cat.gfw_biomass(lat=lat, lon=lon).to_dask()\n                hansen_tile = hansen_tile.rename({\"x\": \"lon\", \"y\": \"lat\"}).squeeze(drop=True)\n                if to_coarsen:\n                    hansen_tile = (\n                        hansen_tile.coarsen(lat=coarsen, lon=coarsen, boundary=\"trim\")\n                        .mean()\n                        .compute()\n                    )\n                hansen.append(hansen_tile.to_dataset(name=\"biomass\", promote_attrs=True))\n            except:\n                print(tile + \"failed\")\n                failed.append(tile)\n\n    failed = []\n    for tile in failed:\n        lat, lon = trace_utils.get_lat_lon_tags_from_tile_path(tile)\n        # get Hansen data\n        hansen_tile = trace_cat.hansen_biomass(lat=lat, lon=lon).to_dask()\n        hansen_tile = hansen_tile.rename({\"x\": \"lon\", \"y\": \"lat\"}).squeeze(drop=True)\n        if to_coarsen:\n            hansen_tile = (\n                hansen_tile.coarsen(lat=coarsen, lon=coarsen, boundary=\"trim\").mean().compute()\n            )\n        hansen.append(hansen_tile.to_dataset(name=\"biomass\", promote_attrs=True))\n\n    hansen = xr.combine_by_coords(hansen, combine_attrs=\"drop_conflicts\").chunk(\n        {\"lat\": 5120, \"lon\": 5120}\n    )\n\n    return hansen",
"_____no_output_____"
],
[
"import pandas as pd\n\nrename_dict = {\n \"Glas record index\": \"record_index\",\n \"rec_ndx\": \"record_index\",\n \"Shotn\": \"shot_number\",\n \"shotn\": \"shot_number\",\n \"lngtd\": \"lon\",\n \"h14\": \"VH\",\n \"fslope\": \"f_slope\",\n \"Senergy\": \"senergy\",\n \"h25\": \"h25_Neigh\",\n \"h50\": \"h50_Neigh\",\n \"h75\": \"h75_Neigh\",\n \"h90\": \"h90_Neigh\",\n \"glas_biom_str\": \"biomass\",\n}\n\n\ndef convert_df_to_xr(df):\n df[\"unique_index\"] = (\n df.record_index.astype(str).str.zfill(9) + \"_\" + df.shot_number.astype(str).str.zfill(2)\n )\n df.set_index([\"record_index\", \"shot_number\"], inplace=True)\n\n ds = {}\n for c in df.columns:\n ds[c] = xr.DataArray(\n df[c].values,\n dims=[\"unique_index\"],\n coords={\"unique_index\": df.unique_index.values},\n )\n ds = xr.Dataset(ds)\n ds.coords[\"unique_index\"] = df.index\n return ds\n\n\ndef open_margolis_data(min_lat=-90, max_lat=90, min_lon=-180, max_lon=180):\n files = [\n \"gs://carbonplan-climatetrace/inputs/boreal_lidar_biomass/Alaska_BBB3_hg_L3c_L3f_wPALSbiom.txt\",\n \"gs://carbonplan-climatetrace/inputs/boreal_lidar_biomass/Canada_east_BBB3_hg_L3c_L3f_wPALSbiom.txt\",\n \"gs://carbonplan-climatetrace/inputs/boreal_lidar_biomass/Canada_west_BBB3_hg_L3c_L3f_wPALSbiom.txt\",\n ]\n\n margolis = []\n for file in files:\n with fs.open(file) as f:\n data = np.genfromtxt(f, skip_header=8)\n with fs.open(file) as f:\n lines = f.readlines()\n headers = lines[7]\n headers = [c for c in headers.decode(\"utf-8\").strip().split(\" \") if c != \"\"]\n margolis.append(pd.DataFrame(data=data, columns=headers))\n margolis = pd.concat(margolis)\n for c in margolis:\n if c in rename_dict:\n margolis.rename(columns={c: rename_dict[c]}, inplace=True)\n\n margolis = margolis.replace(-9999, np.nan).replace(99999, np.nan)\n margolis = convert_df_to_xr(margolis)\n margolis = trace_utils.subset_data_for_bounding_box(\n margolis, min_lat, max_lat, min_lon, max_lon\n )\n\n return margolis",
"_____no_output_____"
],
[
"min_lat = -90\nmax_lat = 90\nmin_lon = -180\nmax_lon = 180\n\ntiles = trace_utils.find_tiles_for_bounding_box(\n min_lat=min_lat, max_lat=max_lat, min_lon=min_lon, max_lon=max_lon\n)\nprint(\"loading hansen\")\nhansen = open_hansen_data(\n preprocessed=True,\n min_lat=min_lat,\n max_lat=max_lat,\n min_lon=min_lon,\n max_lon=max_lon,\n)\nprint(\"loading spawn\")\nspawn = open_spawn_data(\n preprocessed=True,\n min_lat=min_lat,\n max_lat=max_lat,\n min_lon=min_lon,\n max_lon=max_lon,\n)\nprint(\"loading this study\")\nstudy = open_biomass_data(\n tiles=None,\n min_lat=min_lat,\n max_lat=max_lat,\n min_lon=min_lon,\n max_lon=max_lon,\n)\nprint(\"gridding study\")\ngridded_study_hansen = turn_point_cloud_to_grid(ds=study, format_grid=hansen)\ngridded_study_spawn = turn_point_cloud_to_grid(ds=study, format_grid=spawn)\n# print('loading margolis')\n# margolis = open_margolis_data(min_lat=min_lat, max_lat=max_lat, min_lon=min_lon, max_lon=max_lon)",
"_____no_output_____"
],
[
"study",
"_____no_output_____"
],
[
"study.nbytes / 1e9",
"_____no_output_____"
],
[
"gridded_study_hansen",
"_____no_output_____"
],
[
"gridded_study_spawn",
"_____no_output_____"
],
[
"study[\"realm\"] = allo.get_realm_from_ecoregion(study.ecoregion).load()",
"_____no_output_____"
],
[
"study",
"_____no_output_____"
]
],
[
[
"# Compare biomass on scatter plots + error metrics\n",
"_____no_output_____"
],
[
"- index gridded study to gridded hansen and spawn, check for bias, MAE, r2, visual inspection\n",
"_____no_output_____"
]
],
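[
[
"# Helper sketch (my addition) making the comparison metrics explicit before they are\n# computed inline below: bias = mean(pred - true), MAE = mean(|pred - true|), plus\n# sklearn's R2 score.\ndef comparison_metrics(ytrue, ypred):\n    bias = np.mean(ypred - ytrue)\n    mae = mean_absolute_error(ytrue, ypred)\n    r2 = r2_score(ytrue, ypred)\n    return r2, bias, mae",
"_____no_output_____"
]
],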
[
[
"def index_point_cloud_to_reference(ds, format_grid, name):\n gridded_df = format_grid.sel(lat=ds.lat, lon=ds.lon, method=\"nearest\")\n for v in ds:\n if v not in [\"lat\", \"lon\"]:\n gridded_df[v] = ds[v]\n grouped = gridded_df.to_dataframe().reset_index(drop=True).dropna().groupby([\"lat\", \"lon\"])\n\n mode = lambda x: x.value_counts().index[0]\n\n grouped = grouped.agg(\n biomass=pd.NamedAgg(column=\"biomass\", aggfunc=\"mean\"),\n n_shots_in_grid=pd.NamedAgg(column=\"biomass\", aggfunc=\"count\"),\n allometric_eq=pd.NamedAgg(column=\"allometric_eq\", aggfunc=mode),\n # ecoregion=pd.NamedAgg(column=\"ecoregion\", aggfunc=mode),\n # igbp=pd.NamedAgg(column=\"igbp\", aggfunc=mode),\n realm=pd.NamedAgg(column=\"realm\", aggfunc=mode),\n )\n\n gridded_df_xr = xr.Dataset.from_dataframe(grouped.reset_index())\n format_grid_records = trace_utils.find_matching_records(\n data=format_grid, lats=gridded_df_xr.lat, lons=gridded_df_xr.lon\n )\n grouped[f\"{name}_biomass\"] = format_grid_records.biomass.values\n\n return grouped",
"_____no_output_____"
],
[
"print(\"hansen\")\nfn = \"gridded_study_df_hansen_forests_only.csv\"\nif os.path.exists(fn):\n gridded_study_df_hansen = pd.read_csv(fn)\nelse:\n gridded_study_df_hansen = index_point_cloud_to_reference(study, hansen, \"hansen\")\n gridded_study_df_hansen.to_csv(fn)\n gridded_study_df_hansen = gridded_study_df_hansen.reset_index()\n\nprint(\"spawn\")\nfn = \"gridded_study_df_spawn_forests_only.csv\"\nif os.path.exists(fn):\n gridded_study_df_spawn = pd.read_csv(fn)\nelse:\n gridded_study_df_spawn = index_point_cloud_to_reference(study, spawn, \"spawn\")\n gridded_study_df_spawn.to_csv(fn)\n gridded_study_df_spawn = gridded_study_df_spawn.reset_index()",
"_____no_output_____"
]
],
[
[
"## R squared, bias, and MAE compared to Hansen and Spawn datasets\n\n- overall score\n- score by Realm\n",
"_____no_output_____"
]
],
[
[
"for name, df in zip([\"Hansen\", \"Spawn\"], [gridded_study_df_hansen, gridded_study_df_spawn]):\n for count_threshold in [0, 10, 20]:\n col_name = f\"{name.lower()}_biomass\"\n sub = df.loc[df.n_shots_in_grid >= count_threshold, [col_name, \"biomass\"]].dropna()\n ytrue = sub[col_name].values\n ypred = sub.biomass.values\n bias = np.mean(ypred - ytrue)\n r2 = r2_score(ytrue, ypred)\n mae = mean_absolute_error(ytrue, ypred)\n print(\n f\"Comparing to {name.ljust(6, ' ')} dataset with threshold {count_threshold}, number of records = {len(ytrue)}, R2 = {str(round(r2, 2)).ljust(5, ' ')}, bias = {round(bias, 1)} Mg/ha, MAE = {round(mae, 1)} Mg/ha\"\n )\n\n print()",
"_____no_output_____"
],
[
"# separate out the calculation by realm\ncount_threshold = 10\nfor name, df in zip([\"Hansen\", \"Spawn\"], [gridded_study_df_hansen, gridded_study_df_spawn]):\n col_name = f\"{name.lower()}_biomass\"\n for realm in df.realm.unique():\n sub = df.loc[\n (df.realm == realm) & (df.n_shots_in_grid >= count_threshold),\n [col_name, \"biomass\"],\n ].dropna()\n ytrue = sub[col_name].values\n ypred = sub.biomass.values\n bias = np.mean(ypred - ytrue)\n r2 = r2_score(ytrue, ypred)\n mae = mean_absolute_error(ytrue, ypred)\n print(\n f\"Comparing to {name.ljust(6, ' ')} dataset for realm {realm.ljust(13, ' ')} with threshold {count_threshold}, number of records = {str(len(ytrue)).ljust(6, ' ')}, R2 = {str(round(r2, 2)).ljust(5, ' ')}, bias = {round(bias, 1)} Mg/ha, MAE = {round(mae, 1)} Mg/ha\"\n )\n\n print()",
"_____no_output_____"
],
[
"# separate out the calculation by allometric eq\ncount_threshold = 10\nfor name, df in zip([\"Hansen\", \"Spawn\"], [gridded_study_df_hansen, gridded_study_df_spawn]):\n col_name = f\"{name.lower()}_biomass\"\n for realm in df.realm.unique():\n print(\n f\"Comparing to {name.ljust(6, ' ')} dataset for realm {realm.ljust(13, ' ')} with threshold {count_threshold}\"\n )\n sub = df.loc[(df.realm == realm) & (df.n_shots_in_grid >= count_threshold)]\n for eq in sub.allometric_eq.unique():\n subsub = sub.loc[sub.allometric_eq == eq]\n ytrue = subsub[col_name].values\n ypred = subsub[\"biomass\"].values\n bias = np.mean(ypred - ytrue)\n r2 = r2_score(ytrue, ypred)\n mae = mean_absolute_error(ytrue, ypred)\n print(\n f\" For {eq.ljust(30, ' ')} equation, number of records = {str(len(ytrue)).ljust(6, ' ')}, R2 = {str(round(r2, 2)).ljust(5, ' ')}, bias = {round(bias, 1)} Mg/ha, MAE = {round(mae, 1)} Mg/ha\"\n )\n\n print()",
"_____no_output_____"
]
],
[
[
"## Scatter plots compared to Hansen and Spawn datasets\n\n- overall\n- by Realm\n",
"_____no_output_____"
]
],
[
[
"def plot_scatter_comparison(\n ax,\n x_col,\n y_col,\n reference_name,\n comparison_name,\n plot_params,\n c=\"k\",\n s=0.01,\n alpha=0.1,\n):\n tot = np.hstack((x_col, y_col))\n xmax = np.percentile(tot, 99.5)\n xmin = plot_params[\"xmin\"]\n unit = plot_params[\"unit\"]\n\n ax.plot([xmin, xmax], [xmin, xmax], \"r\")\n r2 = r2_score(x_col, y_col)\n mae = mean_absolute_error(x_col, y_col)\n ax.scatter(x_col, y_col, c=c, s=s, alpha=alpha, marker=\"o\")\n ax.text(plot_params[\"text_x\"], xmax * 0.9, f\"R squared = {round(r2, 2)}\")\n ax.text(plot_params[\"text_x\"], xmax * 0.81, f\"MAE = {round(mae, 2)} {unit}\")\n if unit != \"\":\n unit_str = f\"({unit})\"\n else:\n unit_str = \"\"\n ax.set_xlabel(f\"Biomass from {reference_name} {unit_str}\")\n ax.set_ylabel(f\"Biomass from {comparison_name} {unit_str}\")\n ax.set_xlim(xmin, xmax)\n ax.set_ylim(xmin, xmax)\n ticks = np.arange(0, xmax, 100)\n ax.set_xticks(ticks)\n ax.set_yticks(ticks)",
"_____no_output_____"
],
[
"plot_params = {\n \"xmin\": -10,\n \"xmax\": 510,\n \"unit\": \"Mg/ha\",\n \"text_x\": 10,\n \"text_y1\": 450,\n \"text_y2\": 420,\n \"ticks\": np.arange(0, 510, 100),\n}",
"_____no_output_____"
],
[
"fig, axarr = plt.subplots(nrows=1, ncols=2, figsize=(13, 6))\nfor i, (name, df) in enumerate(\n zip([\"Hansen\", \"Spawn\"], [gridded_study_df_hansen, gridded_study_df_spawn])\n):\n col_name = f\"{name.lower()}_biomass\"\n sub = df[[col_name, \"biomass\"]].dropna()\n plot_scatter_comparison(\n ax=axarr[i],\n x_col=sub[col_name].values,\n y_col=sub.biomass.values,\n reference_name=name,\n comparison_name=\"this study\",\n plot_params=plot_params,\n )\n\nplt.show()\nplt.close()",
"_____no_output_____"
],
[
"for realm in gridded_study_df_hansen.realm.unique():\n fig, axarr = plt.subplots(nrows=1, ncols=2, figsize=(13, 6))\n for i, (name, df) in enumerate(\n zip(\n [\"Hansen\", \"Spawn\"],\n [gridded_study_df_hansen, gridded_study_df_spawn],\n )\n ):\n col_name = f\"{name.lower()}_biomass\"\n plot_scatter_comparison(\n ax=axarr[i],\n x_col=df.loc[(df.realm == realm), col_name].values,\n y_col=df.loc[(df.realm == realm), \"biomass\"].values,\n reference_name=name,\n comparison_name=\"this study\",\n plot_params=plot_params,\n s=0.05,\n alpha=0.2,\n )\n plt.suptitle(realm)\n plt.show()\n plt.close()",
"_____no_output_____"
]
],
[
[
"## index study to margolis, check for bias, MAE, r2\n",
"_____no_output_____"
]
],
[
[
"study_df = study[[\"lat\", \"lon\", \"biomass\"]].to_dataframe().reset_index()\nmargolis_df = (\n margolis[[\"lat\", \"lon\", \"biomass\"]]\n .to_dataframe()\n .reset_index()\n .rename(columns={\"biomass\": \"margolis_biomass\"})\n)",
"_____no_output_____"
],
[
"precision = 4\nstudy_df[\"lat_round\"] = study_df.lat.round(precision)\nstudy_df[\"lon_round\"] = study_df.lon.round(precision)\nmargolis_df[\"lat_round\"] = margolis_df.lat.round(precision)\nmargolis_df[\"lon_round\"] = margolis_df.lon.round(precision)\n\nmerged = pd.merge(\n left=study_df,\n right=margolis_df,\n on=[\"lat_round\", \"lon_round\"],\n suffixes=[\"_study\", \"_margolis\"],\n how=\"inner\",\n)",
"_____no_output_____"
],
[
"plt.figure(figsize=(6, 6))\nplot_scatter_comparison(\n ax=plt.gca(),\n x_col=merged.margolis_biomass.values,\n y_col=merged.biomass.values,\n reference_name=\"Margolis\",\n comparison_name=\"this study\",\n plot_params=plot_params,\n s=0.1,\n alpha=1,\n)\n\nplt.show()\nplt.close()",
"_____no_output_____"
]
],
[
[
"# Plot biomass data on maps\n",
"_____no_output_____"
]
],
[
[
"from cartopy.io import shapereader\nimport geopandas as gpd\n\n\ndef cartopy_proj_albers():\n return ccrs.AlbersEqualArea(\n central_longitude=-96,\n central_latitude=23,\n standard_parallels=(29.5, 45.5),\n )\n\n\ndef cartopy_borders(projection=utils.projections(\"albers\", \"conus\")):\n states_df = gpd.read_file(\n shapereader.natural_earth(\"50m\", \"cultural\", \"admin_1_states_provinces\")\n )\n states = states_df.loc[(states_df[\"iso_a2\"].isin([\"US\", \"CA\"]))]\n states = states.set_crs(epsg=4326).to_crs(projection)[\"geometry\"].values\n\n countries_df = gpd.read_file(shapereader.natural_earth(\"50m\", \"cultural\", \"admin_0_countries\"))\n countries = (\n countries_df[countries_df[\"ADMIN\"].isin([\"United States of America\", \"Canada\"])]\n .set_crs(epsg=4326)\n .to_crs(projection)[\"geometry\"]\n .values\n )\n\n return states, countries\n\n\ndef cartopy_proj_plate_carree():\n return ccrs.PlateCarree()\n\n\ndef cartopy_borders_global():\n states_df = gpd.read_file(\n shapereader.natural_earth(\"50m\", \"cultural\", \"admin_1_states_provinces\")\n )\n states = states_df.set_crs(epsg=4326).to_crs(epsg=32662)[\"geometry\"].values\n\n countries_df = gpd.read_file(shapereader.natural_earth(\"50m\", \"cultural\", \"admin_0_countries\"))\n countries = countries_df.set_crs(epsg=4326).to_crs(epsg=32662)[\"geometry\"].values\n\n return states, countries",
"_____no_output_____"
],
[
"import matplotlib as mpl\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n\ndef map_pretty(ax, title=\"\", min_lat=-90, max_lat=90, min_lon=-180, max_lon=180):\n state_borders, country_borders = cartopy_borders_global()\n\n ax.add_geometries(\n state_borders,\n facecolor=\"none\",\n edgecolor=\"k\",\n crs=cartopy_proj_plate_carree(),\n linewidth=0.1,\n zorder=0,\n )\n ax.add_geometries(\n country_borders,\n facecolor=\"none\",\n edgecolor=\"k\",\n crs=cartopy_proj_plate_carree(),\n linewidth=0.3,\n zorder=0,\n )\n ax.axis(\"off\")\n ax.set_extent([min_lon, max_lon, min_lat, max_lat])\n ax.text(0.35, 1.05, title, transform=ax.transAxes)\n\n\ndef add_colorbar(\n fig,\n to_plot=None,\n x_location=1.08,\n y_location=0.76,\n height=0.12,\n width=0.018,\n vmin=None,\n vmax=None,\n cbar_label=\"\",\n cmap=\"viridis\",\n):\n\n cax = fig.add_axes([x_location, y_location, width, height])\n cax.text(\n 0.5,\n -0.08,\n vmin,\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n verticalalignment=\"center\",\n )\n cax.text(\n 0.5,\n 1.08,\n vmax,\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n verticalalignment=\"center\",\n )\n cax.text(\n 1.8,\n 0.5,\n cbar_label,\n transform=cax.transAxes,\n verticalalignment=\"center\",\n multialignment=\"center\",\n rotation=-90,\n )\n if to_plot is not None:\n cbar = fig.colorbar(to_plot, cax=cax, orientation=\"vertical\")\n else:\n norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)\n cbar = fig.colorbar(\n mpl.cm.ScalarMappable(norm=norm, cmap=cmap),\n cax=cax,\n orientation=\"vertical\",\n )\n cbar.outline.set_visible(False)\n cbar.set_ticks([])\n return cbar",
"_____no_output_____"
],
[
"plot_params = {\n \"cmap\": cm.Greens,\n \"var_lims\": (0, 500),\n \"label\": \"Woody Biomass\\n(Mg/ha)\",\n}\n\nmin_lat = gridded_study_hansen.lat.min().values\nmax_lat = gridded_study_hansen.lat.max().values\nmin_lon = gridded_study_hansen.lon.min().values\nmax_lon = gridded_study_hansen.lon.max().values",
"_____no_output_____"
],
[
"if \"band\" in study:\n study = study.drop_vars(\"band\")",
"_____no_output_____"
],
[
"nrows = 2\nncols = 2\ndata_sets = [study, gridded_study_hansen, spawn, hansen]\ntitles = [\n \"This study (points)\",\n \"This study (gridded)\",\n \"Spawn 2020\",\n \"Hansen 2019\",\n]\n\nvmin, vmax = plot_params[\"var_lims\"][0], plot_params[\"var_lims\"][1]\n\nplt.figure(figsize=(20, 10))\nfor i, d in enumerate(data_sets):\n plt.subplot(nrows, ncols, i + 1, projection=cartopy_proj_plate_carree())\n ax = plt.gca()\n\n if \"lat\" in d.coords:\n map_plot = d.biomass.plot.imshow(\n ax=ax,\n cmap=plot_params[\"cmap\"],\n vmin=vmin,\n vmax=vmax,\n add_colorbar=False,\n transform=ccrs.PlateCarree(),\n )\n else:\n # plot point cloud in scatter plots\n map_plot = d.plot.scatter(\n x=\"lon\",\n y=\"lat\",\n hue=\"biomass\",\n hue_style=\"continuous\",\n s=0.00005,\n ax=ax,\n cmap=plot_params[\"cmap\"],\n vmin=vmin,\n vmax=vmax,\n add_guide=False,\n transform=ccrs.PlateCarree(),\n )\n\n map_pretty(\n ax,\n title=titles[i],\n min_lat=min_lat - 5,\n max_lat=max_lat + 5,\n min_lon=min_lon - 5,\n max_lon=max_lon + 5,\n )\n\nfig = plt.gcf()\ncax = fig.add_axes([1.05, 0.43, 0.03, 0.15])\ncbar = fig.colorbar(map_plot, cax=cax, orientation=\"vertical\")\ncax.text(\n 0.5,\n -0.12,\n plot_params[\"var_lims\"][0],\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n)\ncax.text(\n 0.5,\n 1.05,\n plot_params[\"var_lims\"][1],\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n)\ncax.text(\n 1.8,\n 0.5,\n plot_params[\"label\"],\n transform=cax.transAxes,\n verticalalignment=\"center\",\n multialignment=\"center\",\n rotation=-90,\n)\ncbar.outline.set_visible(False)\ncbar.set_ticks([])\nplt.tight_layout()\nplt.show()\nplt.close()",
"_____no_output_____"
],
[
"gridded_study_spawn[\"diff_spawn\"] = gridded_study_spawn[\"biomass\"] - spawn[\"biomass\"]\ngridded_study_hansen[\"diff_hansen\"] = gridded_study_hansen[\"biomass\"] - hansen[\"biomass\"]",
"_____no_output_____"
],
[
"gridded_study_spawn[\"diff_spawn\"].isnull().sum().values",
"_____no_output_____"
],
[
"gridded_study_spawn[\"biomass\"].isnull().sum().values",
"_____no_output_____"
],
[
"gridded_study_hansen[\"diff_hansen\"].isnull().sum().values",
"_____no_output_____"
],
[
"gridded_study_hansen[\"biomass\"].isnull().sum().values",
"_____no_output_____"
],
[
"if \"spatial_ref\" in gridded_study_hansen:\n gridded_study_hansen = gridded_study_hansen.drop_vars(\"spatial_ref\")\n gridded_study_spawn = gridded_study_spawn.drop_vars(\"spatial_ref\")",
"_____no_output_____"
],
[
"plot_params = {\n \"cmap\": cm.RdBu,\n \"var_lims\": (-250, 250),\n \"label\": \"Diff in Biomass\\n(Mg/ha)\",\n}",
"_____no_output_____"
],
[
"min_lat = gridded_study_hansen.lat.min()\nmax_lat = gridded_study_hansen.lat.max()\nmin_lon = gridded_study_hansen.lon.min()\nmax_lon = gridded_study_hansen.lon.max()\n\nnrows = 2\nncols = 1\ncols = [\"diff_spawn\", \"diff_hansen\"]\ndatasets = [gridded_study_spawn, gridded_study_hansen]\ntitles = [\"Diff to Spawn\", \"Diff to Hansen\"]\n\nvmin, vmax = plot_params[\"var_lims\"][0], plot_params[\"var_lims\"][1]\n\nplt.figure(figsize=(15, 14))\nfor i, (c, d) in enumerate(zip(cols, datasets)):\n plt.subplot(nrows, ncols, i + 1, projection=cartopy_proj_plate_carree())\n ax = plt.gca()\n\n map_plot = d[c].plot.imshow(\n ax=ax,\n cmap=plot_params[\"cmap\"],\n vmin=vmin,\n vmax=vmax,\n add_colorbar=False,\n transform=ccrs.PlateCarree(),\n )\n\n map_pretty(\n ax,\n title=titles[i],\n min_lat=min_lat - 5,\n max_lat=max_lat + 5,\n min_lon=min_lon - 5,\n max_lon=max_lon + 5,\n )\n\nfig = plt.gcf()\ncax = fig.add_axes([1.05, 0.43, 0.03, 0.15])\ncbar = fig.colorbar(map_plot, cax=cax, orientation=\"vertical\")\ncax.text(\n 0.5,\n -0.12,\n plot_params[\"var_lims\"][0],\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n)\ncax.text(\n 0.5,\n 1.05,\n plot_params[\"var_lims\"][1],\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n)\ncax.text(\n 1.8,\n 0.5,\n plot_params[\"label\"],\n transform=cax.transAxes,\n verticalalignment=\"center\",\n multialignment=\"center\",\n rotation=-90,\n)\ncbar.outline.set_visible(False)\ncbar.set_ticks([])\nplt.tight_layout()\nplt.show()\nplt.close()",
"_____no_output_____"
],
[
"gridded_study_spawn[\"mpe_spawn\"] = 100.0 * gridded_study_spawn[\"diff_spawn\"] / spawn[\"biomass\"]\ngridded_study_hansen[\"mpe_hansen\"] = 100.0 * gridded_study_hansen[\"diff_hansen\"] / hansen[\"biomass\"]\n\n# zarin = version 1 (baccini et al equation for tropics but using landsat)\n# updated everything when producing global map (Harris et al) mostly using Baccini et al equation",
"_____no_output_____"
],
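[
"# Hedged summary (not in the original): headline mean/median percentage error against\n# each reference map, ignoring NaNs (xarray skips them by default).\nfor name, da in [(\"Spawn\", gridded_study_spawn[\"mpe_spawn\"]), (\"Hansen\", gridded_study_hansen[\"mpe_hansen\"])]:\n    print(name, float(da.mean().values), float(da.median().values))",
"_____no_output_____"
],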
[
"plot_params = {\n \"cmap\": cm.RdBu,\n \"var_lims\": (-250, 250),\n \"label\": \"Diff in Biomass\\n(Mg/ha)\",\n}\n\nnrows = 2\nncols = 1\ncols = [\"mpe_spawn\", \"mpe_hansen\"]\ndatasets = [gridded_study_spawn, gridded_study_hansen]\ntitles = [\"Percentage diff to Spawn\", \"Percentage to Hansen\"]\n\nvmin, vmax = plot_params[\"var_lims\"][0], plot_params[\"var_lims\"][1]\n\nplt.figure(figsize=(15, 14))\nfor i, (c, d) in enumerate(zip(cols, datasets)):\n plt.subplot(nrows, ncols, i + 1, projection=cartopy_proj_plate_carree())\n ax = plt.gca()\n\n map_plot = d[c].plot.imshow(\n ax=ax,\n cmap=plot_params[\"cmap\"],\n vmin=vmin,\n vmax=vmax,\n add_colorbar=False,\n transform=ccrs.PlateCarree(),\n )\n\n map_pretty(\n ax,\n title=titles[i],\n min_lat=min_lat - 5,\n max_lat=max_lat + 5,\n min_lon=min_lon - 5,\n max_lon=max_lon + 5,\n )\n\nfig = plt.gcf()\ncax = fig.add_axes([1.05, 0.43, 0.03, 0.15])\ncbar = fig.colorbar(map_plot, cax=cax, orientation=\"vertical\")\ncax.text(\n 0.5,\n -0.12,\n plot_params[\"var_lims\"][0],\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n)\ncax.text(\n 0.5,\n 1.05,\n plot_params[\"var_lims\"][1],\n transform=cax.transAxes,\n horizontalalignment=\"center\",\n)\ncax.text(\n 1.8,\n 0.5,\n plot_params[\"label\"],\n transform=cax.transAxes,\n verticalalignment=\"center\",\n multialignment=\"center\",\n rotation=-90,\n)\ncbar.outline.set_visible(False)\ncbar.set_ticks([])\nplt.tight_layout()\nplt.show()\nplt.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e728f22267f6dd5d871846687a238af2af091f36 | 8,823 | ipynb | Jupyter Notebook | notebook/Unit3-3-UML.ipynb | alex-treebeard/home | bec975dcfb7a473da956215be4f5b4fd3d10a686 | [
"MIT"
] | null | null | null | notebook/Unit3-3-UML.ipynb | alex-treebeard/home | bec975dcfb7a473da956215be4f5b4fd3d10a686 | [
"MIT"
] | null | null | null | notebook/Unit3-3-UML.ipynb | alex-treebeard/home | bec975dcfb7a473da956215be4f5b4fd3d10a686 | [
"MIT"
] | null | null | null | 32.557196 | 276 | 0.600249 | [
[
[
"# The UML Class Diagrams\n\n## 1 The Project Of Student and Grade \n\n**If the project have `many different type modules`,you may feel `confused`.** \n\nThe fine-organized module structures will help you understand software\n\nWe arrange modules to the suitable folders.\n\n* The `edu` package: `grade.py student.py`\n\n```python\n\\mit\n \\edu\n __init__.py\n grade.py\n student.py\nmain_app.py```\n```",
"_____no_output_____"
]
],
[
[
"# %load ./mit/distinmulti/main_app.py\nfrom edu.grade import *\n\nug1 = UG('Jane Doe', 2014)\n# 1 creat the course named sixHundred\nsixHundred = Grades()\n\n# 2 some students taking a course named sixHundred\nsixHundred.addStudent(ug1)\n\n# 3 add Grades of students\nsixHundred.addGrade(ug1, 85)\nsixHundred.addGrade(ug1, 90)\nprint('The student grades:', sixHundred.grades)\n# 4 produce a grade report\nprint(gradeReport(sixHundred))",
"_____no_output_____"
],
[
"!python ./mit/distinmulti/main_app.py",
"_____no_output_____"
]
],
[
[
"## 2 UML\n\nUnified Modeling Language https://en.wikipedia.org/wiki/Unified_Modeling_Language\n\nThe Unified Modeling Language (UML)(统一建模语言) is a general-purpose, developmental, modeling language in the field of software engineering that is intended to provide a standard way to **visualize** the design of a system\n\nThe creation of UML was originally motivated by the desire to standardize the disparate `notational` systems and approaches to `software design`\n\n",
"_____no_output_____"
],
[
"### 2.1 The Class diagram\n\nThe **class diagram(https://en.wikipedia.org/wiki/Class_diagram)** in UML is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among objects. \n\nA class is depicted on the class diagram as a rectangle with **three** horizontal sections:\n\n* The `upper` section shows the class's `name`; \n\n* The`middle` section contains the class's `attributes`; \n\n* The `lower` section contains the class's `operations (or \"methods\")`. \n\n\n\n### 2.2 Relationship \n\nUML relations notation the relationship is a general term covering the specific types of logical **connections** found on `class and object` diagrams. \n\nUML defines the following relationships: \n\n\n\n\nHere,we use the following relationships:\n\n* Association(关联)\n\n* Inheritance(继承)\n\n\n#### 2.2.1 Inheritance\n\nInheritance:继承-`Class-level` relationships : \n\nTo model inheritance on a class diagram, **a solid line** is drawn from the `child class` (the class inheriting the behavior) with **a closed, unfilled arrowhead (or triangle)** pointing to the super class\n\n\n\n#### 2.2.2 Association \n\nAssociation:关联-`Instance-level` relationships \n\nAssociation represents the static relationship shared among the objects of `two classes`. \n\nThe directional relationship represented by **a line with an arrowhead**. The arrowhead depicts a container-contained directional flow\n\n\n",
"_____no_output_____"
],
[
"### 2.3 Reverse Code to UML\n\nStarting with Visual Studio 2017, the **UML Designers** have been **removed** from Visual Studio.\n\nhttps://devblogs.microsoft.com/devops/uml-designers-have-been-removed-layer-designer-now-supports-live-architectural-analysis/\n\nWe are removing the UML designers from Visual Studio “15” Enterprise. Removing a feature is always a hard decision, but we want to ensure that our resources are invested in features that deliver the most customer value. \n\nOur reasons are twofold:\n\n* On examining telemetry data, we found that the **designers** were being used by **very few customers**, and this was confirmed when we consulted with our sales and technical support teams.\n\n* We were also faced with investing significant engineering resource to react to changes happening in the Visual Studio core for this release.\n\n**Reverse Source Codes to UML**\n\nThe Reverse is a process to produce UML class model from a given input of source code.\n\nBy bringing code content into visual UML model, this <b style=\"color:blue\">helps programmers or software engineers to review an implementation, identify potential bugs or deficiency and look for possible improvements</b>. \n\nApart from this, developers may reverse a code library as UML classes and construct model with them, like to reverse a generic collection framework and develop your own framework by extending the generic one\n\n\n**[Creating UML diagrams for Python code](https://github.com/PySEE/home/blob/S2019/guide/UMLPython.md)**\n\n",
"_____no_output_____"
],
[
"\n### 2.4 The UML of Student and Grade\n\n\n#### 2.4.1 The Class diagram and Inheritance\n\n\n\n\n",
"_____no_output_____"
],
[
"\n#### 2.4.2 The Class Diagram:Association \n\ngrade.py\n```python\nfrom .student import *\n\nclass Grades(object):\n \"\"\"A mapping from students to a list of grades\"\"\"\n```\n\n\n",
"_____no_output_____"
],
[
"## Reference\n\n### UML \n\nUnified Modeling Language https://en.wikipedia.org/wiki/Unified_Modeling_Language\n\nUML http://www.uml.org/\n\nDonald Bell. [UML basics:An introduction to the Unified Modeling Language](https://www.ibm.com/developerworks/rational/library/769.html)\n\n * [UML basics: The class diagram](https://www.ibm.com/developerworks/rational/library/content/RationalEdge/sep04/bell/index.html?ca=drs-)\n \n### Python & UML \n\n[Pyreverse: UML Diagrams for Python](https://www.logilab.org/blogentry/6883)\n\n* [Pylint](https://www.pylint.org/) is shipped with `Pyreverse` which creates UML diagrams for python code\n\n[Graphviz - Graph Visualization Software](http://www.graphviz.org/)\n\n[Creating UML diagrams for Python code](https://github.com/PySEE/home/blob/S2019/guide/UMLPython.md)\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e728f73204ff0f1d0691aa257e545a52b882d02d | 3,817 | ipynb | Jupyter Notebook | [ideas]/utf8.ipynb | batthias/mammal-brainstorming | 55226110d34083303200179ab5426c0bd5d17e78 | [
"MIT"
] | null | null | null | [ideas]/utf8.ipynb | batthias/mammal-brainstorming | 55226110d34083303200179ab5426c0bd5d17e78 | [
"MIT"
] | null | null | null | [ideas]/utf8.ipynb | batthias/mammal-brainstorming | 55226110d34083303200179ab5426c0bd5d17e78 | [
"MIT"
] | null | null | null | 19.375635 | 46 | 0.352633 | [
[
[
"# Let us create some characters",
"_____no_output_____"
],
[
"## Latin lowercase",
"_____no_output_____"
]
],
[
[
"for c in range(ord('a'),ord('z')+1):\n print('c.add(\"' + chr(c) + '\")')\n",
"c.add(\"a\")\nc.add(\"b\")\nc.add(\"c\")\nc.add(\"d\")\nc.add(\"e\")\nc.add(\"f\")\nc.add(\"g\")\nc.add(\"h\")\nc.add(\"i\")\nc.add(\"j\")\nc.add(\"k\")\nc.add(\"l\")\nc.add(\"m\")\nc.add(\"n\")\nc.add(\"o\")\nc.add(\"p\")\nc.add(\"q\")\nc.add(\"r\")\nc.add(\"s\")\nc.add(\"t\")\nc.add(\"u\")\nc.add(\"v\")\nc.add(\"w\")\nc.add(\"x\")\nc.add(\"y\")\nc.add(\"z\")\n"
]
],
[
[
"## Latin Uppercase",
"_____no_output_____"
]
],
[
[
"for c in range(ord('A'),ord('Z')+1):\n print('c.add(\"' + chr(c) + '\")')",
"c.add(\"A\")\nc.add(\"B\")\nc.add(\"C\")\nc.add(\"D\")\nc.add(\"E\")\nc.add(\"F\")\nc.add(\"G\")\nc.add(\"H\")\nc.add(\"I\")\nc.add(\"J\")\nc.add(\"K\")\nc.add(\"L\")\nc.add(\"M\")\nc.add(\"N\")\nc.add(\"O\")\nc.add(\"P\")\nc.add(\"Q\")\nc.add(\"R\")\nc.add(\"S\")\nc.add(\"T\")\nc.add(\"U\")\nc.add(\"V\")\nc.add(\"W\")\nc.add(\"X\")\nc.add(\"Y\")\nc.add(\"Z\")\n"
]
],
[
[
"## Greek lowercase",
"_____no_output_____"
]
],
[
[
"for c in range(ord('α'),ord('ω')+1):\n print('c.add(\"' + chr(c) + '\")')",
"c.add(\"α\")\nc.add(\"β\")\nc.add(\"γ\")\nc.add(\"δ\")\nc.add(\"ε\")\nc.add(\"ζ\")\nc.add(\"η\")\nc.add(\"θ\")\nc.add(\"ι\")\nc.add(\"κ\")\nc.add(\"λ\")\nc.add(\"μ\")\nc.add(\"ν\")\nc.add(\"ξ\")\nc.add(\"ο\")\nc.add(\"π\")\nc.add(\"ρ\")\nc.add(\"ς\")\nc.add(\"σ\")\nc.add(\"τ\")\nc.add(\"υ\")\nc.add(\"φ\")\nc.add(\"χ\")\nc.add(\"ψ\")\nc.add(\"ω\")\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e728f83d399f75649fa6f7890e4343e0fb2aa882 | 215,912 | ipynb | Jupyter Notebook | SII/ML/5-Anomaly_Detection.ipynb | vafaei-ar/alzahra-workshop-2019 | ebcc293e53de5a81379c97fff95fe24f7c67ba39 | [
"MIT"
] | 3 | 2019-11-24T13:55:16.000Z | 2021-01-08T20:09:55.000Z | SII/ML/5-Anomaly_Detection.ipynb | vafaei-ar/alzahra-workshop-2019 | ebcc293e53de5a81379c97fff95fe24f7c67ba39 | [
"MIT"
] | null | null | null | SII/ML/5-Anomaly_Detection.ipynb | vafaei-ar/alzahra-workshop-2019 | ebcc293e53de5a81379c97fff95fe24f7c67ba39 | [
"MIT"
] | null | null | null | 241.782755 | 48,240 | 0.925141 | [
[
[
"### Anomaly Detection\n* What are Outliers ?\n* Statistical Methods for Univariate Data\n* Using Gaussian Mixture Models\n* Isolation Forest\n* Local Outlier Factor",
"_____no_output_____"
],
[
"### Outliers\n* New data which doesn't belong to general trend (or distribution) of entire data are known as outliers.\n* Data belonging to general trend are known as inliners.\n* Learning models are impacted by presence of outliers.\n* Anomaly detection is another use of outlier detection in which we find out unusual behaviour.\n* Data which were detected outliers can be deleted from complete dataset.\n* Outliers can also be marked before using them in learning methods",
"_____no_output_____"
],
[
"### Statistical Methods for Univariate Data\n* Using Standard Deviation Method - zscore\n* Using Interquartile Range Method - IRQ",
"_____no_output_____"
],
[
"##### Using Standard Deviation Method\n* If univariate data follows Gaussian Distribution, we can use standard deviation to figure out where our data lies",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"data = np.random.normal(size=1000)",
"_____no_output_____"
]
],
[
[
"* Adding More Outliers",
"_____no_output_____"
]
],
[
[
"data[-5:] = [3.5,3.6,4,3.56,4.2]",
"_____no_output_____"
],
[
"from scipy.stats import zscore",
"_____no_output_____"
]
],
[
[
"* Detecting Outliers",
"_____no_output_____"
]
],
[
[
"data[np.abs(zscore(data)) > 3]",
"_____no_output_____"
]
],
[
[
"##### Using Interquartile Range\n* For univariate data not following Gaussian Distribution IQR is a way to detect outliers",
"_____no_output_____"
]
],
[
[
"from scipy.stats import iqr",
"_____no_output_____"
],
[
"data = np.random.normal(size=1000)",
"_____no_output_____"
],
[
"data[-5:]=[-2,9,11,-3,-21]",
"_____no_output_____"
],
[
"iqr_value = iqr(data)",
"_____no_output_____"
],
[
"lower_threshold = np.percentile(data,25) - iqr_value*1.5",
"_____no_output_____"
],
[
"upper_threshold = np.percentile(data,75) + iqr_value*1.5",
"_____no_output_____"
],
[
"upper_threshold",
"_____no_output_____"
],
[
"lower_threshold",
"_____no_output_____"
],
[
"data[np.where(data < lower_threshold)]",
"_____no_output_____"
],
[
"data[np.where(data > upper_threshold)]",
"_____no_output_____"
]
],
[
[
"### Using Gaussian Mixture Models",
"_____no_output_____"
]
],
[
[
"# Number of samples per component\nn_samples = 500\n\n# Generate random sample, two components\nnp.random.seed(0)\nC = np.array([[0., -0.1], [1.7, .4]])\nC2 = np.array([[1., -0.1], [2.7, .2]])\n#X = np.r_[np.dot(np.random.randn(n_samples, 2), C)]\n #.7 * np.random.randn(n_samples, 2) + np.array([-6, 3])]\nX = np.r_[np.dot(np.random.randn(n_samples, 2), C),np.dot(np.random.randn(n_samples, 2), C2)]",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"X[-5:] = [[4,-1],[4.1,-1.1],[3.9,-1],[4.0,-1.2],[4.0,-1.3]]",
"_____no_output_____"
],
[
"plt.scatter(X[:,0], X[:,1],s=5)",
"_____no_output_____"
],
[
"from sklearn.mixture import GaussianMixture",
"_____no_output_____"
],
[
"gmm = GaussianMixture(n_components=3)",
"_____no_output_____"
],
[
"gmm.fit(X)",
"_____no_output_____"
],
[
"pred = gmm.predict(X)",
"_____no_output_____"
],
[
"pred[:50]",
"_____no_output_____"
],
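[
"# Hedged sketch (not in the original): use the fitted mixture's log-likelihood as an\n# anomaly score -- the lowest-likelihood points are the outlier candidates. The 1%\n# threshold is an arbitrary illustrative choice.\nscores = gmm.score_samples(X)\nthreshold = np.percentile(scores, 1)\nX[scores < threshold]",
"_____no_output_____"
],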
[
"plt.scatter(X[:,0], X[:,1],s=10,c=pred)",
"_____no_output_____"
]
],
[
[
"### Fitting Elliptical Envelope\n* The assumption here is, regular data comes from known distribution ( Gaussion distribution )\n* Inliner location & variance will be calculated using `Mahalanobis distances` which is less impacted by outliers.\n* Calculate robust covariance fit of the data.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import make_blobs\nX,_ = make_blobs(n_features=2, centers=2, cluster_std=2.5, n_samples=1000)",
"_____no_output_____"
],
[
"plt.scatter(X[:,0], X[:,1],s=10)",
"_____no_output_____"
],
[
"from sklearn.covariance import EllipticEnvelope",
"_____no_output_____"
],
[
"ev = EllipticEnvelope(contamination=.1)",
"_____no_output_____"
],
[
"ev.fit(X)",
"_____no_output_____"
],
[
"cluster = ev.predict(X)",
"_____no_output_____"
],
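[
"# Hedged follow-up (not in the original): the fitted envelope exposes the robust\n# Mahalanobis distances the markdown above refers to; larger distances mean more\n# outlying points.\nd = ev.mahalanobis(X)\nd[:5]",
"_____no_output_____"
],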
[
"plt.scatter(X[:,0], X[:,1],s=10,c=cluster)",
"_____no_output_____"
]
],
[
[
"### Isolation Forest\n* Based on RandomForest\n* Useful in detecting outliers in high dimension datasets.\n* This algorithm randomly selects a feature & splits further.\n* Random partitioning produces shorter part for anomolies.\n* When a forest of random trees collectively produce shorter path lengths for particular samples, they are highly likely to be anomalies.",
"_____no_output_____"
]
],
[
[
"rng = np.random.RandomState(42)\n\n# Generate train data\nX = 0.3 * rng.randn(100, 2)\nX_train = np.r_[X + 2, X - 2]\n# Generate some regular novel observations\nX = 0.3 * rng.randn(20, 2)\nX_test = np.r_[X + 2, X - 2]\n# Generate some abnormal novel observations\nX_outliers = rng.uniform(low=-4, high=4, size=(20, 2))",
"_____no_output_____"
],
[
"from sklearn.ensemble import IsolationForest",
"_____no_output_____"
],
[
"data = np.r_[X_train,X_test,X_outliers]",
"_____no_output_____"
],
[
"iso = IsolationForest(behaviour='new', contamination='auto')",
"_____no_output_____"
],
[
"iso.fit(data)",
"_____no_output_____"
],
[
"pred = iso.predict(data)",
"_____no_output_____"
],
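[
"# Hedged addition (not in the original): decision_function gives a continuous anomaly\n# score; negative values correspond to the points predicted as outliers (pred == -1).\nscores = iso.decision_function(data)\nscores.min(), scores.max()",
"_____no_output_____"
],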
[
"plt.scatter(data[:,0], data[:,1],s=10,c=pred)",
"_____no_output_____"
]
],
[
[
"### Local Outlier Factor\n* Based on nearest neighbours\n* Suited for moderately high dimension datasets\n* LOF computes a score reflecting degree of abnormility of a data.\n* LOF Calculation\n - Local density is calculated from k-nearest neighbors.\n - LOF of each data is equal to the ratio of the average local density of his k-nearest neighbors, and its own local density.\n - An abnormal data is expected to have smaller local density.\n* LOF tells you not only how outlier the data is but how outlier is it with respect to all data",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import LocalOutlierFactor",
"_____no_output_____"
],
[
"lof = LocalOutlierFactor(n_neighbors=25,contamination=.1)",
"_____no_output_____"
],
[
"pred = lof.fit_predict(data)",
"_____no_output_____"
],
[
"s = np.abs(lof.negative_outlier_factor_)",
"_____no_output_____"
],
[
"plt.scatter(data[:,0], data[:,1],s=s*10,c=pred)",
"_____no_output_____"
]
],
[
[
"### Outlier Detection using DBSCAN\n* DBSCAN is a clustering method based on density\n* Groups data which are closer to each other.\n* Doesn't use distance vector calculation method\n* Data not close enough to any cluster is not assigned any cluster & these can be anomalies",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import DBSCAN",
"_____no_output_____"
],
[
"dbscan = DBSCAN(eps=.3)",
"_____no_output_____"
],
[
"dbscan.fit(data)",
"_____no_output_____"
],
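[
"# Hedged note (not in the original): DBSCAN labels points it cannot assign to any\n# cluster with -1 -- these are the anomaly candidates.\nint(np.sum(dbscan.labels_ == -1))",
"_____no_output_____"
],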
[
"plt.scatter(data[:,0], data[:,1],s=s*10,c=dbscan.labels_)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e728f8d72303fc343b42c7443cc897dbc7794264 | 6,150 | ipynb | Jupyter Notebook | notebooks/pre-analysis-plan.ipynb | ls88-openscienceconnector/final-project-team-3 | e66895f20aefbdf30c33844bd74e6b7ec2ca3aee | [
"CC0-1.0"
] | 1 | 2019-04-19T16:38:37.000Z | 2019-04-19T16:38:37.000Z | notebooks/pre-analysis-plan.ipynb | ls88-openscienceconnector/final-project-team-3 | e66895f20aefbdf30c33844bd74e6b7ec2ca3aee | [
"CC0-1.0"
] | null | null | null | notebooks/pre-analysis-plan.ipynb | ls88-openscienceconnector/final-project-team-3 | e66895f20aefbdf30c33844bd74e6b7ec2ca3aee | [
"CC0-1.0"
] | 3 | 2019-04-20T01:46:57.000Z | 2019-04-28T22:53:14.000Z | 64.736842 | 1,083 | 0.650407 | [
[
[
"# L&S 88 - Final Project Team 3 - Most Common Name in the U.S\n## Pre-Analysis Plan\n_Jamie Xie, Kai Chen, Jae Eu, 19 April 2019 7:00 PM_\n\n**Table of Contents**\n\n1. [Abstract](#Abstract)\n2. [Data](#Data)\n3. [Strategy](#Strategy)\n4. [Analysis](#Analysis)\n5. [Deliverables](#Deliverables)\n6. [Sources](#Sources)\n\n### Abstract\n\nThis project will attempt to reproduce analysis by Five-Thirty-Eight to find the most common first and last name combination in the U.S in 2013. The original analysis gathered data of the most common baby first names in the U.S by decade from 1910 to 2010 from the Social Security Administration. Using this data, they factored in life-expectancy to figure out which people are still alive from each decade. This analysis also allowed them to adjust for names that may be more popular now than than they were before. Next, because there are people in the population not included in the SSA data, they corrected the popularity of the first names based on the percentage of the Hispanic population in the U.S. Lastly, they gathered last name data from the Census Bureau and combined their first and last name data using an adjustment matrix from previous research that showed which combinations of first and last names are more common than others. We will attempt to replicate these three adjustments to see if we also find that the most common name in the U.S is James Smith.\n\n### Data\n\nThe data is from the [Github repository](https://github.com/fivethirtyeight/data.git) containing the original data on which the analysis was performed. This data includes a table of the total population and Hispanic population by state in 2013, data on last names which includes a breakdown of names by race and ethnicity, an aging curve table which predicts the chances that somone born from a certain decade is still alive in 2013, and an adjustments table which describes the popularity of specific first and last name combinations. We will also gather the first name data they used from the [Social Security Administration](https://www.ssa.gov/oact/babynames/limits.html), which lists the most popular 1000 baby names by each year. \n\n### Strategy\n\nOur coding analysis will attempt to reproduce the three adjustments that the original analysis performed to see if we can come up with the same most common name in the U.S. using the original study's guidelines. We will first load the original tables in a Jupyter notebook, along with the SSA baby name data. Using the aging curve and SSA data, we will predict the most common first names in the U.S. as of 2013. We will then use the data about the Hispanic population in each state to try to correct the popularity of the most common first names. Next, we will combine the surname data from the U.S Census (already ordered by popularity) with the adjust first name data to create a matrix. Lastly, we will apply the adjustments matrix to see which first and last name combinations are more common than others.\n\n### Analysis\n\nOur statistical analyses will be completely based upon the statistical analyses and assumptions performed in the original study, as we do not have the ability to reproduce them. 
These will include the age-adjustment using actuarial data to figure out which people with common names are still around, as well as the adjustment of first and last name combinations based on their actual popularity.\n\n### Deliverables\n\n```\n| final-project-team-3\n | data\n | - adjustments.csv # original data\n | - aging-curve.csv # original data\n | - data1910-2020.csv # first name data from SSA\n | - state-pop.csv # original data\n | - surnames.csv # original data\n | - adjusted-name-combinations-list.csv # output of original data\n | - adjusted-name-combinations-matrix.csv # output of original data\n | - new-top-firstNames.csv # output of original data\n | - new-top-surnames.csv # output of original data\n | - independent-name-combinations-by-pop.csv # output of original data\n | notebooks\n | - Final Project Team 3.ipynb # our reproduction analysis\n | output\n | - README.md # README file containing link to Google Slides for our presentation\n \n```\n \nAt the end of this project, we will present our findings in a presentation using Google Slides. All other files in this repository are supporting files that will not be covered in the presentation.\n\n### Sources\n\nChalabi, Mona and Flowers, Andrew. Dear Mona, What’s The Most Common Name In America? FiveThirtyEight. 2013.\nhttps://fivethirtyeight.com/features/whats-the-most-common-name-in-america/\n\nFiveThirtyEight. Most Common Name.\nhttps://github.com/fivethirtyeight/data.git\n\nPopular Baby Names. Social Security Administration.\nhttps://www.ssa.gov/oact/babynames/limits.html",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
e729133e03c6d4d43573485c202d07bb9c530b2e | 59,970 | ipynb | Jupyter Notebook | dev/02_data_transforms.ipynb | davidpfahler/fastai_dev | a86b15f86138a9902e8649e3f745e76a19139ab3 | [
"Apache-2.0"
] | null | null | null | dev/02_data_transforms.ipynb | davidpfahler/fastai_dev | a86b15f86138a9902e8649e3f745e76a19139ab3 | [
"Apache-2.0"
] | null | null | null | dev/02_data_transforms.ipynb | davidpfahler/fastai_dev | a86b15f86138a9902e8649e3f745e76a19139ab3 | [
"Apache-2.0"
] | null | null | null | 56.362782 | 31,108 | 0.754677 | [
[
[
"#default_exp data.transform",
"_____no_output_____"
],
[
"#export\nfrom local.torch_basics import *\nfrom local.test import *\nfrom local.notebook.showdoc import show_doc",
"_____no_output_____"
],
[
"from PIL import Image",
"_____no_output_____"
]
],
[
[
"# Transforms",
"_____no_output_____"
],
[
"## Helpers",
"_____no_output_____"
]
],
[
[
"#exports\ndef type_hints(f):\n \"Same as `typing.get_type_hints` but returns `{}` if not allowed type\"\n return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}",
"_____no_output_____"
],
[
"#export\ndef anno_ret(func):\n \"Get the return annotation of `func`\"\n if not func: return None\n ann = type_hints(func)\n if not ann: return None\n return ann.get('return')",
"_____no_output_____"
],
[
"#hide\ndef f(x) -> float: return x\ntest_eq(anno_ret(f), float)\ndef f(x) -> Tuple[float,float]: return x\ntest_eq(anno_ret(f), Tuple[float,float])\ndef f(x) -> None: return x\ntest_eq(anno_ret(f), NoneType)\ndef f(x): return x\ntest_eq(anno_ret(f), None)\ntest_eq(anno_ret(None), None)",
"_____no_output_____"
],
[
"#export\ncmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)",
"_____no_output_____"
],
[
"td = {int:1, numbers.Number:2, numbers.Integral:3}\ntest_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])",
"_____no_output_____"
],
[
"#export\ndef _p1_anno(f):\n \"Get the annotation of first param of `f`\"\n hints = type_hints(f)\n ann = [o for n,o in hints.items() if n!='return']\n return ann[0] if ann else object",
"_____no_output_____"
],
[
"def _f(a, b): pass\ntest_eq(_p1_anno(_f), object)\ndef _f(a, b)->str: pass\ntest_eq(_p1_anno(_f), object)\ndef _f(a, b:str)->float: pass\ntest_eq(_p1_anno(_f), str)\ndef _f(a:int, b:int)->float: pass\ntest_eq(_p1_anno(_f), int)\ntest_eq(_p1_anno(attrgetter('foo')), object)",
"_____no_output_____"
]
],
[
[
"## Types",
"_____no_output_____"
]
],
[
[
"#export\n@delegates(plt.subplots, keep=True)\ndef subplots(nrows=1, ncols=1, **kwargs):\n fig,ax = plt.subplots(nrows,ncols,**kwargs)\n if nrows*ncols==1: ax = array([ax])\n return fig,ax",
"_____no_output_____"
],
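[
"# Hedged check (not in the original notebook): even a 1x1 grid comes back as an array\n# of axes, which is what `get_ctxs` below relies on.\nfig,ax = subplots()\ntest_eq(ax.shape, (1,))\nplt.close(fig)",
"_____no_output_____"
],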
[
"#export\nclass TensorImageBase(TensorBase):\n _show_args = {'cmap':'viridis'}\n def show(self, ctx=None, **kwargs):\n return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})\n\n def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):\n n_samples = min(self.shape[0], max_n)\n rows = rows or int(np.ceil(math.sqrt(n_samples)))\n cols = cols or int(np.ceil(math.sqrt(n_samples)))\n figsize = (cols*3, rows*3) if figsize is None else figsize\n _,axs = subplots(rows, cols, figsize=figsize)\n return axs.flatten()",
"_____no_output_____"
],
[
"#export\nclass TensorImage(TensorImageBase): pass",
"_____no_output_____"
],
[
"#export\nclass TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}",
"_____no_output_____"
],
[
"#export\nclass TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}",
"_____no_output_____"
],
[
"im = Image.open(TEST_IMAGE)",
"_____no_output_____"
],
[
"im_t = TensorImage(array(im))\ntest_eq(type(im_t), TensorImage)",
"_____no_output_____"
],
[
"im_t2 = TensorMask(tensor(1))\ntest_eq(type(im_t2), TensorMask)\ntest_eq(im_t2, tensor(1))",
"_____no_output_____"
],
[
"ax = im_t.show(figsize=(2,2))",
"_____no_output_____"
],
[
"test_fig_exists(ax)",
"_____no_output_____"
],
[
"#hide\naxes = im_t.get_ctxs(1)\ntest_eq(axes.shape,[1])\nplt.close()\naxes = im_t.get_ctxs(4)\ntest_eq(axes.shape,[4])\nplt.close()",
"_____no_output_____"
]
],
[
[
"## TypeDispatch -",
"_____no_output_____"
]
],
[
[
"#export\nclass TypeDispatch:\n \"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`\"\n def __init__(self, *funcs):\n self.funcs,self.cache = {},{}\n for f in funcs: self.add(f)\n self.inst = None\n\n def _reset(self):\n self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}\n self.cache = {**self.funcs}\n\n def add(self, f):\n \"Add type `t` and function `f`\"\n self.funcs[_p1_anno(f) or object] = f\n self._reset()\n\n def returns(self, x): return anno_ret(self[type(x)])\n def returns_none(self, x):\n r = anno_ret(self[type(x)])\n return r if r == NoneType else None\n\n def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})\n\n def __call__(self, x, *args, **kwargs):\n f = self[type(x)]\n if not f: return x\n if self.inst: f = types.MethodType(f, self.inst)\n return f(x, *args, **kwargs)\n\n def __get__(self, inst, owner):\n self.inst = inst\n return self\n\n def __getitem__(self, k):\n \"Find first matching type that is a super-class of `k`\"\n if k in self.cache: return self.cache[k]\n types = [f for f in self.funcs if issubclass(k,f)]\n res = self.funcs[types[0]] if types else None\n self.cache[k] = res\n return res",
"_____no_output_____"
],
[
"def f_col(x:typing.Collection): return x\ndef f_nin(x:numbers.Integral)->int: return x+1\ndef f_bti(x:TensorMask): return x\ndef f_fti(x:TensorImage): return x\ndef f_bll(x:bool): return x\ndef f_num(x:numbers.Number): return x\nt = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)\n\ntest_eq(t[int], f_nin)\ntest_eq(t[str], None)\ntest_eq(t[TensorImage], f_fti)\ntest_eq(t[float], f_num)\nt.add(f_col)\ntest_eq(t[str], f_col)\ntest_eq(t[int], f_nin)\ntest_eq(t(1), 2)\ntest_eq(t.returns(1), int)\nt",
"_____no_output_____"
],
[
"def m_nin(self, x:numbers.Integral): return x+1\ndef m_bll(self, x:bool): return x\ndef m_num(self, x:numbers.Number): return x\n\nt = TypeDispatch(m_nin,m_num,m_bll)\nclass A: f = t\na = A()\ntest_eq(a.f(1), 2)\ntest_eq(a.f(1.), 1.)",
"_____no_output_____"
]
],
[
[
"## Transform -",
"_____no_output_____"
]
],
[
[
"#export\nclass _TfmDict(dict):\n def __setitem__(self,k,v):\n if k=='_': k='encodes'\n if k not in ('encodes','decodes') or not isinstance(v,Callable): return super().__setitem__(k,v)\n if k not in self: super().__setitem__(k,TypeDispatch())\n res = self[k]\n res.add(v)",
"_____no_output_____"
],
[
"#export\nclass _TfmMeta(type):\n def __new__(cls, name, bases, dict):\n res = super().__new__(cls, name, bases, dict)\n res.__signature__ = inspect.signature(res.__init__)\n return res\n\n def __call__(cls, *args, **kwargs):\n f = args[0] if args else None\n n = getattr(f,'__name__',None)\n if not hasattr(cls,'encodes'): cls.encodes=TypeDispatch()\n if not hasattr(cls,'decodes'): cls.decodes=TypeDispatch()\n if isinstance(f,Callable) and n in ('decodes','encodes','_'):\n getattr(cls,'encodes' if n=='_' else n).add(f)\n return f\n return super().__call__(*args, **kwargs)\n\n @classmethod\n def __prepare__(cls, name, bases): return _TfmDict()",
"_____no_output_____"
],
[
"#export\nclass Transform(metaclass=_TfmMeta):\n \"Delegates (`__call__`,`decode`) to (`encodes`,`decodes`) if `filt` matches\"\n filt,init_enc,as_item_force,as_item,order = None,False,None,True,0\n def __init__(self, enc=None, dec=None, filt=None, as_item=False):\n self.filt,self.as_item = ifnone(filt, self.filt),as_item\n self.init_enc = enc or dec\n if not self.init_enc: return\n\n # Passing enc/dec, so need to remove (base) class level enc/dec\n del(self.__class__.encodes,self.__class__.decodes)\n self.encodes,self.decodes = (TypeDispatch(),TypeDispatch())\n if enc:\n self.encodes.add(enc)\n self.order = getattr(self.encodes,'order',self.order)\n if dec: self.decodes.add(dec)\n\n @property\n def use_as_item(self): return ifnone(self.as_item_force, self.as_item)\n def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)\n def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)\n def setup(self, items=None): return getattr(self,'setups',noop)(items)\n def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'\n\n def _call(self, fn, x, filt=None, **kwargs):\n if filt!=self.filt and self.filt is not None: return x\n f = getattr(self, fn)\n if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)\n res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)\n return retain_type(res, x)\n\n def _do_call(self, f, x, **kwargs):\n return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))\n\nadd_docs(Transform, decode=\"Delegate to `decodes` to undo transform\", setup=\"Delegate to `setups` to set up transform\")",
"_____no_output_____"
],
[
"show_doc(Transform)",
"_____no_output_____"
]
],
[
[
"Base class that delegates `__call__` and `decode` to `encodes` and `decodes`, doing nothing if param annotation doesn't match type. If called with listy `x` then it calls function with each item (unless `whole_typle`, in which case it's passed directly as a whole). The function (if matching 1st param type) will cast the result to the same as the input type, unless there's a return annotation (in which case it's cast to that), or the return annotation is `None` (in which case no casting is done).\n\nDetails: `Transform` is a base class where you override encodes and/or decodes. e.g. `__call__` uses `call` which looks up what to call using `func`. If `whole_tuple` is set, that just returns `encodes` (or `decodes` if not `is_enc`). Otherwise we find the first annotated param with `_p1_anno` and check if `x` is an instance of that (if not `is_listy(x)`). If it is, we return the function (encodes/decodes), otherwise None. `call` then passes on to `_do_call` which does nothing if function is `None`. If `x` is listy, then we return a *list* of {functions or `None`}, and a list of results from `_do_call` for each function is returned.",
"_____no_output_____"
]
],
[
[
"class A(Transform): pass\n@A\ndef encodes(self, x): return x+1\nf1 = A()\ntest_eq(f1(1), 2)\n\nclass B(A): pass\nf2 = B()\ntest_eq(f2(1), 2)\n\nclass A(Transform): pass\nf3 = A()\ntest_eq_type(f3(2), 2)\ntest_eq_type(f3.decode(2.0), 2.0)",
"_____no_output_____"
]
],
[
[
"`Transform` can be used as a decorator, to turn a function into a `Transform`.",
"_____no_output_____"
]
],
[
[
"@Transform\ndef f(x): return x//2\ntest_eq_type(f(2), 1)\ntest_eq_type(f.decode(2.0), 2.0)",
"_____no_output_____"
]
],
[
[
"You can derive from `Transform` and use either `_` or `encodes` for your encoding function.",
"_____no_output_____"
]
],
[
[
"class A(Transform):\n def _(self, x:TensorImage): return -x\nf = A()\nt = f(im_t)\ntest_eq(t, -im_t)\ntest_eq(f(1), 1)\ntest_eq(type(t), TensorImage)\nf",
"_____no_output_____"
]
],
[
[
"Without return annotation we get an `Int` back since that's what was passed.",
"_____no_output_____"
]
],
[
[
"class A(Transform): pass\n@A\ndef _(self, x:Int): return x//2 # `_` is an abbreviation for `encodes`\n@A\ndef encodes(self, x:float): return x+1\n\nf = A()\ntest_eq_type(f(Int(2)), Int(1))\ntest_eq_type(f(2), 2)\ntest_eq_type(f(2.), 3.)",
"_____no_output_____"
]
],
[
[
"Without return annotation we don't cast if we're not a subclass of the input type.",
"_____no_output_____"
]
],
[
[
"class A(Transform):\n def encodes(self, x:Int): return x/2\n def _(self, x:float): return x+1\n\nf = A()\ntest_eq_type(f(Int(2)), 1.)\ntest_eq_type(f(2), 2)\ntest_eq_type(f(Float(2.)), Float(3.))",
"_____no_output_____"
]
],
[
[
"With return annotation `None` we get back whatever Python creates usually.",
"_____no_output_____"
]
],
[
[
"def func(x)->None: return x/2\nf = Transform(func)\ntest_eq_type(f(2), 1.)\ntest_eq_type(f(2.), 1.)",
"_____no_output_____"
]
],
[
[
"Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.",
"_____no_output_____"
]
],
[
[
"def func(x): return Int(x+1)\ndef dec (x): return x-1\nf = Transform(func,dec)\nt = f(1)\ntest_eq_type(t, Int(2))\ntest_eq_type(f.decode(t), Int(1))",
"_____no_output_____"
]
],
[
[
"If the transform has `filt` then it's only applied if `filt` param matches.",
"_____no_output_____"
]
],
[
[
"f.filt = 1\ntest_eq(f(1, filt=1),2)\ntest_eq_type(f(1, filt=0), 1)",
"_____no_output_____"
],
[
"class A(Transform): \n def encodes(self, xy): x,y=xy; return (x+y,y)\n def decodes(self, xy): x,y=xy; return (x-y,y)\n\nf = A(as_item=True)\nt = f((1,2))\ntest_eq(t, (3,2))\ntest_eq(f.decode(t), (1,2))\nf.filt = 1\ntest_eq(f((1,2), filt=1), (3,2))\ntest_eq(f((1,2), filt=0), (1,2))",
"_____no_output_____"
],
[
"class AL(Transform): pass\n@AL\ndef encodes(self, x): return L(x_+1 for x_ in x)\n@AL\ndef decodes(self, x): return L(x_-1 for x_ in x)\n\nf = AL(as_item=True)\nt = f([1,2])\ntest_eq(t, [2,3])\ntest_eq(f.decode(t), [1,2])",
"_____no_output_____"
],
[
"def neg_int(x:numbers.Integral): return -x\n\nf = Transform(neg_int, as_item=False)\ntest_eq(f([1]), (-1,))\ntest_eq(f([1.]), (1.,))\ntest_eq(f([1.,2,3.]), (1.,-2,3.))\ntest_eq(f.decode([1,2]), (1,2))",
"_____no_output_____"
],
[
"#export\nclass InplaceTransform(Transform):\n \"A `Transform` that modifies in-place and just returns whatever it's passed\"\n def _call(self, fn, x, filt=None, **kwargs):\n super()._call(fn,x,filt,**kwargs)\n return x",
"_____no_output_____"
]
],
[
[
"## TupleTransform",
"_____no_output_____"
]
],
[
[
"#export\nclass TupleTransform(Transform):\n \"`Transform` that always treats `as_item` as `False`\"\n as_item_force=False",
"_____no_output_____"
],
[
"#export\nclass ItemTransform (Transform):\n \"`Transform` that always treats `as_item` as `True`\"\n as_item_force=True",
"_____no_output_____"
],
[
"def float_to_int(x:(float,int)): return Int(x)\n\nf = TupleTransform(float_to_int)\ntest_eq_type(f([1.]), (Int(1),))\ntest_eq_type(f([1]), (Int(1),))\ntest_eq_type(f(['1']), ('1',))\ntest_eq_type(f([1,'1']), (Int(1),'1'))\ntest_eq(f.decode([1]), [1])\n\ntest_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))",
"_____no_output_____"
],
[
"class B(TupleTransform): pass\nclass C(TupleTransform): pass\nf = B()\ntest_eq(f([1]), [1])",
"_____no_output_____"
],
[
"@B\ndef _(self, x:int): return x+1\n@B\ndef _(self, x:str): return x+'1'\n@B\ndef _(self, x)->None: return str(x)+'!'\n\nb,c = B(),C()\ntest_eq(b([1]), [2])\ntest_eq(b(['1']), ('11',))\ntest_eq(b([1.0]), ('1.0!',))\ntest_eq(c([1]), [1])\ntest_eq(b([1,2]), (2,3))\ntest_eq(b.decode([2]), [2])\nassert pickle.loads(pickle.dumps(b))",
"_____no_output_____"
],
[
"@B\ndef decodes(self, x:int): return x-1\ntest_eq(b.decode([2]), [1])\ntest_eq(b.decode(('2',)), ('2',))",
"_____no_output_____"
]
],
[
[
"Non-type-constrained functions are applied to all elements of a tuple.",
"_____no_output_____"
]
],
[
[
"class A(TupleTransform): pass\n@A\ndef _(self, x): return x+1\n@A\ndef decodes(self, x): return x-1\n\nf = A()\nt = f((1,2.0))\ntest_eq_type(t, (2,3.0))\ntest_eq_type(f.decode(t), (1,2.0))",
"_____no_output_____"
]
],
[
[
"Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.",
"_____no_output_____"
]
],
[
[
"class B(TupleTransform):\n def encodes(self, x:int): return Int(x+1)\n def encodes(self, x:str): return x+'1'\n def decodes(self, x:Int): return x//2\n\nf = B()\nstart = (1.,2,'3')\nt = f(start)\ntest_eq_type(t, (1.,Int(3),'31'))\ntest_eq(f.decode(t), (1.,Int(1),'31'))",
"_____no_output_____"
]
],
[
[
"The same behavior also works with `typing` module type classes.",
"_____no_output_____"
]
],
[
[
"class A(Transform): pass\n@A\ndef _(self, x:numbers.Integral): return x+1\n@A\ndef _(self, x:float): return x*3\n@A\ndef decodes(self, x:int): return x-1\n\nf = A()\nstart = 1.0\nt = f(start)\ntest_eq(t, 3.)\ntest_eq(f.decode(t), 3)\n\nf = A(as_item=False)\nstart = (1.,2,3.)\nt = f(start)\ntest_eq(t, (3.,3,9.))\ntest_eq(f.decode(t), (3.,2,9.))",
"_____no_output_____"
]
],
[
[
"Transform accepts lists",
"_____no_output_____"
]
],
[
[
"def a(x): return L(x_+1 for x_ in x)\ndef b(x): return L(x_-1 for x_ in x)\nf = TupleTransform(a,b)\n\nt = f((L(1,2),))\ntest_eq(t, (L(2,3),))\ntest_eq(f.decode(t), (L(1,2),))",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom local.notebook.export import notebook2script\nnotebook2script(all_fs=True)",
"Converted 00_test.ipynb.\nConverted 01_core.ipynb.\nConverted 01a_torch_core.ipynb.\nConverted 01b_script.ipynb.\nConverted 01c_dataloader.ipynb.\nConverted 02_data_transforms.ipynb.\nConverted 03_data_pipeline.ipynb.\nConverted 05_data_core.ipynb.\nConverted 06_data_source.ipynb.\nConverted 07_vision_core.ipynb.\nConverted 08_pets_tutorial.ipynb.\nConverted 09_vision_augment.ipynb.\nConverted 11_layers.ipynb.\nConverted 11a_vision_models_xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_learner.ipynb.\nConverted 14_callback_schedule.ipynb.\nConverted 15_callback_hook.ipynb.\nConverted 16_callback_progress.ipynb.\nConverted 17_callback_tracker.ipynb.\nConverted 18_callback_fp16.ipynb.\nConverted 19_callback_mixup.ipynb.\nConverted 20_metrics.ipynb.\nConverted 21_tutorial_imagenette.ipynb.\nConverted 30_text_core.ipynb.\nConverted 31_text_data.ipynb.\nConverted 32_text_models_awdlstm.ipynb.\nConverted 33_text_models_core.ipynb.\nConverted 34_callback_rnn.ipynb.\nConverted 35_tutorial_wikitext.ipynb.\nConverted 36_text_models_qrnn.ipynb.\nConverted 40_tabular_core.ipynb.\nConverted 41_tabular_model.ipynb.\nConverted 50_data_block.ipynb.\nConverted 90_notebook_core.ipynb.\nConverted 91_notebook_export.ipynb.\nConverted 92_notebook_showdoc.ipynb.\nConverted 93_notebook_export2html.ipynb.\nConverted 94_index.ipynb.\nConverted 95_utils_test.ipynb.\nConverted 96_data_external.ipynb.\nConverted notebook2jekyll.ipynb.\nConverted tmp.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72924bdd63433eb2227bae8c017b9f7efbfbac7 | 778,005 | ipynb | Jupyter Notebook | 3. Facial Keypoint Detection, Complete Pipeline.ipynb | pmobbs/facial-keypoints | 0bf49a6437f52408993b1be2d0354d04028c6976 | [
"MIT"
] | 1 | 2020-07-17T17:36:51.000Z | 2020-07-17T17:36:51.000Z | 3. Facial Keypoint Detection, Complete Pipeline.ipynb | pmobbs/facial-keypoints | 0bf49a6437f52408993b1be2d0354d04028c6976 | [
"MIT"
] | null | null | null | 3. Facial Keypoint Detection, Complete Pipeline.ipynb | pmobbs/facial-keypoints | 0bf49a6437f52408993b1be2d0354d04028c6976 | [
"MIT"
] | null | null | null | 1,834.917453 | 323,020 | 0.959029 | [
[
[
"## Face and Facial Keypoint detection\n\nAfter you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input and, so, to detect any face, you'll first have to do some pre-processing.\n\n1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).\n2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was tp rescale, normalize, and turn any iimage into a Tensor to be accepted as input to your CNN.\n3. Use your trained model to detect facial keypoints on the image.\n\n---",
"_____no_output_____"
],
[
"In the next python cell we load in required libraries for this section of the project.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n%matplotlib inline\n\n",
"_____no_output_____"
]
],
[
[
"#### Select an image \n\nSelect an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.",
"_____no_output_____"
]
],
[
[
"import cv2\n# load in color image for face detection\nimage = cv2.imread('images/obamas.jpg')\n\n# switch red and blue color channels \n# --> by default OpenCV assumes BLUE comes first, not RED as in many images\nimage = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n \n# plot the image\nfig = plt.figure(figsize=(9,9))\nplt.imshow(image)",
"_____no_output_____"
]
],
[
[
"## Detect all faces in an image\n\nNext, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.\n\nIn the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.\n\nAn example of face detection on a variety of images is shown below.\n\n<img src='images/haar_cascade_ex.png' width=80% height=80%/>\n",
"_____no_output_____"
]
],
[
[
"# load in a haar cascade classifier for detecting frontal faces\nface_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')\n\n# run the detector\n# the output here is an array of detections; the corners of each detection box\n# if necessary, modify these parameters until you successfully identify every face in a given image\nfaces = face_cascade.detectMultiScale(image, 1.2, 2)\n\n# make a copy of the original image to plot detections on\nimage_with_detections = image.copy()\n\n# loop over the detected faces, mark the image where each face is found\nfor (x,y,w,h) in faces:\n # draw a rectangle around each detected face\n # you may also need to change the width of the rectangle drawn depending on image resolution\n cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3) \n\nfig = plt.figure(figsize=(9,9))\n\nplt.imshow(image_with_detections)",
"_____no_output_____"
]
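,
[
"# Optional exercise sketch (assumptions: OpenCV's standard haarcascade_eye.xml lives in\n# the detector_architectures/ directory, as the markdown above suggests). Draws green\n# boxes for eyes inside each detected face region.\neye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')\nfor (x,y,w,h) in faces:\n    face_roi = image_with_detections[y:y+h, x:x+w]\n    for (ex,ey,ew,eh) in eye_cascade.detectMultiScale(face_roi):\n        cv2.rectangle(face_roi,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)\nplt.figure(figsize=(9,9))\nplt.imshow(image_with_detections)",
"_____no_output_____"
]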
],
[
[
"## Loading in a trained model\n\nOnce you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.\n\nFirst, load your best model by its filename.",
"_____no_output_____"
]
],
[
[
"import torch\nfrom models import Net\n\nnet = Net()\n\n## TODO: load the best saved model parameters (by your path name)\n## You'll need to un-comment the line below and add the correct name for *your* saved model\nnet.load_state_dict(torch.load('saved_models/keypoints_model_1.pt'))\n\n## print out your net and prepare it for testing (uncomment the line below)\nnet.eval()",
"_____no_output_____"
],
[
"# visualize the output\ndef visualize_output(test_image, test_output):\n\n plt.figure()\n\n # un-transform the image data\n #image = test_image.data # get the image from it's Variable wrapper\n image = test_image.numpy() # convert to numpy array from a Tensor\n image = np.transpose(image, (1, 2, 0)) # transpose to go from torch to numpy image\n\n # un-transform the predicted key_pts data\n predicted_key_pts = test_output.data\n predicted_key_pts = predicted_key_pts.numpy()\n # undo normalization of keypoints \n predicted_key_pts = predicted_key_pts*50.0+100\n \n # call show_all_keypoints\n show_all_keypoints(np.squeeze(image), predicted_key_pts)\n\n plt.axis('off')\n\n plt.show()\n",
"_____no_output_____"
],
[
"def show_all_keypoints(image, predicted_key_pts, gt_pts=None):\n \"\"\"Show image with predicted keypoints\"\"\"\n # image is grayscale\n plt.imshow(image, cmap='gray')\n plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')\n # plot ground truth points as green pts\n if gt_pts is not None:\n plt.scatter(gt_pts[:, 0], gt_pts[:, 1], s=20, marker='.', c='g')\n",
"_____no_output_____"
]
],
[
[
"## Keypoint detection\n\nNow, we'll loop over each detected face in an image (again!) only this time, you'll transform those faces in Tensors that your CNN can accept as input images.\n\n### TODO: Transform each detected face into an input Tensor\n\nYou'll need to perform the following steps for each detected face:\n1. Convert the face from RGB to grayscale\n2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]\n3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)\n4. Reshape the numpy image into a torch image.\n\nYou may find it useful to consult to transformation code in `data_load.py` to help you perform these processing steps.\n\n\n### TODO: Detect and display the predicted keypoints\n\nAfter each face has been appropriately converted into an input Tensor for your network to see as input, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The ouput should be the predicted the facial keypoints. These keypoints will need to be \"un-normalized\" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following with facial keypoints that closely match the facial features on each individual face:\n\n<img src='images/michelle_detected.png' width=30% height=30%/>\n\n\n",
"_____no_output_____"
]
],
[
[
"image_copy = np.copy(image)\n\n# loop over the detected faces from your haar cascade\nfor (x,y,w,h) in faces:\n \n # Select the region of interest that is the face in the image \n roi = image_copy[y:y+h, x:x+w]\n \n ## TODO: Convert the face region from RGB to grayscale\n gray_roi = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)\n \n ## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]\n gray_norm = gray_roi/255.0\n \n ## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)\n resized_image = cv2.resize(gray_norm, (224, 224)) \n \n ## TODO: Reshape the numpy image shape (H0 x W1 x C2) into a torch image shape (C2 x H0 x W1)\n #image_trans = np.transpose(resized_image, (2, 0, 1))\n \n ## TODO: Make facial keypoint predictions using your loaded, trained network \n ## perform a forward pass to get the predicted facial keypoints\n\n resized_image = resized_image.reshape(-1, 1, 224, 224)\n img_tensor = torch.from_numpy(resized_image)\n \n\n #img_tensor.unsqueeze_(0)\n \n #img_tensor.requires_grad_(False)\n #model.to(device)\n img_tensor = img_tensor.type(torch.FloatTensor)\n output_pts = net(img_tensor)\n # reshape to batch_size x 68 x 2 pts\n output_pts = output_pts.view(output_pts.size()[0], 68, -1)\n \n ## TODO: Display each detected face and the corresponding keypoints \n #plt.imshow(resized_image, cmap=\"gray\")\n visualize_output(img_tensor[0], output_pts[0]) \n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72931a905120db43f8b245a774eef8aa04c53a9 | 12,834 | ipynb | Jupyter Notebook | lotus_predict/.ipynb_checkpoints/Try_suffer_tf-checkpoint.ipynb | BuiNgocHai/youtube-8m | 8aa922b02b81821655f9dbd78a575b732ed27b77 | [
"Apache-2.0"
] | null | null | null | lotus_predict/.ipynb_checkpoints/Try_suffer_tf-checkpoint.ipynb | BuiNgocHai/youtube-8m | 8aa922b02b81821655f9dbd78a575b732ed27b77 | [
"Apache-2.0"
] | null | null | null | lotus_predict/.ipynb_checkpoints/Try_suffer_tf-checkpoint.ipynb | BuiNgocHai/youtube-8m | 8aa922b02b81821655f9dbd78a575b732ed27b77 | [
"Apache-2.0"
] | null | null | null | 37.858407 | 270 | 0.537011 | [
[
[
"import numpy as np \nimport pandas as pd\n\nimport os\nimport tensorflow as tf\nimport numpy as np\nfrom IPython.display import YouTubeVideo\nfrom tensorflow import gfile\nfrom tensorflow import logging",
"/home/vicker/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n/home/vicker/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or 
'1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n"
],
[
"frame_lvl_record = '/home/vicker/Desktop/train_new_0.tfrecord'\nfeat_rgb = []\nfeat_audio = []\n\nfor example in tf.python_io.tf_record_iterator(frame_lvl_record): \n tf_seq_example = tf.train.SequenceExample.FromString(example)\n n_frames = len(tf_seq_example.feature_lists.feature_list['audio'].feature)\n sess = tf.InteractiveSession()\n rgb_frame = []\n audio_frame = []\n # iterate through frames\n for i in range(n_frames):\n rgb_frame.append(tf.cast(tf.decode_raw(\n tf_seq_example.feature_lists.feature_list['rgb']\n .feature[i].bytes_list.value[0],tf.uint8)\n ,tf.float32).eval())\n audio_frame.append(tf.cast(tf.decode_raw(\n tf_seq_example.feature_lists.feature_list['audio']\n .feature[i].bytes_list.value[0],tf.uint8)\n ,tf.float32).eval())\n \n \n sess.close()\n \n feat_audio.append(audio_frame)\n feat_rgb.append(rgb_frame)\n break",
"_____no_output_____"
],
[
"print('The first video has %d frames' %len(feat_rgb[0]))\nfeat_rgb[0]",
"_____no_output_____"
],
[
"record_iterator = tf.python_io.tf_record_iterator('/home/vicker/Desktop/output17.tfrecord')\nwriter = tf.io.TFRecordWriter('/home/vicker/Desktop/output173.tfrecord')\nfor string_record in record_iterator:\n example = tf.train.SequenceExample()\n example.ParseFromString(string_record)\n \n\n \n writer.write(example.SerializeToString())\nwriter.close()\n# for b_str in b:\n# b = tf.python_io.tf_record_iterator('/home/vicker/Desktop/train_new_0.tfrecord')\n# b_ex = tf.train.SequenceExample()\n# b_ex.ParseFromString(string_record)\n# print(b_ex)\n# writer.write(b_ex.SerializeToString())\n",
"_____no_output_____"
],
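[
"# Added sketch (not in the original notebook): shuffle the records of a tfrecord\n# file, which appears to be what this notebook is experimenting towards. All records\n# are held in memory, so this only suits files that fit in RAM; the output path is\n# made up for illustration.\nimport random\nrecords = list(tf.python_io.tf_record_iterator('/home/vicker/Desktop/output17.tfrecord'))\nrandom.shuffle(records)\nwith tf.io.TFRecordWriter('/home/vicker/Desktop/output17_shuffled.tfrecord') as w:\n    for r in records:\n        w.write(r)",
"_____no_output_____"
],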
[
"record_iterator = tf.python_io.tf_record_iterator('/home/vicker/Desktop/train3764.tfrecord')\ni = 0\nwriter = tf.io.TFRecordWriter('/home/vicker/Desktop/output172.tfrecord')\nfor string_record in record_iterator:\n example = tf.train.Example()\n example.ParseFromString(string_record)\n #print(example.features.feature['labels'].int64_list.value[0])\n #print(example.features.feature['labels'])\n #example.features.feature['labels'].int64_list.value[0] = 0\n print(example)\n #writer.write(example.SerializeToString())\n i+=1\n if i ==3:\n break\nwriter.close()\n# image = example.features.feature[\"rgb\"].b",
"_____no_output_____"
],
[
"record_iterator = tf.python_io.tf_record_iterator('/home/vicker/Desktop/train3764.tfrecord')\ni = 0\nfor string_record in record_iterator:\n example = tf.train.Example()\n example.ParseFromString(string_record)\n print(example)\n i+=1\n if i ==3:\n break\n# image = example.features.feature[\"rgb\"].b",
"features {\n feature {\n key: \"id\"\n value {\n bytes_list {\n value: \"J58S\"\n }\n }\n }\n feature {\n key: \"labels\"\n value {\n int64_list {\n value: 2\n value: 45\n value: 51\n value: 56\n value: 77\n value: 295\n }\n }\n }\n}\n\nfeatures {\n feature {\n key: \"id\"\n value {\n bytes_list {\n value: \"x68S\"\n }\n }\n }\n feature {\n key: \"labels\"\n value {\n int64_list {\n value: 0\n value: 1\n value: 1635\n }\n }\n }\n}\n\nfeatures {\n feature {\n key: \"id\"\n value {\n bytes_list {\n value: \"UL8S\"\n }\n }\n }\n feature {\n key: \"labels\"\n value {\n int64_list {\n value: 168\n value: 330\n }\n }\n }\n}\n\n"
],
[
"print(example)",
"_____no_output_____"
],
[
"feature_list = {\n 'rgb': tf.train.FeatureList(feature=rgb_features),\n}\ncontext_features = {\n FLAGS.labels_feature_key:\n _int64_list_feature(sorted(map(int, labels.split(';')))),\n 'id':\n _bytes_feature(_make_bytes(map(ord, video_file))),\n 'mean_' + 'rgb':\n tf.train.Feature(\n float_list=tf.train.FloatList(value=mean_rgb_features)),\n}\n",
"_____no_output_____"
],
[
"s = 'train3764.tfrecord'\ns[:5]",
"_____no_output_____"
],
[
" with tf.name_scope(\"input\"):\n files = gfile.Glob('/home/vicker/Desktop/output17.tfrecord')\n if not files:\n raise IOError(\"Unable to find input files. data_pattern='\" +\n data_pattern + \"'\")\n logging.info(\"number of input files: \" + str(len(files)))\n filename_queue = tf.train.string_input_producer(files,\n num_epochs=1,\n shuffle=False)\n examples_and_labels = [\n reader.prepare_reader(filename_queue) for _ in range(1)\n ]\n\n input_data_dict = (tf.train.batch_join(examples_and_labels,\n batch_size=batch_size,\n allow_smaller_final_batch=True,\n enqueue_many=True))\n video_id_batch = input_data_dict[\"video_ids\"]\n video_batch = input_data_dict[\"video_matrix\"]\n num_frames_batch = input_data_dict[\"num_frames\"]",
"_____no_output_____"
],
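[
"# Added sketch (not in the original notebook): the queue-based pipeline above only\n# yields data once queue runners are started inside a session, roughly like this.\nwith tf.Session() as sess:\n    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])\n    coord = tf.train.Coordinator()\n    threads = tf.train.start_queue_runners(sess=sess, coord=coord)\n    ids, frames = sess.run([video_id_batch, num_frames_batch])\n    print(ids, frames)\n    coord.request_stop()\n    coord.join(threads)",
"_____no_output_____"
],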
[
"index = 10\nfor i in range(index +1,19):\n print(i)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e7293a3c98dc5208f9fdb630514d254095dcafe5 | 376,612 | ipynb | Jupyter Notebook | image-processing/watershed-obj-segmentation/watershed.ipynb | jafetimbre/ms-school-stuff | 0fbec4c9adb63989cbe54a5047d791ad8c04bda0 | [
"MIT"
] | 2 | 2021-10-29T19:18:57.000Z | 2021-10-29T19:19:02.000Z | image-processing/watershed-obj-segmentation/watershed.ipynb | jafetimbre/ms-school-stuff | 0fbec4c9adb63989cbe54a5047d791ad8c04bda0 | [
"MIT"
] | null | null | null | image-processing/watershed-obj-segmentation/watershed.ipynb | jafetimbre/ms-school-stuff | 0fbec4c9adb63989cbe54a5047d791ad8c04bda0 | [
"MIT"
] | null | null | null | 1,442.957854 | 168,350 | 0.958437 | [
[
[
"<a href=\"https://colab.research.google.com/github/jafetimbre/ms-school-stuff/blob/master/image-processing/watershed-obj-segmentation/watershed.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\nfrom urllib.request import urlopen\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"def show_im(img):\n plt.figure(figsize = (10, 6))\n plt.axis(\"off\")\n plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))",
"_____no_output_____"
],
[
"req = urlopen(\"https://raw.githubusercontent.com/jafetimbre/ms-school-stuff/master/image-processing/watershed-obj-segmentation/res/bani.jpg\")\narr = np.asarray(bytearray(req.read()), dtype=np.uint8)\ncoins = cv2.imdecode(arr, -1)\n\nshow_im(coins)",
"_____no_output_____"
],
[
"gray = cv2.cvtColor(coins, cv2.COLOR_BGR2GRAY)\nret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)\n\nshow_im(thresh)",
"_____no_output_____"
],
[
"# noise removal\nkernel = np.ones((3,3), np.uint8)\nopening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN,kernel, iterations = 2)\n\n# sure background area\nsure_bg = cv2.dilate(opening,kernel,iterations=3)\n\n# Finding sure foreground area\ndist_transform = cv2.distanceTransform(opening,cv2.DIST_L2,5)\nret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)\n\n# Finding unknown region\nsure_fg = np.uint8(sure_fg)\nunknown = cv2.subtract(sure_bg,sure_fg)",
"_____no_output_____"
],
[
"show_im(sure_fg)\nshow_im(sure_bg)",
"_____no_output_____"
],
[
"# Marker labelling\nret, markers = cv2.connectedComponents(sure_fg)\n\n# Add one to all labels so that sure background is not 0, but 1\nmarkers = markers+1\n\n# Now, mark the region of unknown with zero\nmarkers[unknown==255] = 0",
"_____no_output_____"
],
[
"markers = cv2.watershed(coins, markers)\ncoins[markers == -1] = [255,0,0]",
"_____no_output_____"
],
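[
"# Added sketch (not in the original notebook): count the segmented coins.\n# In `markers`, label 1 is the background and -1 marks watershed boundaries,\n# so the remaining marker ids correspond to individual objects.\nnum_coins = len(np.unique(markers)) - 2\nprint('coins found:', num_coins)",
"_____no_output_____"
],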
[
"show_im(coins)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72945a8377351e1ee6b5b0f87ddef3e53c1cb9c | 34,003 | ipynb | Jupyter Notebook | Convolutional Neural Networks/Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb | ornob39/Tensor-Flow-in-Practice-Specialization | b955658ced6231ab19a1fc1197ae6616745bfe1d | [
"MIT"
] | null | null | null | Convolutional Neural Networks/Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb | ornob39/Tensor-Flow-in-Practice-Specialization | b955658ced6231ab19a1fc1197ae6616745bfe1d | [
"MIT"
] | null | null | null | Convolutional Neural Networks/Exercise_1_Cats_vs_Dogs_Question-FINAL.ipynb | ornob39/Tensor-Flow-in-Practice-Specialization | b955658ced6231ab19a1fc1197ae6616745bfe1d | [
"MIT"
] | null | null | null | 70.987474 | 10,528 | 0.786578 | [
[
[
"# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n# ATTENTION: Please do not add or remove any cells in the exercise. The grader will check specific cells based on the cell position.\n# ATTENTION: Please use the provided epoch values when training.\n\n# In this exercise you will train a CNN on the FULL Cats-v-dogs dataset\n# This will require you doing a lot of data preprocessing because\n# the dataset isn't split into training and validation for you\n# This code block has all the required inputs\nimport os\nimport zipfile\nimport random\nimport tensorflow as tf\nimport shutil\nfrom tensorflow.keras.optimizers import RMSprop\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom shutil import copyfile\nfrom os import getcwd",
"_____no_output_____"
],
[
"path_cats_and_dogs = f\"{getcwd()}/../tmp2/cats-and-dogs.zip\"\nshutil.rmtree('/tmp')\n\nlocal_zip = path_cats_and_dogs\nzip_ref = zipfile.ZipFile(local_zip, 'r')\nzip_ref.extractall('/tmp')\nzip_ref.close()\n",
"_____no_output_____"
],
[
"print(len(os.listdir('/tmp/PetImages/Cat/')))\nprint(len(os.listdir('/tmp/PetImages/Dog/')))\n\n# Expected Output:\n# 1500\n# 1500",
"1500\n1500\n"
],
[
"# Use os.mkdir to create your directories\n# You will need a directory for cats-v-dogs, and subdirectories for training\n# and testing. These in turn will need subdirectories for 'cats' and 'dogs'\n\ntry:\n #YOUR CODE GOES HERE\n os.mkdir('/tmp/cats-v-dogs')\n os.mkdir('/tmp/cats-v-dogs/training')\n os.mkdir('/tmp/cats-v-dogs/testing')\n os.mkdir('/tmp/cats-v-dogs/training/cats')\n os.mkdir('/tmp/cats-v-dogs/training/dogs')\n os.mkdir('/tmp/cats-v-dogs/testing/cats')\n os.mkdir('/tmp/cats-v-dogs/testing/dogs')\nexcept OSError:\n pass",
"_____no_output_____"
],
[
"# Write a python function called split_data which takes\n# a SOURCE directory containing the files\n# a TRAINING directory that a portion of the files will be copied to\n# a TESTING directory that a portion of the files will be copie to\n# a SPLIT SIZE to determine the portion\n# The files should also be randomized, so that the training set is a random\n# X% of the files, and the test set is the remaining files\n# SO, for example, if SOURCE is PetImages/Cat, and SPLIT SIZE is .9\n# Then 90% of the images in PetImages/Cat will be copied to the TRAINING dir\n# and 10% of the images will be copied to the TESTING dir\n# Also -- All images should be checked, and if they have a zero file length,\n# they will not be copied over\n#\n# os.listdir(DIRECTORY) gives you a listing of the contents of that directory\n# os.path.getsize(PATH) gives you the size of the file\n# copyfile(source, destination) copies a file from source to destination\n# random.sample(list, len(list)) shuffles a list\ndef split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):\n# YOUR CODE STARTS HERE\n dataset = []\n \n for unitData in os.listdir(SOURCE):\n data = SOURCE + unitData\n if (os.path.getsize(data) > 0):\n dataset.append(unitData)\n else:\n print('Skipped ' + unitData)\n print('Invalid file size! i.e Zero length.')\n \n train_data_length = int(len(dataset) * SPLIT_SIZE)\n test_data_length = int(len(dataset) - train_data_length)\n shuffled_set = random.sample(dataset, len(dataset))\n train_set = shuffled_set[0:train_data_length]\n test_set = shuffled_set[-test_data_length:]\n \n for unitData in train_set:\n temp_train_data = SOURCE + unitData\n final_train_data = TRAINING + unitData\n copyfile(temp_train_data, final_train_data)\n \n for unitData in test_set:\n temp_test_data = SOURCE + unitData\n final_test_data = TESTING + unitData\n copyfile(temp_train_data, final_test_data)\n\n# YOUR CODE ENDS HERE\n\nCAT_SOURCE_DIR = \"/tmp/PetImages/Cat/\"\nTRAINING_CATS_DIR = \"/tmp/cats-v-dogs/training/cats/\"\nTESTING_CATS_DIR = \"/tmp/cats-v-dogs/testing/cats/\"\nDOG_SOURCE_DIR = \"/tmp/PetImages/Dog/\"\nTRAINING_DOGS_DIR = \"/tmp/cats-v-dogs/training/dogs/\"\nTESTING_DOGS_DIR = \"/tmp/cats-v-dogs/testing/dogs/\"\n\nsplit_size = .9\nsplit_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)\nsplit_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)",
"_____no_output_____"
],
[
"print(len(os.listdir('/tmp/cats-v-dogs/training/cats/')))\nprint(len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))\nprint(len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))\nprint(len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))\n\n# Expected output:\n# 1350\n# 1350\n# 150\n# 150",
"1350\n1350\n150\n150\n"
],
[
"# DEFINE A KERAS MODEL TO CLASSIFY CATS V DOGS\n# USE AT LEAST 3 CONVOLUTION LAYERS\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(16,(3,3), activation = 'relu', input_shape = (150,150,3)),\n tf.keras.layers.MaxPooling2D(2,2),\n tf.keras.layers.Conv2D(16,(3,3), activation = 'relu'),\n tf.keras.layers.MaxPooling2D(2,2),\n tf.keras.layers.Conv2D(16,(3,3), activation = 'relu'),\n tf.keras.layers.MaxPooling2D(2,2),\n \n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(512 , activation='relu'),\n tf.keras.layers.Dense(1 , activation='sigmoid')\n# YOUR CODE HERE\n])\n\nmodel.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['acc'])",
"_____no_output_____"
]
],
[
[
"# NOTE:\n\nIn the cell below you **MUST** use a batch size of 10 (`batch_size=10`) for the `train_generator` and the `validation_generator`. Using a batch size greater than 10 will exceed memory limits on the Coursera platform.",
"_____no_output_____"
]
],
[
[
"\nTRAINING_DIR = '/tmp/cats-v-dogs/training'\ntrain_datagen = ImageDataGenerator( rescale = 1/255)\n\n# NOTE: YOU MUST USE A BATCH SIZE OF 10 (batch_size=10) FOR THE \n# TRAIN GENERATOR.\ntrain_generator = train_datagen.flow_from_directory(\nTRAINING_DIR , batch_size = 10 , class_mode = 'binary' , target_size = (150,150)\n)\n\nVALIDATION_DIR = '/tmp/cats-v-dogs/testing'\nvalidation_datagen = ImageDataGenerator( rescale = 1/255)\n\n# NOTE: YOU MUST USE A BACTH SIZE OF 10 (batch_size=10) FOR THE \n# VALIDATION GENERATOR.\nvalidation_generator = validation_datagen.flow_from_directory(\nVALIDATION_DIR , batch_size= 10 , class_mode = 'binary' , target_size=(150,150)\n)\n\n\n\n# Expected Output:\n# Found 2700 images belonging to 2 classes.\n# Found 300 images belonging to 2 classes.",
"Found 2700 images belonging to 2 classes.\nFound 300 images belonging to 2 classes.\n"
],
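[
"# Added sketch (not part of the graded exercise): sanity-check one batch from the\n# training generator to confirm the image shape and the required 10-image batch size.\nimages, labels = next(train_generator)\nprint(images.shape) # expect (10, 150, 150, 3)\nprint(labels.shape) # expect (10,)",
"_____no_output_____"
],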
[
"history = model.fit_generator(train_generator,\n epochs=2,\n verbose=1,\n validation_data=validation_generator)\n",
"Epoch 1/2\n270/270 [==============================] - 42s 154ms/step - loss: 0.6935 - acc: 0.5767 - val_loss: 0.6875 - val_acc: 0.5000\nEpoch 2/2\n270/270 [==============================] - 36s 132ms/step - loss: 0.6037 - acc: 0.6881 - val_loss: 0.9902 - val_acc: 0.5000\n"
],
[
"# PLOT LOSS AND ACCURACY\n%matplotlib inline\n\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\n\n#-----------------------------------------------------------\n# Retrieve a list of list results on training and test data\n# sets for each training epoch\n#-----------------------------------------------------------\nacc=history.history['acc']\nval_acc=history.history['val_acc']\nloss=history.history['loss']\nval_loss=history.history['val_loss']\n\nepochs=range(len(acc)) # Get number of epochs\n\n#------------------------------------------------\n# Plot training and validation accuracy per epoch\n#------------------------------------------------\nplt.plot(epochs, acc, 'r', \"Training Accuracy\")\nplt.plot(epochs, val_acc, 'b', \"Validation Accuracy\")\nplt.title('Training and validation accuracy')\nplt.figure()\n\n#------------------------------------------------\n# Plot training and validation loss per epoch\n#------------------------------------------------\nplt.plot(epochs, loss, 'r', \"Training Loss\")\nplt.plot(epochs, val_loss, 'b', \"Validation Loss\")\n\n\nplt.title('Training and validation loss')\n\n# Desired output. Charts with training and validation metrics. No crash :)",
"_____no_output_____"
]
],
[
[
"# Submission Instructions",
"_____no_output_____"
]
],
[
[
"# Now click the 'Submit Assignment' button above.",
"_____no_output_____"
]
],
[
[
"# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. ",
"_____no_output_____"
]
],
[
[
"%%javascript\n<!-- Save the notebook -->\nIPython.notebook.save_checkpoint();",
"_____no_output_____"
],
[
"%%javascript\nIPython.notebook.session.delete();\nwindow.onbeforeunload = null\nsetTimeout(function() { window.close(); }, 1000);",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e729484ea1049f1a450ebb267395eeed36a73ee8 | 3,038 | ipynb | Jupyter Notebook | course/chapter6/section4.ipynb | jackie930/notebooks | 7ad9790b33dd74f13756f0013f35c3e0552e9bb5 | [
"Apache-2.0"
] | 1 | 2021-12-15T19:41:07.000Z | 2021-12-15T19:41:07.000Z | course/chapter6/section4.ipynb | jackie930/notebooks | 7ad9790b33dd74f13756f0013f35c3e0552e9bb5 | [
"Apache-2.0"
] | null | null | null | course/chapter6/section4.ipynb | jackie930/notebooks | 7ad9790b33dd74f13756f0013f35c3e0552e9bb5 | [
"Apache-2.0"
] | 1 | 2021-11-07T18:18:06.000Z | 2021-11-07T18:18:06.000Z | 21.394366 | 122 | 0.498025 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72954164fd401e0c705b039b74f67bc5ba7f54b | 233,114 | ipynb | Jupyter Notebook | projects/projects_summer_2021/15_football.ipynb | nlihin/my-binder | 303e3ff0123b004e362b7a07ec4ac5add3ac8b56 | [
"CC0-1.0"
] | 2 | 2021-05-19T09:36:32.000Z | 2022-03-15T00:41:32.000Z | projects/projects_summer_2021/15_football.ipynb | nlihin/my-binder | 303e3ff0123b004e362b7a07ec4ac5add3ac8b56 | [
"CC0-1.0"
] | null | null | null | projects/projects_summer_2021/15_football.ipynb | nlihin/my-binder | 303e3ff0123b004e362b7a07ec4ac5add3ac8b56 | [
"CC0-1.0"
] | 13 | 2021-04-19T15:44:42.000Z | 2022-03-01T08:41:56.000Z | 196.886824 | 58,232 | 0.883829 | [
[
[
"## Group15\n### members names: \nOr Nir \nElad Mor \nEitan Hameiri \nHananel Yefet",
"_____no_output_____"
],
[
"### Data source: \nhttps://www.kaggle.com/ahmedterry/cristiano-ronald-vs-lionel-messi-weekly-updated",
"_____no_output_____"
],
[
"# **RONALDO vs MESSI**",
"_____no_output_____"
],
[
"<img src=https://pbs.twimg.com/media/EoRIiXcWMAAcS22.jpg width=\"400\">",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndf = pd.read_csv('https://raw.githubusercontent.com/verticale3185/FINEL-PROJECT/main/cristiano_vs_messi.csv')\ndf",
"_____no_output_____"
]
],
[
[
"# Data explanation: \n \nThis dataset summarizes goals scored by Messi and Ronaldo of all their careers until 2020. \nEach row introduce as one goal. \nThere are 1300 rows and 10 columns.\n\n## Columns:\n**Player** = Ronaldo / Messi. \n**Comp** = competition. \n**Round** = the round of the match in the competition. \n**Date** = date of the match. \n**Venue** = home / away. \n**Opp** = the opposing team. \n**Pos** = possion of the plater on the field. \n**Min** = a minute of match. \n**Type** = type of the goal. \n**Assisted** = the name of the team player that assisted to the player (there non-assisted goals - solo)",
"_____no_output_____"
],
[
"# Handling missing data:",
"_____no_output_____"
],
[
"In the dataset, every empty raw is because there was more than one goal in the same match. \nSo, we fill all the mising data in the relevent data by \"ffill\".",
"_____no_output_____"
]
],
[
[
"columns =['comp', 'round', 'date', 'venue', 'opp', 'pos']\nfor x in columns:\n df[x]=df[x].fillna(method='ffill')",
"_____no_output_____"
]
],
[
[
"1. **assisted column:** \nEvery NaN meaning to non assisted goal - solo attack. \nSo we replaced all NaN with \"Solo\". \n\n2. **date column:** \nWe changed the type to dateime type and made new \"year\" column. \n\n3. **min column:** \nWe changed the type to integer and cleaned signs like ' and +. \n\n4. **pos column:** \nCleaned not necessary signs.\n \n5. we created seperate charts for each player.",
"_____no_output_____"
]
],
[
[
"df.assisted = df.assisted.fillna('Solo')\ndf.date = pd.to_datetime(df.date)\ndf['year'] = pd.DatetimeIndex(df['date']).year\ndf['year'].fillna(method = 'ffill', inplace = True)\ndf['year'] = df['year'].astype(int)\n\ndf['min'] = df['min'].astype(str)\ndf['min'] = df['min'].str.replace(\"'\", \"\")\nnew_time = df['min'].str.extract('(\\d+)[+](\\d+)',expand=True).dropna().astype(int).sum(axis=1)\ndf.loc[new_time.index,'min'] = new_time\ndf['min'] = df['min'].astype(int)\n\nif len(df.pos) > 2:\n df.pos = df.pos.str[0:2]\n\ndf['player'] = df['player'].str.title()",
"_____no_output_____"
],
[
"messi_df = df.loc[df.player == 'Messi'].reset_index()\nronaldo_df = df.loc[df.player == 'Ronaldo']",
"_____no_output_____"
]
],
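[
[
"# Added sketch (not part of the original analysis): the 'min' column is cleaned\n# above but never used, so this shows how one could compare in which minutes of a\n# match each player tends to score.\nfor player, goals in df.groupby('player')['min']:\n    goals.plot(kind='hist', bins=18, alpha=0.5, label=player)\nplt.legend()\nplt.xlabel('Minute of the match')\nplt.show()",
"_____no_output_____"
]
],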
[
[
"<img src=https://bolavip.com/__export/1598657320847/sites/bolavip/img/2020/08/28/lionel_messi_vs_cristiano_ronaldo.jpg_1546398727.jpg width=\"600\">",
"_____no_output_____"
],
[
"## 1. Who Scored More Goals Per Year?",
"_____no_output_____"
]
],
[
[
"plt.suptitle('Goals Per Year For Fvery Player', fontsize = 15 )\nronaldo_df.groupby(['year'])['player'].count().plot(kind='bar',color='brown',label='Ronaldo',figsize=(15,5),width = 0.4)\nmessi_df.groupby(['year'])['player'].count().plot(kind='bar',color='#14D622',label='Messi',figsize=(15,5),width = 0.2)\nplt.legend()\nplt.ylabel('Goals' , fontsize = 15)\nplt.xlabel('Year for all competitions', fontsize = 15)\nplt.grid(True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### From this comparison we can understand: \n1. 2012 was a dramatic year in Messi and Ronaldo's career, it was the best year for Messi career's and the worst year for Ronaldo career's. \n2. Until 2012 Messi scored more goals than Ronaldo, and from 2013 Ronaldo scores more goals. \n3. Ronaldo is more stable than Messi across the years.\n4. Ronaldo and Messi have passed their record and are scoring less goals than before.",
"_____no_output_____"
],
[
"## 2. Who Scored More Goals Per Type?",
"_____no_output_____"
]
],
[
[
"messi_type = df[df['player'] == 'Messi'].type.value_counts()\nronaldo_type = df[df['player'] == 'Ronaldo'].type.value_counts()\ndf_types = pd.DataFrame({ 'Messi' : messi_type ,\n 'Ronaldo' : ronaldo_type})\ndf_types.T",
"_____no_output_____"
],
[
"print(\"This graph show the number of goals in percent of the total goals for each player\")\n(df_types/df_types.sum()).plot.bar(figsize=(20, 5), color = ('#14D622',\"brown\"), fontsize = 15)\nplt.grid(True)",
"This graph show the number of goals in percent of the total goals for each player\n"
]
],
[
[
"### From this comparison we can understand: \n1. Messi scored 60% of his goals in his dominant foot, compared to Ronaldo who scored a little less than 40% in his dominant foot. \n2. Ronaldo scored more goals on penalties and header. \n3. Messi and Ronaldo scored the same percents with free kicks.",
"_____no_output_____"
],
[
"## 3. Who Scored More Solo Goals?",
"_____no_output_____"
]
],
[
[
"df['Solo_messi'] = ((df['assisted'] == 'Solo') & (df['player'] == 'Messi'))\ndf['Solo_ronaldo']= ((df['assisted'] == 'Solo') & (df['player'] == 'Ronaldo'))\n\nSolo_ronaldo = (df['Solo_ronaldo'] == True).sum()\nSolo_messi = (df['Solo_messi'] == True).sum()\nnonSolo_messi = (len(df['Solo_messi']) - Solo_messi)\nnonSolo_ronaldo = (len(df['Solo_ronaldo']) - Solo_ronaldo) \n\ndf_pie = pd.DataFrame({'Ronaldo' : [Solo_ronaldo , nonSolo_ronaldo],\n 'Messi' : [Solo_messi , nonSolo_messi]},\n index = ['Solo' , 'Assisted'])\nplot =df_pie.plot.pie(subplots=True, figsize=(11, 6) , autopct='%1.1f%%')\ndf_pie",
"_____no_output_____"
]
],
[
[
"### From this comparison we can understand: \nRonaldo scored a few more solo goals than Messi.",
"_____no_output_____"
],
[
"## 4. Who Scored More Goals Per Possition?",
"_____no_output_____"
]
],
[
[
"df['pos']=df.pos.str.strip()\nmessi_pos = df[df['player'] == 'Messi'].pos.value_counts()\nronaldo_pos = df[df['player'] == 'Ronaldo'].pos.value_counts()\ndf_pos = pd.DataFrame({ 'Messi' : messi_pos , \n 'Ronaldo' : ronaldo_pos})\ndf_pos = df_pos/df_pos.sum()\n\ndf_pos.plot.bar(title = 'player favorite position', color = ('#14D622',\"brown\"), figsize = (12,5) , fontsize = 15)\ndf.groupby([\"player\",'pos'])[['pos']].count()",
"_____no_output_____"
]
],
[
[
"### From this comparison we can understand: \n1. The best position for Ronaldo is LW and for Messi is RW.\n2. Messi better than Ronaldo in CF possition.\n3. Ronaldo better than Messi on the \"weak side\".\n4. Messi scored from more possionss than Ronaldo, but in low percents.",
"_____no_output_____"
],
[
"<img src=https://i.insider.com/60801d9644f4540019207e58 width=\"700\">",
"_____no_output_____"
],
[
"## 5. Who Was The Best Scorer In \"LaLiga\" (the spanish league)?",
"_____no_output_____"
]
],
[
[
"plt.plot()\nronaldo_df.loc[ronaldo_df.comp=='LaLiga'].groupby(['year'])['player'].count().plot(kind='line',color = 'brown',label='Ronaldo',figsize=(20,8))\nmessi_df.loc[(messi_df.comp=='LaLiga')].groupby(['year'])['player'].count().plot(kind='line',color = '#14D622',label='Messi',figsize=(20,6))\nplt.title('Goals Scored In \"LaLiga\"', fontsize = 30)\nplt.ylabel('Number Of Goals', fontsize = 20)\nplt.xlabel('Year', fontsize = 20)\nplt.xlim(2009,2018)\nplt.legend()\nplt.grid(True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Average goals per player per season:",
"_____no_output_____"
]
],
[
[
"df.loc[df.comp=='LaLiga'].groupby(['player','year'])[['year']].count().groupby([\"player\"]).mean().style.set_caption(\"Average goals per player per season\")",
"_____no_output_____"
]
],
[
[
"### From this comparison we can understand: \n1. Ronaldo is more stable than Messi across the years.\n2. In the period from 2009 to 2018, Messi was the best scorer for 6 seasons (2009, 2010, 2012, 2016, 2017, 2018) compared to only 4 seasons for Ronaldo (2011, 2013, 2014, 2015).\n3. Ronaldo scores an average of 2 goals more than Messi per year.",
"_____no_output_____"
],
[
"## 6. Who Is The Best Scorer In The Money Time?",
"_____no_output_____"
]
],
[
[
"plt.subplot(1,2,1)\ndf.loc[(df['comp']=='Champions League') & (df['round'] == 'Group Stage')].groupby(['player'])['date'].count().plot(kind='bar',width = 0.2, color = ( \"#14D622\",\"brown\"))\nplt.title('Goals Scored in \"UEFA Champions League\" Group stages',fontsize = 20)\nplt.xlabel('Players',fontsize = 14)\nplt.ylabel('Number of Goals',fontsize = 14)\nplt.ylim(0,75)\n\n\nplt.subplot(1,2,2)\ndf.loc[(df['comp']=='Champions League') & (df['round'] != 'Group Stage')].groupby(['player'])['date'].count().plot(kind='bar',figsize=(20,6),width = 0.2, color = ( \"#14D622\",\"brown\"))\nplt.title('Goals Scored in UEFA Champions League Knockout stages',fontsize=20)\nplt.xlabel('Players',fontsize = 14)\nplt.ylabel('Number of Goals',fontsize = 14)\nplt.ylim(0,70)\nplt.suptitle('Who scored more in UEFA Champions League Stages?',fontsize=30,style ='italic',y=1.05)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"#### Goals Scored in UEFA Champions League Knockout stages:",
"_____no_output_____"
]
],
[
[
"df.loc[(df['comp']=='Champions League') & (df['round'] == 'Group Stage')].groupby(['player','comp'])[['date']].count()",
"_____no_output_____"
]
],
[
[
"#### Goals Scored in \"UEFA Champions League\" Group stages:",
"_____no_output_____"
]
],
[
[
"df.loc[(df['comp']=='Champions League') & (df['round'] != 'Group Stage')].groupby(['player','comp'])[['date']].count()",
"_____no_output_____"
]
],
[
[
"### From this comparison we can understand: \nRonaldo scored more in the knockout stage and Messi scored more in the group stage.\n",
"_____no_output_____"
],
[
"<img src=https://ss.thgim.com/photos/article30894670.ece/alternates/FREE_690/Getty-Images-Messi-Ronaldo width=\"700\">",
"_____no_output_____"
],
[
"# Final Conclusions:",
"_____no_output_____"
],
[
"1. Based on Graph 6, we can concluded that the pressure exerted on the players in the final stages of the league does not affect Ronaldo, and even makes him play better. \nOn the other hand, the pressure does affect Messi and he scored more goals in the groupe stage. ",
"_____no_output_____"
],
[
"2. Based on graphs 1 and 5, we can concluded that Ronaldo is a more stable player than Messi. \nIn Messi's career we see ups and downs in his abilities across the years, and in Ronaldo's career we see stability ",
"_____no_output_____"
],
[
"3. Based on graphs 2,4, we can concluded that Ronaldo is a more versatile player than Messi. \nWe can see that Ronaldo scores in higher percentages from more positions on the field and more different types.",
"_____no_output_____"
],
[
"4. Based on Graph 3, we can concluded that there is almost no difference between Ronaldo and Messi in their abilities to score on their own without the help of their teammates (solo).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e729609956dd7a70cd58ade779466d7107e970e9 | 7,829 | ipynb | Jupyter Notebook | Data Augmentation.ipynb | arjunparmar/COVID19-upgraded | 679be2554c5620cc38a515eed8f3e608fff53413 | [
"MIT"
] | 3 | 2020-06-11T15:00:18.000Z | 2020-07-20T04:19:09.000Z | Data Augmentation.ipynb | arjunparmar/COVID19-upgraded | 679be2554c5620cc38a515eed8f3e608fff53413 | [
"MIT"
] | 14 | 2020-06-23T20:49:16.000Z | 2022-03-12T00:34:55.000Z | Data Augmentation.ipynb | arjunparmar/COVID19-upgraded | 679be2554c5620cc38a515eed8f3e608fff53413 | [
"MIT"
] | 2 | 2021-05-19T23:52:56.000Z | 2021-12-11T11:25:35.000Z | 17.16886 | 84 | 0.351003 | [
[
[
"import cv2\nimport numpy as np",
"_____no_output_____"
],
[
"def brightcontrast(image,value):\n brightimg=np.array(image[:,:]*value)\n brightimg[brightimg > 255] = 255\n brightimg = brightimg.astype('uint8') \n return brightimg",
"_____no_output_____"
],
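[
"# Added sketch (not in the original notebook): a horizontal flip could serve as a\n# further augmentation alongside the brightness/contrast variants above.\ndef hflip(image):\n    # flip around the vertical axis\n    return cv2.flip(image, 1)",
"_____no_output_____"
],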
[
"image=cv2.imread(\"/home/paa/COVID-19/Dataset/Normal/1.jpeg\",0)\nimage=cv2.resize(image,(200,200))\nbimg=brightcontrast(image,1.1)\ncv2.imshow(\"bimg\",bimg)\nbbimg=brightcontrast(image,1.2)\ncv2.imshow(\"bbimg\",bbimg)\ncimg=brightcontrast(image,0.9)\ncv2.imshow(\"cimg\",cimg)\nccimg=brightcontrast(image,0.8)\ncv2.imshow(\"ccimg\",ccimg)\ncv2.waitKey(0)\ncv2.destroyAllWindows()",
"_____no_output_____"
],
[
"j=1\nfor i in range(0,332):\n image=cv2.imread(\"/home/paa/IEEE COVID19/COVID/\"+str(i)+\".jpeg\",0)\n image=cv2.resize(image,(500,500))\n cv2.imwrite(\"/home/paa/IEEE COVID19/COVID19/\"+str(j)+\".jpeg\",image)\n j+=1\n bimg=brightcontrast(image,1.1)\n cv2.imwrite(\"/home/paa/IEEE COVID19/COVID19/\"+str(j)+\".jpeg\",bimg)\n j+=1\n bbimg=brightcontrast(image,1.2)\n cv2.imwrite(\"/home/paa/IEEE COVID19/COVID19/\"+str(j)+\".jpeg\",bbimg)\n j+=1\n cimg=brightcontrast(image,0.9)\n cv2.imwrite(\"/home/paa/IEEE COVID19/COVID19/\"+str(j)+\".jpeg\",cimg)\n j+=1\n ccimg=brightcontrast(image,0.8)\n cv2.imwrite(\"/home/paa/IEEE COVID19/COVID19/\"+str(j)+\".jpeg\",ccimg)\n j+=1\n print(i)",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\n101\n102\n103\n104\n105\n106\n107\n108\n109\n110\n111\n112\n113\n114\n115\n116\n117\n118\n119\n120\n121\n122\n123\n124\n125\n126\n127\n128\n129\n130\n131\n132\n133\n134\n135\n136\n137\n138\n139\n140\n141\n142\n143\n144\n145\n146\n147\n148\n149\n150\n151\n152\n153\n154\n155\n156\n157\n158\n159\n160\n161\n162\n163\n164\n165\n166\n167\n168\n169\n170\n171\n172\n173\n174\n175\n176\n177\n178\n179\n180\n181\n182\n183\n184\n185\n186\n187\n188\n189\n190\n191\n192\n193\n194\n195\n196\n197\n198\n199\n200\n201\n202\n203\n204\n205\n206\n207\n208\n209\n210\n211\n212\n213\n214\n215\n216\n217\n218\n219\n220\n221\n222\n223\n224\n225\n226\n227\n228\n229\n230\n231\n232\n233\n234\n235\n236\n237\n238\n239\n240\n241\n242\n243\n244\n245\n246\n247\n248\n249\n250\n251\n252\n253\n254\n255\n256\n257\n258\n259\n260\n261\n262\n263\n264\n265\n266\n267\n268\n269\n270\n271\n272\n273\n274\n275\n276\n277\n278\n279\n280\n281\n282\n283\n284\n285\n286\n287\n288\n289\n290\n291\n292\n293\n294\n295\n296\n297\n298\n299\n300\n301\n302\n303\n304\n305\n306\n307\n308\n309\n310\n311\n312\n313\n314\n315\n316\n317\n318\n319\n320\n321\n322\n323\n324\n325\n326\n327\n328\n329\n330\n331\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e729691905500a27fcec7d8d34c079b020dee1e6 | 54,009 | ipynb | Jupyter Notebook | Course 1/Notebooks/01_Basic_Introduction_to_Python_for_use_with_BIM.ipynb | nasirkuvvetli/intro-python-bim | 0ef464c9d79362e8309ea4a1c40563322a4fc602 | [
"MIT"
] | 65 | 2019-05-08T20:21:26.000Z | 2022-03-30T16:32:35.000Z | Course 1/Notebooks/01_Basic_Introduction_to_Python_for_use_with_BIM.ipynb | nasirkuvvetli/intro-python-bim | 0ef464c9d79362e8309ea4a1c40563322a4fc602 | [
"MIT"
] | null | null | null | Course 1/Notebooks/01_Basic_Introduction_to_Python_for_use_with_BIM.ipynb | nasirkuvvetli/intro-python-bim | 0ef464c9d79362e8309ea4a1c40563322a4fc602 | [
"MIT"
] | 12 | 2020-01-24T08:06:13.000Z | 2022-01-23T19:47:30.000Z | 30.686932 | 503 | 0.58381 | [
[
[
"# Introduction to Python 3 in jupyternotebook\n\nThis is an introduction to the Python programming language for participants on the Programming with openBIM course as arranged by [BIMFag](https://bimfag.no/). It is based on [learnpythons basic course](https://www.learnpython.org/). \n\nIt is not a complete Python course, so for deeper walkthrough of python and the different aspects of the language see more on [learnpython](https://www.learnpython.org/) or one can get free courses on eg. [Udacity](https://www.udacity.com/course/programming-foundations-with-python--ud036), [EdX](https://learn.edx.org/topic-python/) or [CodeAcademy](https://www.codecademy.com/learn/learn-python-3). \n\nThis tutorial will walk through these chapters: \n\n* [2 Hello jupyter, world and you](#Hello-jupyter,-world-and-you)\n* [3 Variables and types](#Variables-and-types)\n* [4 Basic String formatting](#Basic-String-formatting)\n* [5 Collection types (Arrays)](#Collection-types-(Arrays))\n* [6 Conditions and If statements](#Conditions-and-If-statements) \n* [7 While Loops](#While-Loops)\n* [8 For Loops](#For-Loops)\n* [9 Functions](#Functions)\n* [10 Classes and Objects](#Classes-and-Objects)\n* [11 Modules and Packages](#Modules-and-Packages)\n\n\nin addition since this is a openBIM programming course, we will use BIM models as part of the introduction. \n\n**At the end of this notebook you will learn how to visualize objects in a BIM model. Like the windows of Grethes hus**",
"_____no_output_____"
],
[
"# Hello jupyter, world and you\n\nThe [jupyter notebook](https://jupyter.org/) lets us define and edit both text cells and markdown (text) cells. This is used here to introduce the concepts along the way, and also provide code cells where commands could be tested and executed. \n\nWe do it in cells, so eg. you could always create a new cell, edit or change. \n\n## Create a new cell, and write what you think about jupyter.\nUse the \"+\" button above to create a new cell under this one. ",
"_____no_output_____"
]
],
[
[
"### Change this cell from a Code cell to a Markdown cell, by using the roll down menu above. ",
"_____no_output_____"
]
],
[
[
"## Let the program tell something to a user\n\nThe print statement is a powerfull directive in python and it enables a program to print out a result, eg. \"Hello, world\". Which prints a data type called a string to the screen. ",
"_____no_output_____"
]
],
[
[
"# Here we print the string \"Hello, world!\" to the screen. \nprint(\"Hello, world!\")",
"Hello, world!\n"
]
],
[
[
"We will use the print statement several times in this course together with \"string formatting\". A typpical way of formatting a string is by appending another string to it, like \"Hello, \"+\"world\". ",
"_____no_output_____"
]
],
[
[
"# Here we append the two strings \"Hello, \" and \"world!\" to the screen.\nprint(\"Hello,\"+\"world!\")",
"Hello,world!\n"
]
],
[
[
"As seen above now we didn't get a space between the two strings. A space could either be part of e.g. the first string, or be a separate string in itself like \" \". Lets append that too. ",
"_____no_output_____"
]
],
[
[
"# Here we append the three strings \"Hello,\", \" \" and \"world!\"\nprint(\"Hello,\"+\" \"+\"world!\")",
"Hello, world!\n"
]
],
[
[
"## Let the program ask a user for input and show the result\nSince we want to say hello to you too, you could use the \"input()\" function to get user intput and store that in a variable, and print to screen.",
"_____no_output_____"
]
],
[
[
"# Here we get the name from the user and store it in a variable called name, append that to another vairable as we print. \nx = input(\"Write your name here: \")\ny = \"Hello, \"\nprint(y+x+\"!\")",
"Write your name here: Ingvild\nHello, Ingvild!\n"
]
],
[
[
"# Variables and types \n\nPython is completely object oriented, and not \"statically typed\". In other programming languages, that is statically typed, you need to declare the variable and what type of data it holds, before using it. In Python you do not need to declare variables before using them, or declare their type. \n\nThis is seen above where we just store the input into x and y that stores the string \"Hello, \" without first declaring them to hold data of string type first. Every variable in Python is an object, and we will go through some of the types here. ",
"_____no_output_____"
],
[
"## Numbers \nFor this course two different number types are relevant, integer and floating point numbers. ",
"_____no_output_____"
]
],
[
[
"# This is how you define an integer\nmyInt = 9\nprint(myInt)",
"9\n"
],
[
"# To define a floating point number you could use one of the following approaches\nmyFloat1 = 9.0\nprint(myFloat1)\nmyFloat2 = float(9)\nprint(myFloat2)\nmyFloat3 = 9.\nprint(myFloat3)",
"9.0\n9.0\n9.0\n"
]
],
[
[
"One could aslo \"cast\" an integer into a float, like we do in the myFloat2=float(9). \n\n* An integer x can be cast into float by using float(x)\n* A float y can be cast into an integer by using int(y)\n* An integer x and a float y can both be cast into a string by respectively str(x) and str(y). ",
"_____no_output_____"
]
],
[
[
"# Here we first define an integer, and cast it into a float\nmyInt = 9 \nmyFloat = float(myInt)\nprint(\"My Integer: %s, and my Float: %s\" % (myInt,myFloat))",
"My Integer: 9, and my Float: 9.0\n"
],
[
"# Here we cast myInt and myFloat (from above) into strings\nmyIntegerString = str(myInt)\nprint(myIntegerString)\nmyFloatString= str(myFloat)\nprint(myFloatString)",
"9\n9.0\n"
]
],
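[
[
"# Added aside (not part of the original course text): casting a float to an int\n# with int() truncates towards zero rather than rounding.\nprint(int(10.7))   # 10\nprint(int(-10.7))  # -10\nprint(round(10.7)) # 11",
"_____no_output_____"
]
],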
[
[
"## Strings \nStrings can be define by using single (') or double quoates(\"). The difference between the two is that using double quotes makes it easy to include apostrophes (whereas these would terminate the string if using single quotes)",
"_____no_output_____"
]
],
[
[
"myString = \"Don't worry about apostrophes\"\nprint(myString)",
"Don't worry about apostrophes\n"
],
[
"escapedString = 'Don\\'t worry about apostrophes'\nprint(escapedString)",
"Don't worry about apostrophes\n"
],
[
"#Strings are just an array of characters\nmyString = \"abcdefgh\"\nprint(myString)\nthird = myString[2]\nprint(third)\nlast = myString[-1]\nprint(last)",
"abcdefgh\nc\nh\n"
]
],
[
[
"### MiniExcercise:\nFetch the character \"d\" from myString and print it to screen",
"_____no_output_____"
]
],
[
[
"# Write your code here\n\n\n",
"_____no_output_____"
]
],
[
[
"## Exercise 1 \nThe target of this exercise is to create a string, an integer, and a floating point number. The string should be named mystring and should contain the word \"hello\". The floating point number should be named myfloat and should contain the number 10.0, and the integer should be named myint and should contain the number 20.",
"_____no_output_____"
]
],
[
[
"# change this code\nmystring = \"change this\"\nmyfloat = \"change this\"\nmyint = \"change this\"\n\n# testing code\nif mystring == \"hello\":\n print(\"String: %s\" % mystring)\nif isinstance(myfloat, float) and myfloat == 10.0:\n print(\"Float: %f\" % myfloat)\nif isinstance(myint, int) and myint == 20:\n print(\"Integer: %d\" % myint)",
"_____no_output_____"
]
],
[
[
"## Exercise 2\nRun the code below first without changing anything, supplying a number to it. \n\nThen, use a cast to correct it so that it outputs correct math on numbers. ",
"_____no_output_____"
]
],
[
[
"# Here we take and input of a number, and then multiply it 5 times and then prints the result.\ninputVariable = input(\"Provide a number: \")\nx = inputVariable * 5\nprint(x)",
"Provide a number: 6\n66666\n"
]
],
[
[
"## Naming conventions:\nDifferent programming languages have different \"best practice\" naming for their naming of variables, funtions and classes. In python, the naming conventions are described at: [PEP8](https://www.python.org/dev/peps/pep-0008/#naming-conventions)",
"_____no_output_____"
]
],
[
[
"# Examples of naming conventions\nCONSTANTS_ARE_ALL_UPPERCASE = \"my constant string\"\nvariables_should_be_all_lowercase_separated_with_underscores = 1\ndef functions_should_do_the_same():\n var1 = 4\n retrun var1",
"_____no_output_____"
]
],
[
[
"# Basic String formatting\n\nAs seen in the testing code: \n\n```Python\n# testing code\nif mystring == \"hello\":\n print(\"String: %s\" % mystring)\nif isinstance(myfloat, float) and myfloat == 10.0:\n print(\"Float: %f\" % myfloat)\nif isinstance(myint, int) and myint == 20:\n print(\"Integer: %d\" % myint)\n```\nWe use some string formatting in the print statements. We also use If statements and conditions which we are covering below. \n\nHere we have used a special string formatting operator, **%**, that enables us to format the string according to the specified format. %s specifies that we are inputting a variable of type string into the sentence. %f specifies a placeholder for a floating point number and %d for a decimal integer. More on string formatting at: [w3schools about formatting](https://www.w3schools.com/python/ref_string_format.asp)\n\nIn the above example we are adding one value to the string, but we could also add several in one string using the syntax below:\n\n```Python\n\"string %s, number %d and floatingpoint %f\"%(\"hello\",10,10.0)\n```\nFeel free to try it out below. ",
"_____no_output_____"
]
],
[
[
"# Execute this to check multiple variables inserted into a string\nprint(\"string %s, number %d and floatingpoint %f without formatting, or formatted %0.5f\" % (\"hello\",10,10.0, 10.0))",
"string hello, number 10 and floatingpoint 10.000000 without formatting, or formatted 10.00000\n"
]
],
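[
[
"# Added aside (not part of the original course text): the same string can also be\n# built with str.format() or an f-string, which the w3schools link above describes.\nprint(\"string {}, number {} and floatingpoint {:.2f}\".format(\"hello\", 10, 10.0))\nname = \"world\"\nprint(f\"Hello, {name}!\")",
"_____no_output_____"
]
],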
[
[
"### MiniExercise 2B:\nTry doing float printing with only two decimals",
"_____no_output_____"
]
],
[
[
"# Execute this to check multiple variables inserted into a string\nprint(\"Print the float %f\" % (10.0))",
"_____no_output_____"
]
],
[
[
"## String functions\nIn addition to this there are several other string formatting options that you could do. Strings are objects that have several predefined functions. Feel free to add some code cells below here with some of the example code snippets from [w3schools on strings](https://www.w3schools.com/python/python_strings.asp)",
"_____no_output_____"
]
],
[
[
"# Example 1 -- eg. convert a string to lower letters\na = \"HElLo!\"\nprint(a.lower())\n# covert to captial letters\nprint(a.upper())",
"hello!\nHELLO!\n"
],
[
"# Example 2 -- eg. split a string into two. \na = \"Hello, world!\"\n#split at a specific character\nvar = a.split(\",\") \nprint(var)\n#split whitespace at beginning or end \nprint(var[1].split())",
"['Hello', ' world!']\n['world!']\n"
]
],
[
[
"## Exercise 3\n\nBuild a script in the cell below that: \n\n1) Takes in a string that you input\n2) Make sure it is a string\n3) Store it in a variable \n4) Print out the variable\n5) Then grab only the first and last characters in the name\n6) Store it in a new variable\n7) Print out the new variable\n\nsee [w3schools on strings](https://www.w3schools.com/python/python_strings.asp) for help. ",
"_____no_output_____"
]
],
[
[
"### Exercise 3 answer here:\n\n\n\n",
"_____no_output_____"
]
],
[
[
"# Collection types (Arrays)\nThere are four collection data types in the Python programming language:\n\n* List is a collection which is ordered and changeable. Allows duplicate members.\n* Tuple is a collection which is ordered and unchangeable. Allows duplicate members.\n* Set is a collection which is unordered and unindexed. No duplicate members.\n* Dictionary is a collection which is unordered, changeable and indexed. No duplicate members.\n\nWhen choosing a collection type, it is useful to understand the properties of that type. Choosing the right type for a particular data set could mean retention of meaning, and, it could mean an increase in efficiency or security ([ref w3schools](https://www.w3schools.com/python/python_lists.asp))\n\nBelow we will walk through Lists and Tuples. Find more info on sets and dictionaries on eg. w3scools above.",
"_____no_output_____"
],
[
"## List\nA list is a collection which is ordered and changeable. They can contain any type of variable, and they can contain as many variables as you wish. In Python lists are written with square brackets.",
"_____no_output_____"
]
],
[
[
"myList = [\"apple\", 1, \"cherry\", 10]\nprint(myList)",
"['apple', 1, 'cherry', 10]\n"
]
],
[
[
"You access the list items by referring to the index number. Remember that the index starts at 0. ",
"_____no_output_____"
]
],
[
[
"myList = [1,2,3,4,5]\nprint(myList[1])",
"2\n"
]
],
[
[
"### Lists are changable\nLists are changable, so you could eg. add or remove from it. ",
"_____no_output_____"
]
],
[
[
"# Adding to a list my append()\nmyList = [1,2,3]\nmyList.append(3) # adds the value 3 to the list\nprint(myList)\n\n# Removing from a list by remove()\nmyList.remove(2) # removes the value 2 from the list\nprint(myList)\n\n# Removing from a list by pop()\nmyList.pop() # removes the last item in the list\nprint(myList)\nmyList.pop(0) # removes the first item in the list\nprint(myList)",
"[1, 2, 3, 3]\n[1, 3, 3]\n[1, 3]\n[3]\n"
]
],
[
[
"## Tuple\nA tuple is a collection which is ordered and **unchangeable**. In Python tuples are written with parenthesis. When working with IFC and the ifcopenshell, we often work with tuples.",
"_____no_output_____"
]
],
[
[
"myTuple = (\"apple\", 1, \"cherry\", 10)\nprint(myTuple)",
"('apple', 1, 'cherry', 10)\n"
]
],
[
[
"You access the tuple items, similar as for lists, by referring to the index number. Remember that the index starts at 0.",
"_____no_output_____"
]
],
[
[
"myTuple = (1,2,3,4,5)\nprint(myTuple[1])",
"2\n"
]
],
[
[
"### Tuples are unchangable\nTuples are unchangable, but you could cast it into a list and work with it as a list. ",
"_____no_output_____"
]
],
[
[
"myTuple = (1,2,3,4,5)\nmyList = list(myTuple) # Cast tuple into a list\nmyList.pop(2)\nmyTuple = tuple(myList) # Cast list into a tuple\nprint(myTuple)",
"(1, 2, 4, 5)\n"
]
],
[
[
"## Check the number of elements in list or tuple\nIt is often needed to check how many elements that are in a list or a tuple.",
"_____no_output_____"
]
],
[
[
"# Check how many elements there are in myList (defined above)\nprint(len(myList))\n\n# Check how many elements there are in myTuple (defined above)\nprint(len(myTuple))",
"3\n3\n"
]
],
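[
[
"# Added aside (not part of the original course text): a minimal look at the two\n# collection types not covered in this chapter - sets and dictionaries.\nmySet = {\"apple\", \"cherry\", \"apple\"}  # duplicates are dropped\nprint(mySet)\nmyDict = {\"name\": \"Grethes hus\", \"storeys\": 2}  # the 'storeys' value is made up\nprint(myDict[\"name\"])",
"_____no_output_____"
]
],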
[
[
"## Exercise 4 - how many IfcWall elements are in Grethes Hus model? \nIn the excercise below we want you to use what you have learned above to find how many elements of type IfcWall are in the Grethes Hus Modell. \n\nWe have provided some code to get the file and to query out all elements of type IfcWall and stored it in a variable called *walls*. \n\nPrint out to the screen the number of walls there are in the variable **walls**. ",
"_____no_output_____"
]
],
[
[
"# We import ifcopenshell and open the Grethes Hus Model and store it in a variable called file. \nimport ifcopenshell \nfile = ifcopenshell.open(\"models/Grethes-hus-bok-2.ifc\")\n\n#We query the file for its number of IfcWall type elements\nwalls = file.by_type(\"IfcWall\")\n\n\"\"\"ToDo: Change the zero below to list out the number of walls in the file.\"\"\"\nnumberOfWalls = 0 # store the number of IfcWall types in this variable. \nprint(numberOfWalls)",
"0\n"
]
],
[
[
"# Conditions and If statements \n\n> Python supports the usual logical conditions from mathematics:\n\n> * Equals: a == b\n> * Not Equals: a != b\n> * Less than: a < b\n> * Less than or equal to: a <= b\n> * Greater than: a > b\n> * Greater than or equal to: a >= b\n>\n> These conditions can be used in several ways, most commonly in \"if statements\" and loops. (ref. [w3schools](https://www.w3schools.com/python/python_conditions.asp))\n\nWe'll walk through use of if statements and the most common loops here and use it to explore some elements from the Grethes Hus Model. \n\nThe syntax for a If statement that checks if a variable **a** Equals a variable **b** is as follows:\n\n```Python\n a = 0\n b = 2\n if a == b:\n print(\"they are eaqual\")\n```\nNotice the **indentation**. Python relies on indentation, using whitespace, to define scope in the code. Other programming languages often use curly-brackets for this purpose. Here the print is within the if statement. \n\nLets use an if statement to check if there are more than zero walls in the Grethes Hus Model. ",
"_____no_output_____"
]
],
[
[
"## Here we refer back to the variable \"numberOfWalls\" as defined above, and check if its greated than 0. \nif numberOfWalls > 0:\n print(\"You probably did something right above.\")",
"_____no_output_____"
]
],
[
[
"## elif and else\nWe could also add to the If statement, by using an **Elif** statement. This will allow you to add another condition to check. At the end, we could also do an **Else** statement. This will be executed if no of the previous conditions where met. The syntax for this is: \n\n```Python\n if <condition1>:\n # do something based on condition1\n elif <condition2>:\n # Do something based on condition2\n else:\n # Do something if non of the previouse contitions where met. \n```\nYou are free to at as many elif statements as you wish, but you could only have one else statement. ",
"_____no_output_____"
]
],
[
[
"if numberOfWalls > 0:\n print(\"You probably did something right above.\")\nelif numberOfWalls == 0:\n print(\"You did probably do something wrong above\")\nelse:\n print(\"How could this happen...?\")",
"You did probably do something wrong above\n"
]
],
[
[
"## Check if an element exist in a list or tuple\nOften times one would like to check if a particular value is in a list or a tuple. This is easy with python. ",
"_____no_output_____"
]
],
[
[
"# Check if a value is in a list \nmyList = [\"BIMFag\", \"banana\", \"BIM\"]\nif \"BIMFag\" in myList:\n print(\"I found BIMFag in myList!\")\nmyTuple = [\"BIMFag\", \"banana\", \"BIM\"]\nif \"BIMFag\" in myTuple:\n print(\"I found BIMFag in myTuple!\")",
"I found BIMFag in myList!\nI found BIMFag in myTuple!\n"
]
],
[
[
"# While Loops\nConditions and if statements are usefull also in while loops. While loops are executing as long as a condition is met. The syntax for while loops are: \n\n```Python\n x = 1\n while x<10: # Checks if x is less than 10\n # do something eg.\n print(x) # since x is bigger than zero this will print for as long as x is less than 10\n x = x + 1 ## this statement is adding 1 to x for each iteration of the loop. \n```\nKeep in mind that if the condition on which the while loop is checking on is never met, it will execute indefinately. so whitout the ```Python x= x+1``` statement above, this would have just printed out 1 indefinately. \n\nNotice the **indentation**. As mentioned above Python relies on indentation, using whitespace, to define what is within the while loop. Here the ```print(x)``` and ```x=x+1```statement is indented, so to show that it is inside the while loop. ",
"_____no_output_____"
]
],
[
[
"# Lets see it in action. Print out x as long as its less than the number of walls\nx = 1\nwhile x < numberOfWalls:\n print(x)\n x = x +1\nelse:\n print(\"x is: %s and numberOfWalls is: %s\"%(x,numberOfWalls))",
"x is: 1 and numberOfWalls is: 0\n"
]
],
[
[
"## Loops, break and else\nNotice that the loop exits when the condition is no longer met. Notice also that loops could also have an else statement, similar to if statements. If the loop condition are no longer met, then the else part is executed. \n\nIf we want to exit the while loop early one could use the special **break** statement. The while loop could have a condition that always will be true, like ```True```but then have a check further in, and use ```break```to exit the loop based on a condition in the condition of the respective check. ",
"_____no_output_____"
]
],
[
[
"x = 1\n## Loop while True --> True is always True... \nwhile True: \n print(x)\n # check if x is bigger or eaqual to 10\n if x >= 10:\n break # if condition met, break out of the while loop. \n x = x+1 # if the condition of the if statement was false, this get executed in the while loop. \n",
"1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n"
]
],
[
[
"# For Loops\nA *for* loop is used for iterating over a sequence (that is either a list, a tuple, a dictionary, a set, or a string).\n\nThe syntax for for loops in python is: \n\n```Python\nfor element in listOfElements:\n # Do something with each element eg. print it out.\n print(element)\n```\n\nNotice the **indentation** here as well. It define what is within the for loop. Here the print statement is indented, so that will be done every iteration of the elements in the list. ",
"_____no_output_____"
]
],
[
[
"myTuple = (1,2,3,4,5,6)\n\nfor element in myTuple:\n print(element)",
"1\n2\n3\n4\n5\n6\n"
],
[
"for i in range(6):\n print(myTuple[i])",
"1\n2\n3\n4\n5\n6\n"
]
],
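[
[
"A more idiomatic alternative to looping over ```range(len(...))``` is the built-in enumerate(), which yields the index and the element together. A small sketch:\n\n```Python\nmyTuple = (1,2,3,4,5,6)\nfor i, element in enumerate(myTuple):\n    print(i, element) # prints the index and the corresponding value\n```",
"_____no_output_____"
]
],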
[
[
"## Iterating over a number of IfcWindow objects and access attributes\nBelow we'll use a for loop to loop through all IfcWindow elements and print out some of it's direct attributes. See the IFC data dictinary for IFC 2.3 [here](https://standards.buildingsmart.org/IFC/RELEASE/IFC2x3/FINAL/HTML/). The main access page for the different IFC schema definitions can be found [here](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)\n\nWe'll be getting its **Name** attribute. Since the IfcWindow elements are objects in Python we could access these using the \".\" operator. \n\nFeel free to experiment with other attributes. You could eg. try with **OverallHeight and OverallWidth**. ",
"_____no_output_____"
]
],
[
[
"# We import ifcopenshell and open the Grethes Hus Model and store it in a variable called file. \nimport ifcopenshell \nfile = ifcopenshell.open(\"models/Grethes-hus-bok-2.ifc\")\n\n#We query the file for its number of IfcWindow type elements\nwindows = file.by_type(\"IfcWindow\")\n\n\"\"\"By uncommenting (removing the '#') the line below you see the contents of the windows variable\"\"\"\n#print(windows)\n\n# Here is the for loop and its syntax. Notice the indentation. \nfor window in windows: \n print(\"The window elements name is: \"+window.Name)",
"The window elements name is: M_Fixed:_1400x2200mm:348960\nThe window elements name is: M_Fixed:_1400x2200mm:350239\nThe window elements name is: M_Fixed:_1400x2200mm:350479\nThe window elements name is: M_Fixed:_1400x2200mm:350484\nThe window elements name is: M_Fixed:_1400x2200mm:350489\nThe window elements name is: M_Fixed:_1400x2200mm:350494\nThe window elements name is: M_Fixed:_1400x2200mm:350499\nThe window elements name is: M_Fixed:_1400x2200mm:350504\nThe window elements name is: M_Fixed:_1400x2200mm:350509\nThe window elements name is: M_Fixed:_1400x2200mm:350514\nThe window elements name is: M_Fixed:_1400x2200mm:350519\nThe window elements name is: M_Fixed:_1400x2200mm:350524\nThe window elements name is: M_Fixed:_800x750mm:351890\nThe window elements name is: M_Fixed:_1100x700mm:353017\nThe window elements name is: M_Fixed:_800x700mm:354254\nThe window elements name is: M_Fixed:_800x750mm:355941\nThe window elements name is: M_Fixed:_1100x700mm:355942\nThe window elements name is: M_Fixed:_800x700mm:355943\nThe window elements name is: M_Fixed:_800x1000mm:356683\nThe window elements name is: M_Fixed:_800x1000mm:357125\nThe window elements name is: M_Fixed:_1400x1000mm:357137\nThe window elements name is: M_Fixed:_800x2000mm:357354\nThe window elements name is: M_Window-Casement-Double:_1400x1000mm:385488\nThe window elements name is: M_Window-Casement-Double:_1400x1000mm:385889\nThe window elements name is: M_Window-Casement-Double:_1400x1000mm:385919\nThe window elements name is: M_Window-Casement-Double:_1400x1000mm:385922\n"
]
],
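[
[
"As suggested above, you can experiment with other direct attributes. A sketch using **OverallHeight** and **OverallWidth** (both are optional in the IFC schema, so they may be None for some windows):\n\n```Python\nfor window in windows:\n    print(window.Name, window.OverallHeight, window.OverallWidth)\n```",
"_____no_output_____"
]
],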
[
[
"# Functions \n\nFunctions are a useful way to divide your code into blocks of code, that only runs when they are called. It is also a good way to share a small snippet of code with others. \n\nA function is defined by the special ```def```statement. Like the example below.\n\n## Defining a function",
"_____no_output_____"
]
],
[
[
"def myDefinedFunction():\n print(\"This is a print statement within my function\")",
"_____no_output_____"
]
],
[
[
"Notice that the function uses the same indendation style as for If statements and loops as walked through above. A function could contain all the concepts we have walked through above as well. \n\nNotice however that noting happended above. That is why we havent called the function. It is however stored, so that we can call it below. \n## Calling / Using a function",
"_____no_output_____"
]
],
[
[
"myDefinedFunction()",
"This is a print statement within my function\n"
]
],
[
[
"## Input variables to functions\n\nWe could also build functions that takes in input variables that could be used in the function. Lets define a function that takes in a string variable and prints it out. ",
"_____no_output_____"
]
],
[
[
"# A function definition that takes in a parameter called \"string\"\ndef myDefinedFunction2(string):\n print(\"My variable is %s\"%string)\n\n# A call to myDefinedFunction2 with the string variable \"Hello again!\"\nmyDefinedFunction2(\"Hello again!\")",
"My variable is Hello again!\n"
],
[
"# A new call to myDefinedFunction2 with the string \"and again and again\"\nmyDefinedFunction2(\"and again and again\")",
"My variable is and again and again\n"
]
],
[
[
"## Input variables and assumed type\nWhat would happen if we call myDefinedFunction2 with a number, instead of a string? Try it below. ",
"_____no_output_____"
]
],
[
[
"string = 10\n\"\"\"Try calling myDefinedFunction2 with the variable define above\"\"\"\n\n",
"_____no_output_____"
]
],
[
[
"## Exercise 5 - ensure that a function works as intended\nYou can pass as many variables to a function as you'd like, just separate them with a comma like we do with var1 and var 2 below. They could be whatever object you'd like. \n\nBelow we want to create a function that takes in two variables, var1 and var2. If they are both strings we want to just concatenate them together and print them out. If they are not strings we only want to add them to a list and print \n\n**A couple of hints**\n\n* you could check if a variable ```a``` is a string by using ```isinstance(a,str)```\n* If you want two conditions to both be True in order to pass, you could use the ```and```operator\n\n**The correct output:**\n```\nHello, World!\n['hello', 4]\n```",
"_____no_output_____"
]
],
[
[
"def myFunc3(var1,var2):\n \"\"\"Fill in code to comlete the challenge\"\"\"\n\n\"\"\"Don't change the code below\"\"\" \nmyFunc3(\"Hello, \",\"World!\")\nmyFunc3(\"hello\",4)",
"Hello, World!\n['hello', 4]\n"
]
],
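[
[
"One possible way to complete myFunc3 (a sketch matching the expected output above):\n\n```Python\ndef myFunc3(var1,var2):\n    if isinstance(var1,str) and isinstance(var2,str):\n        print(var1 + var2) # both are strings: concatenate and print\n    else:\n        print([var1, var2]) # otherwise: put them in a list and print the list\n```",
"_____no_output_____"
]
],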
[
[
"## Lists and tuples as input variables\nWe could also add lists and tuples as input variables to functions. ",
"_____no_output_____"
]
],
[
[
"def myFunc4(myList):\n for elem in myList:\n print(elem)",
"_____no_output_____"
],
[
"# Define a list \naList = [1,2,3]\n# Send it as input variable to myFunc4\nmyFunc4(aList)\n# Define a tuple \naTuple = (4,5,6)\n# Send it as input variable to myFunc4\nmyFunc4(aTuple)",
"1\n2\n3\n4\n5\n6\n"
]
],
[
[
"### What will happend if you pass a string instead of a list?\nTry below to call the function above passing in a string value like eg. ```myFunc4(\"your name\")```\n\nWhat do you think will happen? ",
"_____no_output_____"
]
],
[
[
"### Do the test here ###\n\n",
"_____no_output_____"
]
],
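[
[
"If you tried it: nothing crashes. A string is itself a sequence in Python, so the for loop simply iterates over its characters and prints them one per line:\n\n```Python\nmyFunc4(\"Ola\") # prints O, l and a on separate lines\n```",
"_____no_output_____"
]
],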
[
[
"## Have functions return values\n\nA good way to use functions is to have them perform som logic on input variables and have them return the result. For this you use the ```return``` statement. ",
"_____no_output_____"
]
],
[
[
"# defines a function that takes in a list of items as parameter, \n# loops through it and adds all integers together, and then returns the resulting value\ndef myFunc5(myList):\n tmp = 0 # Defined outside the for loop to be able to return it outside the scope of the loop\n for elem in myList:\n if isinstance(elem,int):\n tmp = tmp +elem\n return tmp\naList = [\"string\",10,10,30,4, \"stuff\"]\nprint(myFunc5(aList))",
"54\n"
]
],
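[
[
"As a side note on the design: the same logic can be written more compactly with the built-in sum() and a generator expression. A sketch equivalent to myFunc5:\n\n```Python\ndef myFunc5b(myList):\n    return sum(elem for elem in myList if isinstance(elem,int))\n\nprint(myFunc5b([\"string\",10,10,30,4, \"stuff\"])) # also prints 54\n```",
"_____no_output_____"
]
],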
[
[
"## Exercise 6 - Names and Descriptions of Window objects. \n\nMany times one would get an ifc model where one want to change or restructure information in the file. That is tediouse work on big models, so one may want to script it. In addition it would be a good code block to have, regardless of object type. \n\nIn IFC the name and description is something all objects *inherrit* from IfcRoot, and cosequently they all have the following attributes ([ref. Ifc documentation](https://standards.buildingsmart.org/IFC/RELEASE/IFC2x3/FINAL/HTML/)): \n\n>* GlobalId\t : \tAssignment of a globally unique identifier within the entire software world.\n>* OwnerHistory\t : \tAssignment of the information about the current ownership of that object, including owning actor, >application, local identification and information captured about the recent changes of the object, NOTE: only the >last modification in stored.\n>* Name\t : \tOptional name for use by the participating software systems or users. For some subtypes of IfcRoot the >insertion of the Name attribute may be required. This would be enforced by a where rule.\n>* Description\t : \tOptional description, provided for exchanging informative comments.\n\nSo, it might be a good idea to package code into functions. That way we could share it and reuse the code regardless of which element we want to change the name and description on. \n\nThe first challenge is to create a function that takes in a IfcWindow (or another Ifc object), get the Name and store that in the Description property of the object. Then, add an input variable that *if set* will be given as a Name to the window or it will store a blank string as the Name. \n\nThe second challenge is to write code to print out the names and descriptions (before and after) to confirm your result. \n\n\n** Some hints:** \n* Python functions could have input variables that have default variables eg. def func (var1,var2 =\"\"):.\n* Look above to find code that enables you to import the ifcopenshell code library, that have code that enables you to work with teh IFC file.\n* Look above, or eg. in [academy.ifcopenshell.org](http://academy.ifcopenshell.org), to find code in the ifcopenshell library that could store a reference to an IFC model in a variable and another function that lets you query that file for all IfcWindow objects you might want to find in that model. \n* You can set the ```Name```variable of a window object that is store in a variable called \"window\" by ```window.Name = \"the name you want to set\"````\n\n",
"_____no_output_____"
]
],
[
[
"### Add your code here ### \n\n\n\n",
"_____no_output_____"
]
],
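[
[
"If you want to check your approach, here is one possible sketch for Exercise 6. The function name and the default value are our own choices, and the code assumes the ```file``` variable from the cells above:\n\n```Python\ndef copyNameToDescription(obj, newName=\"\"):\n    obj.Description = obj.Name # store the old name in the Description attribute\n    obj.Name = newName # set the new name, or a blank string if none was given\n\nfor window in file.by_type(\"IfcWindow\"):\n    print(window.Name, window.Description) # before\n    copyNameToDescription(window)\n    print(window.Name, window.Description) # after\n```",
"_____no_output_____"
]
],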
[
[
"# Classes and Objects\n\nWe have already worked alot with objects, some window objects, some string objects, and even some list objects. So, what is objects and where do they come from? Objects are an encapsulation of variables and functions into a single entity. Objects get their variables and functions from classes. Classes are essentially a template to create your objects. \n\nWhen functions are code blocks that could be shared and reused, classes and objects are even bigger blocks of reusable code. For the IfcWindow example above, it has a class that defines what the variables and functions the window objects of that class has. The documentation of what capabilities the IfcWindow class should have is defined by the Ifc documentation for IfcWindow. You could also have documentation of how that class is defined/implemented in the particular programming language. \n\nA very basic class would look something like this:\n\n```Python\nclass MyWindowClass:\n nameVariable = \"Window\"\n\n def namePrint(self):\n print(\"My Name is \"+self.nameVariable)\n```\n## Defining a Class\nLets define a Window Class",
"_____no_output_____"
]
],
[
[
"## Defines a Window Class\nclass MyWindowClass:\n nameVariable = \"Window\"\n\n def namePrint(self):\n print(\"My Name is \"+self.nameVariable)",
"_____no_output_____"
]
],
[
[
"## Creating objects from class definitions\nWhen you have defined a class, you could create several objects based on it. ",
"_____no_output_____"
]
],
[
[
"## Defines two different objects based on the MyWindowClass \nwindow1 = MyWindowClass()\nwindow2 = MyWindowClass()",
"_____no_output_____"
]
],
[
[
"## Accessing Object variables and functions\nNow the window1 and window2 objects have both a nameVariable and namePrint function each. As we have seen above, we access its variables (attributes) by the \".\" operator. ",
"_____no_output_____"
]
],
[
[
"# Get the nameVariable of window1\nwindow1.nameVariable",
"_____no_output_____"
]
],
[
[
"### MiniExercise: Do the same for window2 below",
"_____no_output_____"
]
],
[
[
"### Get the the name variable of window2\n\n\n",
"_____no_output_____"
]
],
[
[
"## Exercise 7 - Setting object variables\nBoth windows have the same name. Use what you know from above to set the name of window1 and window2 to something proper. ",
"_____no_output_____"
]
],
[
[
"### Solve Exercise 7 here:\n\n\n\n",
"_____no_output_____"
]
],
[
[
"## Exercise 8 - Accessing object functions\nIn the MyWindowClass definition we have a function to print out the name. Show below how this function can be accessed on both window1 and window2.",
"_____no_output_____"
]
],
[
[
"### Solve Exercise 8 here: \n\n\n\n",
"_____no_output_____"
]
],
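[
[
"A possible sketch covering Exercises 7 and 8 (the names below are just examples):\n\n```Python\nwindow1.nameVariable = \"Kitchen window\" # Exercise 7: set the variables\nwindow2.nameVariable = \"Bedroom window\"\n\nwindow1.namePrint() # Exercise 8: call the function on both objects\nwindow2.namePrint()\n```",
"_____no_output_____"
]
],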
[
[
"# Modules and Packages\nIn programming, a module is a piece of software that has a specific functionality. For example, when building a ping pong game, one module could be responsible for the game logic, and another module could be responsible for drawing the game on the screen. Each module is a different file, which can be edited separately.\n\nModules in Python are simply Python files with a .py extension. The name of the module will be the name of the file. A Python module can have a set of functions, classes or variables defined and implemented.\n\nYou might have seen the ```ifc_viewer.py``` file in the folders? That is a module, and particularly a module that enables viewing of models in jupyter notebooks. So, how do one use that? ",
"_____no_output_____"
],
[
"## Using Modules in other programs\nModules could be used in other programs by using the special ```import```statement. Eg. to import all of it use ```import ifc_viewer```would do the trick. \n\nHowever, the module may define several classes, variables and functions. What if we only wanted to import the ```ifc_viewer``` class in the ```ifc_viewer```module? Then we do like bewlow. ",
"_____no_output_____"
]
],
[
[
"# Import the ifc_viewer class of ifc_viewer module\n",
"_____no_output_____"
]
],
[
[
"## Using Packages and Modules in another program\nAnother package we already have used is the **ifcopenshell**. Packages are namespaces which contain multiple packages and modules themselves. They are simply directories, but with a twist.\n\nEach package in Python is a directory which MUST contain a special file called __init__.py. This file can be empty, and it indicates that the directory it contains is a Python package, so it can be imported the same way a module can be imported ([ref. learnpython.org](https://www.learnpython.org/en/Modules_and_Packages)).\n\nWe have already imported ifcopenshell. But we could also import specific parts of it. Lets combine it to visualize the windows of the Grethes Hus model. ",
"_____no_output_____"
]
],
[
[
"import ifcopenshell \nimport ifcopenshell.geom\nfrom ifc_viewer import ifc_viewer\n# Storing the model in a file variable, and giving the path to the file as input. \nfile = ifcopenshell.open(\"../models/Grethes_hus_bok_2.ifc\")\n\n# Storing all windows of the file by ising the by_type function of the file class \nwindows = file.by_type(\"IfcWindow\")\n\n# Setting the geometry settings. \ns = ifcopenshell.geom.settings()\ns.set(s.USE_PYTHON_OPENCASCADE, True)\n\n# Instansiationg a viewer object from the ifc_viewer class\nviewer = ifc_viewer()\n\n# Running through all window elements and create a shape and add it to the viewer for displaying. \nfor window in windows:\n shape = ifcopenshell.geom.create_shape(s, window)\n viewer.DisplayShape(window, shape.geometry, shape.styles)\n\nviewer.Display()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e729808c8810f028eba888331370571ce5351334 | 48,236 | ipynb | Jupyter Notebook | julia-100-exercises.ipynb | rougier/julia-100-exercises | d9d216ea4d430e68a74d07eb70461d22a056435c | [
"MIT"
] | 2 | 2021-01-28T22:01:34.000Z | 2021-11-17T12:07:05.000Z | julia-100-exercises.ipynb | astrieanna/julia-100-exercises | d9d216ea4d430e68a74d07eb70461d22a056435c | [
"MIT"
] | null | null | null | julia-100-exercises.ipynb | astrieanna/julia-100-exercises | d9d216ea4d430e68a74d07eb70461d22a056435c | [
"MIT"
] | 3 | 2016-09-25T00:41:32.000Z | 2021-11-17T12:07:06.000Z | 30.166354 | 242 | 0.44247 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72982160d22cffb4ca28eac671b0e20e8b8a137 | 15,529 | ipynb | Jupyter Notebook | notebooks/04_2_Model_Stacking.ipynb | SuRreal1000/capstone_know_your_ship | 11e94d2ea44804095bfde16a9413026e5176c9ef | [
"MIT"
] | null | null | null | notebooks/04_2_Model_Stacking.ipynb | SuRreal1000/capstone_know_your_ship | 11e94d2ea44804095bfde16a9413026e5176c9ef | [
"MIT"
] | null | null | null | notebooks/04_2_Model_Stacking.ipynb | SuRreal1000/capstone_know_your_ship | 11e94d2ea44804095bfde16a9413026e5176c9ef | [
"MIT"
] | null | null | null | 26.682131 | 383 | 0.568871 | [
[
[
"# 04_2 Model_Stacking",
"_____no_output_____"
],
[
"This notebook includes the data preparation and the developement of a Stacking model.\n\nDue to NDA agreements no data can be displayed.",
"_____no_output_____"
],
[
"Data Preparation, Data Cleaning, and Preparation for Modelling is the same for all algorithms. To directly go to modelling click [here](#modelling)",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"## Data preparation",
"_____no_output_____"
],
[
"### Import libraries and read data",
"_____no_output_____"
]
],
[
[
"import pandas as pd \nimport numpy as np\n\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nimport sys\nsys.path.append(\"..\")\nimport mlflow\nfrom modeling.config import EXPERIMENT_NAME\nTRACKING_URI = open(\"../.mlflow_uri\").read().strip()\n\n\nfrom sklearn.ensemble import RandomForestRegressor\nfrom xgboost import XGBRegressor\nfrom sklearn.ensemble import StackingRegressor\nfrom sklearn.linear_model import RidgeCV\nfrom sklearn.svm import LinearSVR\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import PolynomialFeatures\n\nfrom sklearn.preprocessing import power_transform\nfrom sklearn.preprocessing import PowerTransformer\nfrom scipy import stats\n#>>> print(power_transform(data, method='box-cox'))\nimport statsmodels.api as sm",
"_____no_output_____"
],
[
"# read data\ndf = pd.read_csv('../data/Featureselection03.csv')\ndf.head()",
"_____no_output_____"
]
],
[
[
"### Create data frame with important features",
"_____no_output_____"
],
[
"So that everyone is on track with the feature selection, we created another csv file to rate the importance and only use important features for training our models and further analysis.",
"_____no_output_____"
],
[
"Only important features are used to train the model. In this case we use 17 features beside the target.",
"_____no_output_____"
]
],
[
[
"# read list with feature importance\ndata_log = pd.read_csv('../data/Capstone_features_Features.csv')\ndata_log.head()",
"_____no_output_____"
],
[
"# create list of important features (feature importance < 3)\nlist_imp_feat = list(data_log[data_log['ModelImportance'] < 3]['VarName'])\nlen(list_imp_feat)",
"_____no_output_____"
],
[
"df_model = df[list_imp_feat].copy()",
"_____no_output_____"
],
[
"df_model.info()",
"_____no_output_____"
]
],
[
[
"### Fill and drop NaN",
"_____no_output_____"
],
[
"Values for V.SLPOG.act.PRC and ME.SFCI.act.gPkWh contain missing values. The EDA showed that these are mainly caused during harbour times when the main engine was not running. Therefore it makes sense to fill the missing values with 0.",
"_____no_output_____"
]
],
[
[
"df_model['V.SLPOG.act.PRC'].fillna(0,inplace=True)\ndf_model['ME.SFCI.act.gPkWh'].fillna(0,inplace=True)",
"_____no_output_____"
],
[
"df_model['A.SOG.next.kn'] = (df_model['V.SOG.act.kn'].shift(-1) - df_model['V.SOG.act.kn'])\ndf_model['A.SOG.next.kn'].fillna(df_model['V.SOG.act.kn'], inplace=True)\ndf_model['A.SOG.next.kn'].describe()",
"_____no_output_____"
]
],
[
[
"The remaining rows with missing values are dropped.",
"_____no_output_____"
]
],
[
[
"df_model.dropna(inplace=True)",
"_____no_output_____"
],
[
"df_model.info()",
"_____no_output_____"
],
[
"plt.figure(figsize = (30,28))\nsns.heatmap(df_model.corr(), annot = True, cmap = 'RdYlGn')",
"_____no_output_____"
]
],
[
[
"### Define target",
"_____no_output_____"
]
],
[
[
"X = df_model.drop(['ME.FMS.act.tPh'], axis = 1)\ny = df_model['ME.FMS.act.tPh']",
"_____no_output_____"
],
[
"X.rename(columns={'passage_type_Europe<13.5kn': 'passage_type_Europe_smaller_13.5kn', 'passage_type_Europe>13.5kn': 'passage_type_Europe_greater_13.5kn',\\\n 'passage_type_SouthAmerica<13.5kn': 'passage_type_SouthAmerica_smaller_13.5kn', 'passage_type_SouthAmerica>13.5kn': 'passage_type_SouthAmerica_greater_13.5kn'}, inplace=True)",
"_____no_output_____"
]
],
[
[
"### Train Test Split",
"_____no_output_____"
],
[
"Due to the high amount of data, a split into 10% test data and 90% train data is chosen. The random state is set to 42 to have comparable results for diffent models. To account for the imbalance in the distribution of passage types the stratify parameter is used for this feature. This results in approximately the same percentage of the different passage types in each subset.",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = X['passage_type'], test_size = 0.1, random_state = 42)",
"_____no_output_____"
]
],
[
[
"### Create dummy values for passage type",
"_____no_output_____"
],
[
"As passage_type is the only object type, get_dummies will only create dummies for passage_type.",
"_____no_output_____"
]
],
[
[
"X_train = pd.get_dummies(X_train, drop_first=True)\nX_test = pd.get_dummies(X_test, drop_first=True)",
"_____no_output_____"
]
],
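[
[
"Note that calling get_dummies separately on the train and the test split can yield mismatching columns if a category is missing from one of the subsets. A common safeguard (a sketch, not part of the original pipeline) is to align the test frame to the training columns:\n\n```Python\n# keep only the training columns; categories unseen in the test split are filled with 0\nX_train, X_test = X_train.align(X_test, join='left', axis=1, fill_value=0)\n```",
"_____no_output_____"
]
],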
[
[
"### Set MLFlow connection",
"_____no_output_____"
],
[
"MLFlow is used to track and compare different models and model settings.",
"_____no_output_____"
]
],
[
[
"runmlflow = False\n\n# setting the MLFlow connection and experiment\nif runmlflow == True:\n mlflow.set_tracking_uri(TRACKING_URI)\n mlflow.set_experiment(EXPERIMENT_NAME)\n mlflow.start_run(run_name='Stacking (Poly, RF_Hyper)') # CHANGE!\n run = mlflow.active_run()",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## Modelling <a id='modelling'></a>",
"_____no_output_____"
]
],
[
[
"RSEED = 42",
"_____no_output_____"
]
],
[
[
"For all models in this project a MinMaxScaler is applied. For this model a random forrest is used. The hyperparameter are selected based on grid search and offer a reasonable balance between optimal results and overfitting. These settings are used in a pipeline.",
"_____no_output_____"
],
[
"### Pipeline",
"_____no_output_____"
]
],
[
[
"estimators = [\n ('rfh', make_pipeline(MinMaxScaler(), RandomForestRegressor(criterion= 'squared_error',\n max_depth= 40, \n max_features= 'auto',\n max_leaf_nodes= 7000, \n min_samples_split= 20,\n n_estimators= 100,\n random_state=RSEED))), # ('xgb', make_pipeline(MinMaxScaler(), XGBRegressor(seed = RSEED))),\n ('plr', make_pipeline(PolynomialFeatures(degree=2), MinMaxScaler() , LinearRegression())),\n ]\nreg = StackingRegressor(estimators=estimators, final_estimator=RandomForestRegressor(random_state=RSEED))\n",
"_____no_output_____"
]
],
[
[
"### Fit and predict",
"_____no_output_____"
]
],
[
[
"reg.fit(X_train, y_train)\ny_pred = reg.predict(X_test)\ny_pred_train = reg.predict(X_train)",
"_____no_output_____"
]
],
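[
[
"Before evaluating the stack it can be useful to sanity-check the individual base learners. A quick sketch using the fitted base estimators that scikit-learn exposes after fitting (variable names follow the cells above):\n\n```Python\nfor name, est in reg.named_estimators_.items():\n    rmse = mean_squared_error(y_test, est.predict(X_test), squared=False)\n    print(name, 'test RMSE:', rmse)\n```",
"_____no_output_____"
]
],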
[
[
"---",
"_____no_output_____"
],
[
"## Analysis",
"_____no_output_____"
],
[
"### Errors and residuals",
"_____no_output_____"
],
[
"The root mean squared error (RMSE) is used to evaluate the model. ",
"_____no_output_____"
]
],
[
[
"y_pred2 = y_pred.copy()\ny_pred2_train = y_pred_train.copy()\n\ny_pred2[y_pred2 < 0.013509] = 0\ny_pred2_train[y_pred2_train < 0.013509] = 0 #0.013509\n\nprint('RMSE train: ', mean_squared_error(y_train, y_pred2_train, squared= False))\nrmse_train = mean_squared_error(y_train, y_pred2_train, squared= False)\nprint('RMSE test: ', mean_squared_error(y_test, y_pred2, squared= False))\nrmse_test = mean_squared_error(y_test, y_pred2, squared= False)",
"_____no_output_____"
]
],
[
[
"Plotting actual values against predicted shows that the points are close to the optimal diagonale. However, this plot and the yellowbrick residual plot show some dificulties the model has when predicting low target values.",
"_____no_output_____"
]
],
[
[
"fig=plt.figure(figsize=(6, 6))\nplt.axline([1, 1], [2, 2],color='lightgrey')\nplt.scatter(y_train, y_pred2_train, color ='#33424F')\nplt.scatter(y_test, y_pred2, color = '#FF6600')\n#plt.xticks(np.arange(0,501,100));\n#plt.yticks(np.arange(0,501,100));\nplt.xlabel(\"ME.FMS.act.tPh actual\");\nplt.ylabel(\"ME.FMS.act.tPh predicted\");\n#plt.xlim(0, 450);\n#plt.ylim(0, 450);",
"_____no_output_____"
],
[
"residuals_train = y_pred2_train - y_train\nresiduals_test = y_pred2 - y_test",
"_____no_output_____"
],
[
"sns.scatterplot(x = y_pred2_train, y = residuals_train)\nsns.scatterplot(x = y_pred2, y = residuals_test)\nplt.axhline(y = 0, color = 'black')\nplt.xlabel(\"ME.FMS.act.tPh predicted\");\nplt.ylabel(\"Residuals\");\nplt.legend(labels=['', 'train', 'test'])",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## Write to MLFlow",
"_____no_output_____"
]
],
[
[
"#seting parameters that should be logged on MLFlow\n#these parameters were used in feature engineering (inputing missing values)\n#or parameters of the model (fit_intercept for Linear Regression model)\nparams = {\n \"features drop\": 'EntryDate,Date_daily, Type_daily, TI.LOC.act.ts, WEA.WDR.act.deg, WEA.WSR.act.mPs, WEA.WDTV.act.deg, trip_id, LS.GME.act.nodim, V.WD.act.m',\n \"explanation\": 'correlated features with <0.95 where dropped',\n \"csv used\": 'Featureselection03.csv',\n \"NaN handling\": 'V.SLPOG.act.PRC and ME.SFCI.act.gPkWh filled with 0, rest dropped by row',\n 'Shape' : df.shape,\n 'Scaler' : 'MinMaxScaler'\n }",
"_____no_output_____"
],
[
"if runmlflow == True:\n #logging params to mlflow\n mlflow.log_params(params)\n #setting tags\n mlflow.set_tag(\"running_from_jupyter\", \"True\")\n #logging metrics\n mlflow.log_metric(\"train-\" + \"RMSE\", rmse_train)\n mlflow.log_metric(\"test-\" + \"RMSE\", rmse_test)\n # logging the model to mlflow will not work without a AWS Connection setup.. too complex for now\n # but possible if running mlflow locally\n # mlflow.log_artifact(\"../models\")\n # mlflow.sklearn.log_model(reg, \"model\")\n mlflow.end_run()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
e7298bb8557a95f2144e6db902b8973ae80c9325 | 10,204 | ipynb | Jupyter Notebook | python/jupyter/ThetaSketchNotebook.ipynb | tdoehmen/datasketches-cpp | 9fd6f0fe90f4566f42daebc2d2323ce09f5f7c55 | [
"BSL-1.0",
"Apache-2.0",
"MIT"
] | 64 | 2021-01-10T19:13:34.000Z | 2022-03-29T00:31:02.000Z | python/jupyter/ThetaSketchNotebook.ipynb | tdoehmen/datasketches-cpp | 9fd6f0fe90f4566f42daebc2d2323ce09f5f7c55 | [
"BSL-1.0",
"Apache-2.0",
"MIT"
] | 74 | 2021-01-04T18:43:50.000Z | 2022-03-17T22:11:02.000Z | python/jupyter/ThetaSketchNotebook.ipynb | tdoehmen/datasketches-cpp | 9fd6f0fe90f4566f42daebc2d2323ce09f5f7c55 | [
"BSL-1.0",
"Apache-2.0",
"MIT"
] | 30 | 2019-05-20T21:17:27.000Z | 2020-12-05T09:14:00.000Z | 25.702771 | 278 | 0.523226 | [
[
[
"## Theta Sketch Examples",
"_____no_output_____"
],
[
"### Basic Sketch Usage",
"_____no_output_____"
]
],
[
[
"from datasketches import theta_sketch, update_theta_sketch, compact_theta_sketch\nfrom datasketches import theta_union, theta_intersection, theta_a_not_b",
"_____no_output_____"
]
],
[
[
"To start, we'll create a sketch with 1 million points in order to demonstrate basic sketch operations.",
"_____no_output_____"
]
],
[
[
"n = 1000000\nk = 12\nsk1 = update_theta_sketch(k)\nfor i in range(0, n):\n sk1.update(i)\nprint(sk1)",
"### Update Theta sketch summary:\n lg nominal size : 12\n lg current size : 13\n num retained keys : 6560\n resize factor : 8\n sampling probability : 1\n seed hash : 37836\n ordered? : false\n theta (fraction) : 0.00654224\n theta (raw 64-bit) : 60341508738660257\n estimation mode? : true\n estimate : 1.00271e+06\n lower bound 95% conf : 978261\n upper bound 95% conf : 1.02778e+06\n### End sketch summary\n\n"
]
],
[
[
"The summary contains most data fo interest, but we can also query for specific information. And in this case, since we know the exact number of distinct items presented ot the sketch, we can look at the estimate, upper, and lower bounds as a percentage of the exact value.",
"_____no_output_____"
]
],
[
[
"print(\"Upper bound (1 std. dev) as % of true value:\\t\", round(100*sk1.get_upper_bound(1) / n, 4))\nprint(\"Sketch estimate as % of true value:\\t\\t\", round(100*sk1.get_estimate() / n, 4))\nprint(\"Lower bound (1 std. dev) as % of true value:\\t\", round(100*sk1.get_lower_bound(1) / n, 4))",
"Upper bound (1 std. dev) as % of true value:\t 101.5208\nSketch estimate as % of true value:\t\t 100.2715\nLower bound (1 std. dev) as % of true value:\t 99.0374\n"
]
],
[
[
"We can serialize and reconstruct the sketch. If we compact the sketch prior to serialization, we can still query the rebuilt sketch but cannot update it further.",
"_____no_output_____"
]
],
[
[
"sk1_bytes = sk1.compact().serialize()\nlen(sk1_bytes)",
"_____no_output_____"
],
[
"new_sk1 = theta_sketch.deserialize(sk1_bytes)\nprint(\"Estimate: \\t\\t\", new_sk1.get_estimate())\nprint(\"Estimation mode: \\t\", new_sk1.is_estimation_mode())",
"Estimate: \t\t 1002714.745231455\nEstimation mode: \t True\n"
]
],
[
[
"### Sketch Unions",
"_____no_output_____"
],
[
"Theta Sketch unions make use of a separate union object. The union will accept input sketches with different values of $k$.\n\nFor this example, we will create a sketch with distinct values that partially overlap those in `sk1`.",
"_____no_output_____"
]
],
[
[
"offset = int(3 * n / 4)\nsk2 = update_theta_sketch(k+1)\nfor i in range(0, n):\n sk2.update(i + offset)\nprint(sk2)",
"### Update Theta sketch summary:\n lg nominal size : 13\n lg current size : 14\n num retained keys : 12488\n resize factor : 8\n sampling probability : 1\n seed hash : 37836\n ordered? : false\n theta (fraction) : 0.0123336\n theta (raw 64-bit) : 113757656857900725\n estimation mode? : true\n estimate : 1.01252e+06\n lower bound 95% conf : 994626\n upper bound 95% conf : 1.03073e+06\n### End sketch summary\n\n"
]
],
[
[
"We can now feed the sketches into the union. As constructed, the exact number of unique values presented to the two sketches is $\\frac{7}{4}n$.",
"_____no_output_____"
]
],
[
[
"union = theta_union(k)\nunion.update(sk1)\nunion.update(sk2)\nresult = union.get_result()\nprint(\"Union estimate as % of true value: \", round(100*result.get_estimate()/(1.75*n), 4))",
"Union estimate as % of true value: 99.6787\n"
]
],
[
[
"### Sketch Intersections",
"_____no_output_____"
],
[
"Beyond unions, theta sketches also support intersctions through the use of an intersection object. These set intersections can have vastly superior error bounds than the classic inclusion-exclusion rule used with sketches like HLL.",
"_____no_output_____"
]
],
[
[
"intersection = theta_intersection()\nintersection.update(sk1)\nintersection.update(sk2)\nprint(\"Has result: \", intersection.has_result())\nresult = intersection.get_result()\nprint(result)",
"Has result: True\n### Compact Theta sketch summary:\n num retained keys : 1668\n seed hash : 37836\n ordered? : true\n theta (fraction) : 0.00654224\n theta (raw 64-bit) : 60341508738660257\n estimation mode? : true\n estimate : 254959\n lower bound 95% conf : 242739\n upper bound 95% conf : 267789\n### End sketch summary\n\n"
]
],
[
[
"In this case, we expect the sets to have an overlap of $\\frac{1}{4}n$.",
"_____no_output_____"
]
],
[
[
"print(\"Intersection estimate as % of true value: \", round(100*result.get_estimate()/(0.25*n), 4))",
"Intersection estimate as % of true value: 101.9834\n"
]
],
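[
[
"To make the comparison with inclusion-exclusion concrete: estimating the intersection indirectly as est(A) + est(B) - est(A union B) combines the error of three estimates, while the intersection object measures the overlap directly. A quick sketch reusing the sketches from above:\n\n```python\nu = theta_union(k)\nu.update(sk1)\nu.update(sk2)\nincl_excl = sk1.get_estimate() + sk2.get_estimate() - u.get_result().get_estimate()\nprint(\"Inclusion-exclusion estimate as % of true value: \", round(100*incl_excl/(0.25*n), 4))\n```",
"_____no_output_____"
]
],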
[
[
"### Set Subtraction (A-not-B)",
"_____no_output_____"
],
[
"Finally, we have the set subtraction operation. Unlike `theta_union` and `theta_intersection`, `theta_a_not_b` always takes as input 2 sketches at a time, namely $a$ and $b$, and directly returns the result as a sketch.",
"_____no_output_____"
]
],
[
[
"anb = theta_a_not_b()\nresult = anb.compute(sk1, sk2)\nprint(result)",
"### Compact Theta sketch summary:\n num retained keys : 4892\n seed hash : 37836\n ordered? : true\n theta (fraction) : 0.00654224\n theta (raw 64-bit) : 60341508738660257\n estimation mode? : true\n estimate : 747756\n lower bound 95% conf : 726670\n upper bound 95% conf : 769452\n### End sketch summary\n\n"
]
],
[
[
"By using the same two sketches as before, the expected result here is $\\frac{3}{4}n$.",
"_____no_output_____"
]
],
[
[
"print(\"A-not-B estimate as % of true value: \", round(100*result.get_estimate()/(0.75*n), 4))",
"A-not-B estimate as % of true value: 99.7008\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e729a7b66a07495ce951b1c625360900381cdd11 | 10,788 | ipynb | Jupyter Notebook | julia/Untitled.ipynb | egobiernoytp/lac_decarbonization | 7b574c4c91a0b1341dfd97a203fc8477ba32a91d | [
"Apache-2.0"
] | null | null | null | julia/Untitled.ipynb | egobiernoytp/lac_decarbonization | 7b574c4c91a0b1341dfd97a203fc8477ba32a91d | [
"Apache-2.0"
] | null | null | null | julia/Untitled.ipynb | egobiernoytp/lac_decarbonization | 7b574c4c91a0b1341dfd97a203fc8477ba32a91d | [
"Apache-2.0"
] | null | null | null | 36.693878 | 1,528 | 0.525028 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e729afb7839b9cdbcfa72133e7cf34dde5aaf097 | 184,018 | ipynb | Jupyter Notebook | Tash-PT/RNN_tash.ipynb | letisousa/SentimentAnalysisUpdates | e664e499a20e8005fbc22c9bb8bde64a94ca8730 | [
"MIT"
] | 3 | 2021-08-05T19:11:57.000Z | 2021-08-06T11:45:59.000Z | Tash-PT/RNN_tash.ipynb | letisousa/SentimentAnalysisUpdates | e664e499a20e8005fbc22c9bb8bde64a94ca8730 | [
"MIT"
] | null | null | null | Tash-PT/RNN_tash.ipynb | letisousa/SentimentAnalysisUpdates | e664e499a20e8005fbc22c9bb8bde64a94ca8730 | [
"MIT"
] | null | null | null | 80.638913 | 24,044 | 0.728782 | [
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nimport re\nimport nltk\nfrom nltk import word_tokenize, RegexpTokenizer\nfrom nltk.corpus import stopwords, wordnet\nnltk.download('stopwords')\nnltk.download('wordnet')\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report\nfrom sklearn.preprocessing import OneHotEncoder\nimport autokeras as ak\nfrom sklearn.model_selection import KFold\n\nimport tensorflow as tf\nfrom tensorflow.keras.layers.experimental.preprocessing import TextVectorization\nfrom sklearn.svm import SVC\nfrom sklearn.linear_model import LogisticRegression",
"[nltk_data] Downloading package stopwords to\n[nltk_data] C:\\Users\\ddayv\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n[nltk_data] Downloading package wordnet to\n[nltk_data] C:\\Users\\ddayv\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n"
],
[
"df = pd.read_csv(\"tash-pt.csv\")\ndf",
"_____no_output_____"
],
[
"df.isnull().sum()\ndf = df.dropna()",
"_____no_output_____"
],
[
"df = df.drop(columns=['id_twitter'])\ndf['sentiment'].unique()",
"_____no_output_____"
],
[
"df['sentiment'].value_counts()",
"_____no_output_____"
],
[
"Tweet = df['text']\nsentiment = np.asarray(df['sentiment'])",
"_____no_output_____"
],
[
"count_vect = CountVectorizer()\nX_train = count_vect.fit_transform(Tweet)\n\ntfidf_transformer = TfidfTransformer()\nX_train_transform = tfidf_transformer.fit_transform(X_train) # Aplicando o TF-IDF\nX_train_transform.shape",
"_____no_output_____"
],
[
"X_train, X_test, Y_train, Y_test = train_test_split(X_train_transform, sentiment, test_size=0.3)\n\nclf = MultinomialNB().fit(X_train, Y_train) # Aplicando naive bayes\npredicted = clf.predict(X_test)\n\nprint(classification_report(Y_test, predicted))",
" precision recall f1-score support\n\n -1 0.50 0.40 0.45 264\n 0 0.43 0.65 0.51 297\n 1 0.49 0.30 0.38 276\n\n accuracy 0.46 837\n macro avg 0.47 0.45 0.45 837\nweighted avg 0.47 0.46 0.45 837\n\n"
],
[
"kf = KFold(n_splits=10)\nclf = MultinomialNB()\nlista = []\nfor train_index, test_index in kf.split(X_train_transform):\n X_train, X_test = X_train_transform[train_index], X_train_transform[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n lista.append(clf.score(X_test, y_test))\n \n print(clf.score(X_test, y_test))\n \nprint(f' Média: {np.array(lista).mean()} \\t Std: {np.array(lista).std()} ')",
"0.4229390681003584\n0.43727598566308246\n0.4121863799283154\n0.4265232974910394\n0.4838709677419355\n0.4731182795698925\n0.45878136200716846\n0.44244604316546765\n0.4352517985611511\n0.4784172661870504\n Média: 0.44708104484154604 \t Std: 0.02372960041678921 \n"
]
],
[
[
"## SVM",
"_____no_output_____"
]
],
[
[
"X_train, X_test, Y_train, Y_test = train_test_split(X_train_transform, sentiment, test_size=0.3)\nclf = SVC().fit(X_train, Y_train) \npredicted = clf.predict(X_test)\n\nprint(classification_report(Y_test, predicted))",
" precision recall f1-score support\n\n -1 0.45 0.39 0.42 256\n 0 0.43 0.67 0.52 317\n 1 0.44 0.21 0.29 264\n\n accuracy 0.44 837\n macro avg 0.44 0.42 0.41 837\nweighted avg 0.44 0.44 0.42 837\n\n"
],
[
"kf = KFold(n_splits=10)\nclf = SVC()\nlista = []\nfor train_index, test_index in kf.split(X_train_transform):\n X_train, X_test = X_train_transform[train_index], X_train_transform[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n lista.append(clf.score(X_test, y_test))\n \n print(clf.score(X_test, y_test))\n \nprint(f' Média: {np.array(lista).mean()} \\t Std: {np.array(lista).std()} ')",
"0.3835125448028674\n0.45878136200716846\n0.4121863799283154\n0.43727598566308246\n0.4659498207885305\n0.44802867383512546\n0.44086021505376344\n0.4748201438848921\n0.44964028776978415\n0.48201438848920863\n Média: 0.44530698022227383 \t Std: 0.028021649950196455 \n"
]
],
[
[
"## RL",
"_____no_output_____"
]
],
[
[
"X_train, X_test, Y_train, Y_test = train_test_split(X_train_transform, sentiment, test_size=0.3)\nclf = LogisticRegression(max_iter=1000).fit(X_train, Y_train) \npredicted = clf.predict(X_test)\n\nprint(classification_report(Y_test, predicted))",
" precision recall f1-score support\n\n -1 0.42 0.41 0.41 249\n 0 0.45 0.53 0.49 321\n 1 0.42 0.34 0.38 267\n\n accuracy 0.43 837\n macro avg 0.43 0.43 0.42 837\nweighted avg 0.43 0.43 0.43 837\n\n"
],
[
"kf = KFold(n_splits=10)\nclf = LogisticRegression(max_iter=1000)\nlista = []\nfor train_index, test_index in kf.split(X_train_transform):\n X_train, X_test = X_train_transform[train_index], X_train_transform[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n lista.append(clf.score(X_test, y_test))\n \n print(clf.score(X_test, y_test))\n \nprint(f' Média: {np.array(lista).mean()} \\t Std: {np.array(lista).std()} ')",
"0.41935483870967744\n0.45161290322580644\n0.4157706093189964\n0.43727598566308246\n0.4551971326164875\n0.4551971326164875\n0.4767025089605735\n0.4460431654676259\n0.45323741007194246\n0.46402877697841727\n Média: 0.44744204636290974 \t Std: 0.017888085792298797 \n"
],
[
"def pre_X(frases):\n lista = []\n \n for frase in frases:\n lista.append(frase)\n return lista\n\ndef pre_Y(number):\n lista = []\n \n for numb in number:\n lista.append(numb)\n return lista",
"_____no_output_____"
],
[
"#pré-processamento dos textos\ndef set_array(frases):\n \n vocab = []\n palavras = []\n for frase in frases:\n text_array = remove_user(frase)\n text_array = Tokenize(text_array)\n text_array = text_array.split(' ')\n for i in range(len(text_array)):\n vocab.append(text_array[i])\n return vocab\n\ndef Tokenize(f): \n\n f = f.lower().replace('\\n', '').replace('-','').replace('#','').replace('.','').replace(',','').replace('!','').replace('r\\n','').replace(' ','')\n token = RegexpTokenizer(r\"\\w+\")\n f = token.tokenize(f)\n stop_words = set(stopwords.words('portuguese'))\n new_word = [word for word in f if not word in stop_words]\n return ' '.join(new_word)\n\ndef remove_user(frase):\n return re.sub('@\\w+','',frase)",
"_____no_output_____"
],
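[
"# A quick sanity check of the helper functions above (made-up sample tweet, not from the dataset)\nsample = '@user Otimo dia, pessoal!!! #bomdia'\nprint(remove_user(sample)) # the @-mention is stripped\nprint(Tokenize(remove_user(sample))) # lowercased, punctuation and stopwords removed",
"_____no_output_____"
],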
[
"model = tf.keras.Sequential([\n tf.keras.layers.Dense(50, activation='relu'),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(25, activation='relu'),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(10, activation='tanh'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(3 , activation='softmax')\n])\n\nmodel.compile(\n optimizer='adam',\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n)",
"_____no_output_____"
],
[
"Tweet = Tweet.apply(remove_user)\nTweet_preprocessed = Tweet.apply(Tokenize)",
"_____no_output_____"
],
[
"count_vect = CountVectorizer()\nX_train = count_vect.fit_transform(Tweet_preprocessed)\n\ntfidf_transformer = TfidfTransformer()\nX_train_transform = tfidf_transformer.fit_transform(X_train) # Aplicando o TF-IDF\n\n\nX_train, X_test, Y_train, Y_test = train_test_split(X_train_transform, sentiment, test_size=0.3)\n\nX_train",
"_____no_output_____"
],
[
"X_test",
"_____no_output_____"
],
[
"one = OneHotEncoder(sparse=False)",
"_____no_output_____"
],
[
"y_one = one.fit_transform(Y_train.reshape(-1,1))\ny_one_ = one.transform(Y_test.reshape(-1,1))",
"_____no_output_____"
],
[
"fit = model.fit(X_train.todense(), y_one, epochs=10, validation_data=(X_test.todense(), y_one_))",
"Epoch 1/10\n61/61 [==============================] - 0s 7ms/step - loss: 1.0968 - accuracy: 0.3554 - val_loss: 1.0963 - val_accuracy: 0.3632\nEpoch 2/10\n61/61 [==============================] - 0s 5ms/step - loss: 1.0721 - accuracy: 0.4267 - val_loss: 1.0830 - val_accuracy: 0.3787\nEpoch 3/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.8989 - accuracy: 0.6826 - val_loss: 1.0778 - val_accuracy: 0.4277\nEpoch 4/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.5435 - accuracy: 0.8344 - val_loss: 1.2031 - val_accuracy: 0.4313\nEpoch 5/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.2643 - accuracy: 0.9359 - val_loss: 1.3849 - val_accuracy: 0.4444\nEpoch 6/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.1257 - accuracy: 0.9779 - val_loss: 1.6006 - val_accuracy: 0.4229\nEpoch 7/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.0668 - accuracy: 0.9908 - val_loss: 1.7504 - val_accuracy: 0.4265\nEpoch 8/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.0487 - accuracy: 0.9918 - val_loss: 1.8939 - val_accuracy: 0.4146\nEpoch 9/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.0316 - accuracy: 0.9954 - val_loss: 2.0011 - val_accuracy: 0.4301\nEpoch 10/10\n61/61 [==============================] - 0s 5ms/step - loss: 0.0299 - accuracy: 0.9954 - val_loss: 2.1217 - val_accuracy: 0.4158\n"
],
[
"predicted = model.predict(X_test.todense())\n\nprint(classification_report(np.argmax(y_one_, axis=1), np.argmax(predicted, axis=1)))",
" precision recall f1-score support\n\n 0 0.45 0.40 0.43 272\n 1 0.44 0.41 0.43 304\n 2 0.36 0.43 0.40 261\n\n accuracy 0.42 837\n macro avg 0.42 0.42 0.42 837\nweighted avg 0.42 0.42 0.42 837\n\n"
],
[
"plt.figure(figsize=(12,6))\nplt.plot(fit.history['loss'], label='Loss', color='darkred')\nplt.plot(fit.history['val_loss'], label='Val Loss', color='green')\nplt.legend()\nplt.ylabel('Loss')\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,6))\nplt.plot(fit.history['accuracy'], label='accuracy', color='darkred')\nplt.plot(fit.history['val_accuracy'], label='Val accuracy', color='green')\nplt.legend()\nplt.ylabel('Accuracy')\nplt.show()",
"_____no_output_____"
],
[
"kf = KFold(n_splits=10)\nval_accuracy = []\nindex = []\ncont = 0\n\nfor train_index, test_index in kf.split(X_train_transform):\n cont += 1\n \n model = tf.keras.Sequential([\n tf.keras.layers.Dense(50, activation='relu'),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(25, activation='relu'),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(10, activation='tanh'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(3 , activation='softmax')\n ])\n\n model.compile(\n optimizer='adam',\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n )\n \n X_train, X_test = X_train_transform[train_index], X_train_transform[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n y_train_one = one.fit_transform(y_train.reshape(-1,1))\n y_teste_one = one.fit_transform(y_test.reshape(-1,1))\n \n print(\"Iter: \",cont)\n print(\" \")\n \n fit = model.fit(X_train.todense(), y_train_one, epochs=5, validation_data=(X_test.todense(), y_teste_one))\n print(\" \")\n val_accuracy.append(fit.history['val_accuracy'])\n \n index.append((train_index,test_index))",
"Iter: 1\n \nEpoch 1/5\n79/79 [==============================] - 0s 5ms/step - loss: 1.0950 - accuracy: 0.3732 - val_loss: 1.1189 - val_accuracy: 0.2545\nEpoch 2/5\n79/79 [==============================] - 0s 4ms/step - loss: 1.0640 - accuracy: 0.3951 - val_loss: 1.0931 - val_accuracy: 0.3333\nEpoch 3/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.8610 - accuracy: 0.6818 - val_loss: 1.0648 - val_accuracy: 0.4731\nEpoch 4/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.4441 - accuracy: 0.8784 - val_loss: 1.2324 - val_accuracy: 0.4444\nEpoch 5/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.2036 - accuracy: 0.9553 - val_loss: 1.4517 - val_accuracy: 0.4552\n \nIter: 2\n \nEpoch 1/5\n79/79 [==============================] - 0s 6ms/step - loss: 1.0977 - accuracy: 0.3473 - val_loss: 1.0942 - val_accuracy: 0.3978\nEpoch 2/5\n79/79 [==============================] - 0s 5ms/step - loss: 1.0522 - accuracy: 0.4665 - val_loss: 1.0730 - val_accuracy: 0.4265\nEpoch 3/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.8053 - accuracy: 0.6990 - val_loss: 1.1635 - val_accuracy: 0.4122\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.3987 - accuracy: 0.8947 - val_loss: 1.3930 - val_accuracy: 0.4122\nEpoch 5/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.1829 - accuracy: 0.9621 - val_loss: 1.6861 - val_accuracy: 0.4194\n \nIter: 3\n \nEpoch 1/5\n79/79 [==============================] - 0s 6ms/step - loss: 1.0959 - accuracy: 0.3553 - val_loss: 1.0989 - val_accuracy: 0.3441\nEpoch 2/5\n79/79 [==============================] - 0s 4ms/step - loss: 1.0497 - accuracy: 0.4406 - val_loss: 1.0794 - val_accuracy: 0.4050\nEpoch 3/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.8093 - accuracy: 0.6272 - val_loss: 1.2079 - val_accuracy: 0.3799\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.5498 - accuracy: 0.7548 - val_loss: 1.4283 - val_accuracy: 0.3584\nEpoch 5/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.3750 - accuracy: 0.8664 - val_loss: 1.5872 - val_accuracy: 0.3943\n \nIter: 4\n \nEpoch 1/5\n79/79 [==============================] - 1s 7ms/step - loss: 1.0973 - accuracy: 0.3612 - val_loss: 1.0925 - val_accuracy: 0.3763\nEpoch 2/5\n79/79 [==============================] - 0s 4ms/step - loss: 1.0597 - accuracy: 0.4789 - val_loss: 1.0729 - val_accuracy: 0.4265\nEpoch 3/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.8208 - accuracy: 0.7137 - val_loss: 1.0890 - val_accuracy: 0.4122\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.4313 - accuracy: 0.8868 - val_loss: 1.3112 - val_accuracy: 0.4014\nEpoch 5/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.1821 - accuracy: 0.9621 - val_loss: 1.5900 - val_accuracy: 0.4444\n \nIter: 5\n \nEpoch 1/5\n79/79 [==============================] - 1s 7ms/step - loss: 1.0979 - accuracy: 0.3473 - val_loss: 1.0950 - val_accuracy: 0.3584\nEpoch 2/5\n79/79 [==============================] - 0s 5ms/step - loss: 1.0606 - accuracy: 0.4246 - val_loss: 1.0657 - val_accuracy: 0.4194\nEpoch 3/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.8431 - accuracy: 0.6332 - val_loss: 1.0769 - val_accuracy: 0.4552\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.4681 - accuracy: 0.8505 - val_loss: 1.2442 - val_accuracy: 0.4301\nEpoch 5/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.2085 - accuracy: 0.9498 
- val_loss: 1.4774 - val_accuracy: 0.4301\n \nIter: 6\n \nEpoch 1/5\n79/79 [==============================] - 0s 6ms/step - loss: 1.0975 - accuracy: 0.3616 - val_loss: 1.0938 - val_accuracy: 0.3799\nEpoch 2/5\n79/79 [==============================] - 0s 5ms/step - loss: 1.0569 - accuracy: 0.4306 - val_loss: 1.0714 - val_accuracy: 0.4409\nEpoch 3/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.8242 - accuracy: 0.6994 - val_loss: 1.0662 - val_accuracy: 0.4194\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.4212 - accuracy: 0.8840 - val_loss: 1.2436 - val_accuracy: 0.4444\nEpoch 5/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.1848 - accuracy: 0.9557 - val_loss: 1.4649 - val_accuracy: 0.4444\n \nIter: 7\n \nEpoch 1/5\n79/79 [==============================] - 0s 6ms/step - loss: 1.0966 - accuracy: 0.3672 - val_loss: 1.0970 - val_accuracy: 0.3548\nEpoch 2/5\n79/79 [==============================] - 0s 4ms/step - loss: 1.0559 - accuracy: 0.4187 - val_loss: 1.0841 - val_accuracy: 0.3943\nEpoch 3/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.8138 - accuracy: 0.7010 - val_loss: 1.1094 - val_accuracy: 0.4588\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.4401 - accuracy: 0.8724 - val_loss: 1.2460 - val_accuracy: 0.4695\nEpoch 5/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.2102 - accuracy: 0.9565 - val_loss: 1.4835 - val_accuracy: 0.4480\n \nIter: 8\n \nEpoch 1/5\n79/79 [==============================] - 0s 6ms/step - loss: 1.0968 - accuracy: 0.3599 - val_loss: 1.0937 - val_accuracy: 0.4029\nEpoch 2/5\n79/79 [==============================] - 0s 5ms/step - loss: 1.0467 - accuracy: 0.4759 - val_loss: 1.0652 - val_accuracy: 0.4317\nEpoch 3/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.7871 - accuracy: 0.7342 - val_loss: 1.1161 - val_accuracy: 0.4424\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.3843 - accuracy: 0.8960 - val_loss: 1.4035 - val_accuracy: 0.4065\nEpoch 5/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.1670 - accuracy: 0.9645 - val_loss: 1.7039 - val_accuracy: 0.3813\n \nIter: 9\n \nEpoch 1/5\n79/79 [==============================] - 1s 7ms/step - loss: 1.0969 - accuracy: 0.3479 - val_loss: 1.0926 - val_accuracy: 0.3705\nEpoch 2/5\n79/79 [==============================] - 0s 4ms/step - loss: 1.0417 - accuracy: 0.4532 - val_loss: 1.0525 - val_accuracy: 0.4245\nEpoch 3/5\n79/79 [==============================] - 0s 4ms/step - loss: 0.7827 - accuracy: 0.6835 - val_loss: 1.1212 - val_accuracy: 0.4245\nEpoch 4/5\n79/79 [==============================] - 0s 6ms/step - loss: 0.4434 - accuracy: 0.8597 - val_loss: 1.2981 - val_accuracy: 0.3957\nEpoch 5/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.2059 - accuracy: 0.9502 - val_loss: 1.6018 - val_accuracy: 0.3849\n \nIter: 10\n \nEpoch 1/5\n79/79 [==============================] - 1s 6ms/step - loss: 1.0974 - accuracy: 0.3587 - val_loss: 1.0933 - val_accuracy: 0.4101\nEpoch 2/5\n79/79 [==============================] - 0s 5ms/step - loss: 1.0765 - accuracy: 0.4081 - val_loss: 1.0704 - val_accuracy: 0.4281\nEpoch 3/5\n79/79 [==============================] - 0s 6ms/step - loss: 0.9067 - accuracy: 0.6513 - val_loss: 1.0709 - val_accuracy: 0.4496\nEpoch 4/5\n79/79 [==============================] - 0s 5ms/step - loss: 0.4997 - accuracy: 0.8597 - val_loss: 1.1898 - val_accuracy: 0.4532\nEpoch 5/5\n79/79 [==============================] - 
0s 4ms/step - loss: 0.2213 - accuracy: 0.9506 - val_loss: 1.4263 - val_accuracy: 0.4460\n \n"
],
[
"def media_std(val_accuracy):\n matrix_acc = np.array(val_accuracy)\n lista = []\n for i in range(len(matrix_acc)):\n lista.append(matrix_acc[i][-1])\n print(f' Fold: {i}\\t Ultimo valor acc: {lista[i]}')\n print(\"\")\n print(f' Média: {np.array(lista).mean()} \\t Std: {np.array(lista).std()}')\n \nmedia_std(val_accuracy)",
" Fold: 0\t Ultimo valor acc: 0.45519712567329407\n Fold: 1\t Ultimo valor acc: 0.4193548262119293\n Fold: 2\t Ultimo valor acc: 0.39426523447036743\n Fold: 3\t Ultimo valor acc: 0.4444444477558136\n Fold: 4\t Ultimo valor acc: 0.4301075339317322\n Fold: 5\t Ultimo valor acc: 0.4444444477558136\n Fold: 6\t Ultimo valor acc: 0.44802868366241455\n Fold: 7\t Ultimo valor acc: 0.38129496574401855\n Fold: 8\t Ultimo valor acc: 0.384892076253891\n Fold: 9\t Ultimo valor acc: 0.4460431635379791\n\n Média: 0.42480725049972534 \t Std: 0.026762210752962822\n"
],
[
"vectorize_layer = TextVectorization(\n max_tokens=15000,\n output_mode='int',\n output_sequence_length=len(max(df['text'])))\n\nvocab = set_array(df['text'])\n\nvectorize_layer.adapt(np.unique(vocab))\nlen(vectorize_layer.get_vocabulary())\n",
"_____no_output_____"
]
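,
[
"# Added usage sketch (not part of the original notebook; assumes tf and the\n# adapted vectorize_layer from the cells above, and the sample text is made up):\n# unknown tokens map to the OOV index and shorter texts are zero-padded to the\n# fixed output length.\nsample = tf.constant(['an example tweet'])\nvectorized_sample = vectorize_layer(sample)",
"_____no_output_____"
]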
],
[
[
"## CONV1D",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n vectorize_layer,\n tf.keras.layers.Embedding(\n input_dim=len(vectorize_layer.get_vocabulary()),\n output_dim=64,\n mask_zero=True),\n \n tf.keras.layers.Conv1D(32,6, activation='relu'),\n tf.keras.layers.MaxPooling1D(2),\n tf.keras.layers.Flatten(),\n \n tf.keras.layers.Dense(16, activation='relu'),\n tf.keras.layers.Dense(3, activation='softmax') \n])\n\nmodel.compile(\n optimizer= tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n)",
"_____no_output_____"
],
[
"X_train, X_test, Y_train, Y_test = train_test_split(Tweet_preprocessed, sentiment, test_size=0.3)",
"_____no_output_____"
],
[
"y_one = one.fit_transform(Y_train.reshape(-1,1))\ny_one_ = one.transform(Y_test.reshape(-1,1))",
"_____no_output_____"
],
[
"fit = model.fit(np.asarray(pre_X(X_train)), y_one, epochs=10, batch_size=128 ,validation_data=(np.asarray(pre_X(X_test)),y_one_))",
"Epoch 1/10\n16/16 [==============================] - 0s 27ms/step - loss: 1.0991 - accuracy: 0.3405 - val_loss: 1.0953 - val_accuracy: 0.3955\nEpoch 2/10\n16/16 [==============================] - 0s 17ms/step - loss: 1.0857 - accuracy: 0.3549 - val_loss: 1.0915 - val_accuracy: 0.3955\nEpoch 3/10\n16/16 [==============================] - 0s 20ms/step - loss: 1.0498 - accuracy: 0.4954 - val_loss: 1.0913 - val_accuracy: 0.4086\nEpoch 4/10\n16/16 [==============================] - 0s 17ms/step - loss: 0.9434 - accuracy: 0.8554 - val_loss: 1.0869 - val_accuracy: 0.3955\nEpoch 5/10\n16/16 [==============================] - 0s 19ms/step - loss: 0.7160 - accuracy: 0.9210 - val_loss: 1.0780 - val_accuracy: 0.4002\nEpoch 6/10\n16/16 [==============================] - 0s 18ms/step - loss: 0.4133 - accuracy: 0.9569 - val_loss: 1.1196 - val_accuracy: 0.3859\nEpoch 7/10\n16/16 [==============================] - 0s 22ms/step - loss: 0.1893 - accuracy: 0.9903 - val_loss: 1.1448 - val_accuracy: 0.3907\nEpoch 8/10\n16/16 [==============================] - 0s 21ms/step - loss: 0.0895 - accuracy: 0.9923 - val_loss: 1.2109 - val_accuracy: 0.3990\nEpoch 9/10\n16/16 [==============================] - 0s 17ms/step - loss: 0.0513 - accuracy: 0.9954 - val_loss: 1.2654 - val_accuracy: 0.3859\nEpoch 10/10\n16/16 [==============================] - 0s 16ms/step - loss: 0.0299 - accuracy: 0.9990 - val_loss: 1.3343 - val_accuracy: 0.3668\n"
],
[
"predicted = model.predict(X_test)\n\nprint(classification_report(np.argmax(y_one_, axis=1), np.argmax(predicted, axis=1)))",
" precision recall f1-score support\n\n 0 0.34 0.47 0.40 239\n 1 0.42 0.30 0.35 331\n 2 0.35 0.36 0.36 267\n\n accuracy 0.37 837\n macro avg 0.37 0.38 0.37 837\nweighted avg 0.37 0.37 0.36 837\n\n"
],
[
"kf = KFold(n_splits=10)\nval_accuracy = []\nindex = []\ncont = 0\n\nfor train_index, test_index in kf.split(Tweet_preprocessed):\n cont += 1\n \n model = tf.keras.Sequential([\n vectorize_layer,\n tf.keras.layers.Embedding(\n input_dim=len(vectorize_layer.get_vocabulary()),\n output_dim=64,\n mask_zero=True),\n \n tf.keras.layers.Conv1D(32,6, activation='relu'),\n tf.keras.layers.MaxPooling1D(2),\n tf.keras.layers.Flatten(),\n \n tf.keras.layers.Dense(16, activation='relu'),\n tf.keras.layers.Dense(3, activation='softmax') \n ])\n\n model.compile(\n optimizer= tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n )\n \n X_train, X_test = Tweet_preprocessed[train_index], Tweet_preprocessed[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n y_train_one = one.fit_transform(y_train.reshape(-1,1))\n y_teste_one = one.fit_transform(y_test.reshape(-1,1))\n \n print(\"Iter: \",cont)\n print(\" \")\n \n fit = model.fit(X_train, y_train_one, epochs=5, validation_data=(X_test, y_teste_one))\n print(\" \")\n val_accuracy.append(fit.history['val_accuracy'])\n \n index.append((train_index,test_index))",
"Iter: 1\n \nEpoch 1/5\n79/79 [==============================] - 1s 10ms/step - loss: 1.0959 - accuracy: 0.3660 - val_loss: 1.1224 - val_accuracy: 0.2545\nEpoch 2/5\n79/79 [==============================] - 1s 10ms/step - loss: 0.9892 - accuracy: 0.5195 - val_loss: 1.1370 - val_accuracy: 0.3082\nEpoch 3/5\n79/79 [==============================] - 1s 11ms/step - loss: 0.5586 - accuracy: 0.7444 - val_loss: 1.2958 - val_accuracy: 0.3656\nEpoch 4/5\n79/79 [==============================] - 1s 10ms/step - loss: 0.2732 - accuracy: 0.9286 - val_loss: 1.4073 - val_accuracy: 0.3943\nEpoch 5/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.0895 - accuracy: 0.9860 - val_loss: 1.6374 - val_accuracy: 0.4158\n \nIter: 2\n \nEpoch 1/5\n79/79 [==============================] - 1s 11ms/step - loss: 1.0976 - accuracy: 0.3549 - val_loss: 1.0929 - val_accuracy: 0.3978\nEpoch 2/5\n79/79 [==============================] - 1s 10ms/step - loss: 0.9959 - accuracy: 0.5614 - val_loss: 1.1077 - val_accuracy: 0.3548\nEpoch 3/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.4737 - accuracy: 0.8768 - val_loss: 1.3079 - val_accuracy: 0.3369\nEpoch 4/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.1019 - accuracy: 0.9841 - val_loss: 1.5787 - val_accuracy: 0.3477\nEpoch 5/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.0265 - accuracy: 0.9972 - val_loss: 1.6540 - val_accuracy: 0.3799\n \nIter: 3\n \nEpoch 1/5\n79/79 [==============================] - 1s 10ms/step - loss: 1.0969 - accuracy: 0.3624 - val_loss: 1.0981 - val_accuracy: 0.3441\nEpoch 2/5\n79/79 [==============================] - 1s 10ms/step - loss: 1.0188 - accuracy: 0.4864 - val_loss: 1.0678 - val_accuracy: 0.4122\nEpoch 3/5\n79/79 [==============================] - 1s 10ms/step - loss: 0.5493 - accuracy: 0.8258 - val_loss: 1.2338 - val_accuracy: 0.4194\nEpoch 4/5\n79/79 [==============================] - 1s 11ms/step - loss: 0.1345 - accuracy: 0.9729 - val_loss: 1.4352 - val_accuracy: 0.4158\nEpoch 5/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.0380 - accuracy: 0.9912 - val_loss: 1.5715 - val_accuracy: 0.4014\n \nIter: 4\n \nEpoch 1/5\n79/79 [==============================] - 1s 11ms/step - loss: 1.0978 - accuracy: 0.3565 - val_loss: 1.0966 - val_accuracy: 0.3763\nEpoch 2/5\n79/79 [==============================] - 1s 9ms/step - loss: 1.0365 - accuracy: 0.5455 - val_loss: 1.0928 - val_accuracy: 0.3907\nEpoch 3/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.5154 - accuracy: 0.8808 - val_loss: 1.1976 - val_accuracy: 0.4086\nEpoch 4/5\n79/79 [==============================] - 1s 10ms/step - loss: 0.0918 - accuracy: 0.9876 - val_loss: 1.3727 - val_accuracy: 0.3978\nEpoch 5/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.0247 - accuracy: 0.9976 - val_loss: 1.5368 - val_accuracy: 0.3477\n \nIter: 5\n \nEpoch 1/5\n79/79 [==============================] - 1s 11ms/step - loss: 1.0980 - accuracy: 0.3509 - val_loss: 1.0972 - val_accuracy: 0.3584\nEpoch 2/5\n79/79 [==============================] - 1s 8ms/step - loss: 1.0374 - accuracy: 0.5144 - val_loss: 1.0993 - val_accuracy: 0.3728\nEpoch 3/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.5783 - accuracy: 0.8565 - val_loss: 1.1882 - val_accuracy: 0.4194\nEpoch 4/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.1273 - accuracy: 0.9777 - val_loss: 1.3716 - val_accuracy: 0.4229\nEpoch 5/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.0296 - 
accuracy: 0.9944 - val_loss: 1.4946 - val_accuracy: 0.4444\n \nIter: 6\n \nEpoch 1/5\n79/79 [==============================] - 1s 10ms/step - loss: 1.0977 - accuracy: 0.3608 - val_loss: 1.0928 - val_accuracy: 0.3799\nEpoch 2/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.9746 - accuracy: 0.5526 - val_loss: 1.0921 - val_accuracy: 0.4158\nEpoch 3/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.4551 - accuracy: 0.8756 - val_loss: 1.3333 - val_accuracy: 0.4050\nEpoch 4/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.1281 - accuracy: 0.9753 - val_loss: 1.8594 - val_accuracy: 0.3477\nEpoch 5/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.0366 - accuracy: 0.9960 - val_loss: 2.0550 - val_accuracy: 0.3799\n \nIter: 7\n \nEpoch 1/5\n79/79 [==============================] - 1s 10ms/step - loss: 1.0981 - accuracy: 0.3545 - val_loss: 1.0989 - val_accuracy: 0.3548\nEpoch 2/5\n79/79 [==============================] - 1s 8ms/step - loss: 1.0187 - accuracy: 0.5100 - val_loss: 1.0910 - val_accuracy: 0.3656\nEpoch 3/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.5237 - accuracy: 0.8676 - val_loss: 1.2879 - val_accuracy: 0.3943\nEpoch 4/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.1157 - accuracy: 0.9789 - val_loss: 1.4948 - val_accuracy: 0.3871\nEpoch 5/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.0336 - accuracy: 0.9948 - val_loss: 1.6699 - val_accuracy: 0.3799\n \nIter: 8\n \nEpoch 1/5\n79/79 [==============================] - 1s 10ms/step - loss: 1.0988 - accuracy: 0.3543 - val_loss: 1.0933 - val_accuracy: 0.4065\nEpoch 2/5\n79/79 [==============================] - 1s 8ms/step - loss: 1.0587 - accuracy: 0.4069 - val_loss: 1.0802 - val_accuracy: 0.3957\nEpoch 3/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.5762 - accuracy: 0.8585 - val_loss: 1.2272 - val_accuracy: 0.3957\nEpoch 4/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.1166 - accuracy: 0.9745 - val_loss: 1.4908 - val_accuracy: 0.3957\nEpoch 5/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.0316 - accuracy: 0.9960 - val_loss: 1.6219 - val_accuracy: 0.3525\n \nIter: 9\n \nEpoch 1/5\n79/79 [==============================] - 1s 11ms/step - loss: 1.0975 - accuracy: 0.3567 - val_loss: 1.0956 - val_accuracy: 0.3705\nEpoch 2/5\n79/79 [==============================] - 1s 8ms/step - loss: 1.0257 - accuracy: 0.5584 - val_loss: 1.0933 - val_accuracy: 0.3921\nEpoch 3/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.5462 - accuracy: 0.8996 - val_loss: 1.1831 - val_accuracy: 0.3921\nEpoch 4/5\n79/79 [==============================] - 1s 8ms/step - loss: 0.1158 - accuracy: 0.9813 - val_loss: 1.3578 - val_accuracy: 0.4245\nEpoch 5/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.0304 - accuracy: 0.9980 - val_loss: 1.5580 - val_accuracy: 0.3885\n \nIter: 10\n \nEpoch 1/5\n79/79 [==============================] - 1s 12ms/step - loss: 1.0977 - accuracy: 0.3547 - val_loss: 1.0981 - val_accuracy: 0.3705\nEpoch 2/5\n79/79 [==============================] - 1s 9ms/step - loss: 1.0142 - accuracy: 0.5680 - val_loss: 1.0990 - val_accuracy: 0.3633\nEpoch 3/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.5088 - accuracy: 0.8653 - val_loss: 1.2345 - val_accuracy: 0.3777\nEpoch 4/5\n79/79 [==============================] - 1s 9ms/step - loss: 0.1198 - accuracy: 0.9765 - val_loss: 1.4055 - val_accuracy: 0.3849\nEpoch 5/5\n79/79 
[==============================] - 1s 9ms/step - loss: 0.0282 - accuracy: 0.9984 - val_loss: 1.6683 - val_accuracy: 0.3201\n \n"
],
[
"media_std(val_accuracy)",
" Fold: 0\t Ultimo valor acc: 0.41577062010765076\n Fold: 1\t Ultimo valor acc: 0.379928320646286\n Fold: 2\t Ultimo valor acc: 0.40143370628356934\n Fold: 3\t Ultimo valor acc: 0.3476702570915222\n Fold: 4\t Ultimo valor acc: 0.4444444477558136\n Fold: 5\t Ultimo valor acc: 0.379928320646286\n Fold: 6\t Ultimo valor acc: 0.379928320646286\n Fold: 7\t Ultimo valor acc: 0.3525179922580719\n Fold: 8\t Ultimo valor acc: 0.3884892165660858\n Fold: 9\t Ultimo valor acc: 0.3201438784599304\n\n Média: 0.3810255080461502 \t Std: 0.03367019710445899\n"
]
],
[
[
"## BDR",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n vectorize_layer,\n tf.keras.layers.Embedding(\n input_dim=len(vectorize_layer.get_vocabulary()),\n output_dim=64,mask_zero=True),\n \n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50)),\n \n tf.keras.layers.Dense(16, activation='relu'),\n tf.keras.layers.Dense(3, activation='softmax')\n])\n\nmodel.compile(\n optimizer= tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n)",
"_____no_output_____"
],
[
"X_train, X_test, Y_train, Y_test = train_test_split(Tweet_preprocessed, sentiment, test_size=0.3)",
"_____no_output_____"
],
[
"y_one = one.fit_transform(Y_train.reshape(-1,1))\ny_one_ = one.transform(Y_test.reshape(-1,1))",
"_____no_output_____"
],
[
"fit = model.fit(np.asarray(pre_X(X_train)), y_one, epochs=10, batch_size=128 ,validation_data=(np.asarray(pre_X(X_test)),y_one_))",
"Epoch 1/10\n16/16 [==============================] - 3s 199ms/step - loss: 1.0968 - accuracy: 0.3667 - val_loss: 1.0975 - val_accuracy: 0.3417\nEpoch 2/10\n16/16 [==============================] - 1s 86ms/step - loss: 1.0741 - accuracy: 0.3754 - val_loss: 1.0934 - val_accuracy: 0.3417\nEpoch 3/10\n16/16 [==============================] - 1s 79ms/step - loss: 0.9654 - accuracy: 0.4933 - val_loss: 1.0995 - val_accuracy: 0.4146\nEpoch 4/10\n16/16 [==============================] - 1s 84ms/step - loss: 0.5773 - accuracy: 0.8754 - val_loss: 1.7090 - val_accuracy: 0.4170\nEpoch 5/10\n16/16 [==============================] - 1s 88ms/step - loss: 0.2498 - accuracy: 0.9605 - val_loss: 2.0743 - val_accuracy: 0.4265\nEpoch 6/10\n16/16 [==============================] - 1s 84ms/step - loss: 0.1204 - accuracy: 0.9821 - val_loss: 2.6661 - val_accuracy: 0.4205\nEpoch 7/10\n16/16 [==============================] - 1s 84ms/step - loss: 0.0669 - accuracy: 0.9903 - val_loss: 3.1340 - val_accuracy: 0.4098\nEpoch 8/10\n16/16 [==============================] - 1s 72ms/step - loss: 0.0414 - accuracy: 0.9944 - val_loss: 3.2479 - val_accuracy: 0.4014\nEpoch 9/10\n16/16 [==============================] - 1s 81ms/step - loss: 0.0411 - accuracy: 0.9928 - val_loss: 3.9525 - val_accuracy: 0.4194\nEpoch 10/10\n16/16 [==============================] - 1s 73ms/step - loss: 0.0284 - accuracy: 0.9954 - val_loss: 3.3313 - val_accuracy: 0.4074\n"
],
[
"predicted = model.predict(X_test)\n\nprint(classification_report(np.argmax(y_one_, axis=1), np.argmax(predicted, axis=1)))",
" precision recall f1-score support\n\n 0 0.44 0.47 0.45 278\n 1 0.40 0.30 0.35 286\n 2 0.38 0.45 0.41 273\n\n accuracy 0.41 837\n macro avg 0.41 0.41 0.40 837\nweighted avg 0.41 0.41 0.40 837\n\n"
],
[
"kf = KFold(n_splits=10)\nval_accuracy = []\nindex = []\ncont = 0\n\nfor train_index, test_index in kf.split(Tweet_preprocessed):\n cont += 1\n \n model = tf.keras.Sequential([\n vectorize_layer,\n tf.keras.layers.Embedding(\n input_dim=len(vectorize_layer.get_vocabulary()),\n output_dim=64,mask_zero=True),\n \n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(50)),\n \n tf.keras.layers.Dense(16, activation='relu'),\n tf.keras.layers.Dense(3, activation='softmax')\n \n ])\n\n model.compile(\n \n optimizer= tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n \n )\n \n X_train, X_test = Tweet_preprocessed[train_index], Tweet_preprocessed[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n y_train_one = one.fit_transform(y_train.reshape(-1,1))\n y_teste_one = one.fit_transform(y_test.reshape(-1,1))\n \n print(\"Iter: \",cont)\n print(\" \")\n \n fit = model.fit(X_train, y_train_one, epochs=5, validation_data=(X_test, y_teste_one))\n print(\" \")\n val_accuracy.append(fit.history['val_accuracy'])\n \n index.append((train_index,test_index))",
"Iter: 1\n \nEpoch 1/5\n79/79 [==============================] - 4s 56ms/step - loss: 1.0952 - accuracy: 0.3700 - val_loss: 1.1056 - val_accuracy: 0.2545\nEpoch 2/5\n79/79 [==============================] - 3s 38ms/step - loss: 0.8979 - accuracy: 0.6025 - val_loss: 1.1874 - val_accuracy: 0.4265\nEpoch 3/5\n79/79 [==============================] - 3s 35ms/step - loss: 0.3146 - accuracy: 0.9055 - val_loss: 1.5802 - val_accuracy: 0.4516\nEpoch 4/5\n79/79 [==============================] - 3s 35ms/step - loss: 0.0977 - accuracy: 0.9749 - val_loss: 2.1116 - val_accuracy: 0.4516\nEpoch 5/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.0385 - accuracy: 0.9912 - val_loss: 2.4603 - val_accuracy: 0.4624\n \nIter: 2\n \nEpoch 1/5\n79/79 [==============================] - 4s 50ms/step - loss: 1.0932 - accuracy: 0.3620 - val_loss: 1.0761 - val_accuracy: 0.4409\nEpoch 2/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.9010 - accuracy: 0.5682 - val_loss: 1.1430 - val_accuracy: 0.4624\nEpoch 3/5\n79/79 [==============================] - 3s 32ms/step - loss: 0.4223 - accuracy: 0.8589 - val_loss: 1.7528 - val_accuracy: 0.4122\nEpoch 4/5\n79/79 [==============================] - 3s 32ms/step - loss: 0.1874 - accuracy: 0.9510 - val_loss: 2.1812 - val_accuracy: 0.4229\nEpoch 5/5\n79/79 [==============================] - 3s 32ms/step - loss: 0.0565 - accuracy: 0.9904 - val_loss: 2.8609 - val_accuracy: 0.4050\n \nIter: 3\n \nEpoch 1/5\n79/79 [==============================] - 4s 53ms/step - loss: 1.0947 - accuracy: 0.3620 - val_loss: 1.0953 - val_accuracy: 0.3441\nEpoch 2/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.8667 - accuracy: 0.6515 - val_loss: 1.2722 - val_accuracy: 0.3728\nEpoch 3/5\n79/79 [==============================] - 3s 35ms/step - loss: 0.3373 - accuracy: 0.8991 - val_loss: 1.7400 - val_accuracy: 0.3835\nEpoch 4/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.1100 - accuracy: 0.9725 - val_loss: 2.3952 - val_accuracy: 0.3907\nEpoch 5/5\n79/79 [==============================] - 3s 33ms/step - loss: 0.0489 - accuracy: 0.9896 - val_loss: 2.4253 - val_accuracy: 0.3978\n \nIter: 4\n \nEpoch 1/5\n79/79 [==============================] - 4s 56ms/step - loss: 1.0951 - accuracy: 0.3620 - val_loss: 1.0779 - val_accuracy: 0.3728\nEpoch 2/5\n79/79 [==============================] - 3s 33ms/step - loss: 0.9096 - accuracy: 0.6148 - val_loss: 1.1226 - val_accuracy: 0.4265\nEpoch 3/5\n79/79 [==============================] - 3s 33ms/step - loss: 0.4026 - accuracy: 0.8688 - val_loss: 1.9122 - val_accuracy: 0.3799\nEpoch 4/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.1533 - accuracy: 0.9629 - val_loss: 2.2334 - val_accuracy: 0.3799\nEpoch 5/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.0575 - accuracy: 0.9896 - val_loss: 2.8029 - val_accuracy: 0.3656\n \nIter: 5\n \nEpoch 1/5\n79/79 [==============================] - 5s 58ms/step - loss: 1.0912 - accuracy: 0.3732 - val_loss: 1.0821 - val_accuracy: 0.3907\nEpoch 2/5\n79/79 [==============================] - 3s 37ms/step - loss: 0.8724 - accuracy: 0.5921 - val_loss: 1.1954 - val_accuracy: 0.4014\nEpoch 3/5\n79/79 [==============================] - 3s 37ms/step - loss: 0.4795 - accuracy: 0.8393 - val_loss: 1.4443 - val_accuracy: 0.4373\nEpoch 4/5\n79/79 [==============================] - 3s 38ms/step - loss: 0.1586 - accuracy: 0.9545 - val_loss: 1.9816 - val_accuracy: 0.4480\nEpoch 5/5\n79/79 [==============================] - 3s 37ms/step - loss: 
0.0468 - accuracy: 0.9888 - val_loss: 2.7055 - val_accuracy: 0.4301\n \nIter: 6\n \nEpoch 1/5\n79/79 [==============================] - 4s 55ms/step - loss: 1.0931 - accuracy: 0.3561 - val_loss: 1.0835 - val_accuracy: 0.3799\nEpoch 2/5\n79/79 [==============================] - 3s 35ms/step - loss: 0.8544 - accuracy: 0.6224 - val_loss: 1.1660 - val_accuracy: 0.4086\nEpoch 3/5\n79/79 [==============================] - 3s 37ms/step - loss: 0.3097 - accuracy: 0.8999 - val_loss: 1.6130 - val_accuracy: 0.4337\nEpoch 4/5\n79/79 [==============================] - 3s 34ms/step - loss: 0.0947 - accuracy: 0.9765 - val_loss: 2.1329 - val_accuracy: 0.4516\nEpoch 5/5\n79/79 [==============================] - 3s 36ms/step - loss: 0.0371 - accuracy: 0.9912 - val_loss: 2.4613 - val_accuracy: 0.4301\n \nIter: 7\n \nEpoch 1/5\n79/79 [==============================] - 4s 56ms/step - loss: 1.0959 - accuracy: 0.3652 - val_loss: 1.0939 - val_accuracy: 0.3763\nEpoch 2/5\n79/79 [==============================] - 3s 38ms/step - loss: 0.8842 - accuracy: 0.6687 - val_loss: 1.1756 - val_accuracy: 0.3978\nEpoch 3/5\n79/79 [==============================] - 3s 37ms/step - loss: 0.3383 - accuracy: 0.9023 - val_loss: 2.0765 - val_accuracy: 0.4122\nEpoch 4/5\n79/79 [==============================] - 3s 37ms/step - loss: 0.1296 - accuracy: 0.9677 - val_loss: 2.1791 - val_accuracy: 0.4050\nEpoch 5/5\n79/79 [==============================] - 3s 39ms/step - loss: 0.0515 - accuracy: 0.9896 - val_loss: 2.7134 - val_accuracy: 0.3871\n \nIter: 8\n \nEpoch 1/5\n79/79 [==============================] - 5s 60ms/step - loss: 1.0926 - accuracy: 0.3707 - val_loss: 1.0653 - val_accuracy: 0.4317\nEpoch 2/5\n79/79 [==============================] - 3s 38ms/step - loss: 0.8644 - accuracy: 0.6501 - val_loss: 1.2263 - val_accuracy: 0.3921\nEpoch 3/5\n79/79 [==============================] - 3s 39ms/step - loss: 0.3651 - accuracy: 0.8828 - val_loss: 1.7774 - val_accuracy: 0.3993\nEpoch 4/5\n79/79 [==============================] - 3s 40ms/step - loss: 0.1314 - accuracy: 0.9697 - val_loss: 2.2333 - val_accuracy: 0.3813\nEpoch 5/5\n79/79 [==============================] - 3s 39ms/step - loss: 0.0563 - accuracy: 0.9884 - val_loss: 2.6010 - val_accuracy: 0.3885\n \nIter: 9\n \nEpoch 1/5\n79/79 [==============================] - 5s 58ms/step - loss: 1.0930 - accuracy: 0.3599 - val_loss: 1.0718 - val_accuracy: 0.3741\nEpoch 2/5\n79/79 [==============================] - 3s 39ms/step - loss: 0.8766 - accuracy: 0.6265 - val_loss: 1.1484 - val_accuracy: 0.3741\nEpoch 3/5\n79/79 [==============================] - 3s 43ms/step - loss: 0.3574 - accuracy: 0.8880 - val_loss: 1.6156 - val_accuracy: 0.3633\nEpoch 4/5\n79/79 [==============================] - 4s 53ms/step - loss: 0.1231 - accuracy: 0.9685 - val_loss: 2.4061 - val_accuracy: 0.3741\nEpoch 5/5\n79/79 [==============================] - 4s 48ms/step - loss: 0.0407 - accuracy: 0.9892 - val_loss: 2.8241 - val_accuracy: 0.3957\n \nIter: 10\n \nEpoch 1/5\n79/79 [==============================] - 6s 78ms/step - loss: 1.0948 - accuracy: 0.3579 - val_loss: 1.0840 - val_accuracy: 0.4245\nEpoch 2/5\n79/79 [==============================] - 4s 50ms/step - loss: 0.8703 - accuracy: 0.6652 - val_loss: 1.1723 - val_accuracy: 0.4568\nEpoch 3/5\n79/79 [==============================] - 4s 50ms/step - loss: 0.3350 - accuracy: 0.8964 - val_loss: 1.6151 - val_accuracy: 0.4317\nEpoch 4/5\n79/79 [==============================] - 4s 53ms/step - loss: 0.1233 - accuracy: 0.9681 - val_loss: 2.1078 - val_accuracy: 
0.4173\nEpoch 5/5\n79/79 [==============================] - 4s 53ms/step - loss: 0.0539 - accuracy: 0.9884 - val_loss: 2.2953 - val_accuracy: 0.4460\n \n"
],
[
"media_std(val_accuracy)",
" Fold: 0\t Ultimo valor acc: 0.46236559748649597\n Fold: 1\t Ultimo valor acc: 0.4050179123878479\n Fold: 2\t Ultimo valor acc: 0.3978494703769684\n Fold: 3\t Ultimo valor acc: 0.3655914068222046\n Fold: 4\t Ultimo valor acc: 0.4301075339317322\n Fold: 5\t Ultimo valor acc: 0.4301075339317322\n Fold: 6\t Ultimo valor acc: 0.3870967626571655\n Fold: 7\t Ultimo valor acc: 0.3884892165660858\n Fold: 8\t Ultimo valor acc: 0.3956834673881531\n Fold: 9\t Ultimo valor acc: 0.4460431635379791\n\n Média: 0.4108352065086365 \t Std: 0.028600228122043644\n"
],
[
"plt.figure(figsize=(12,6))\nplt.plot(fit.history['loss'], label='Loss', color='darkred')\nplt.plot(fit.history['val_loss'], label='Val Loss', color='green')\nplt.legend()\nplt.grid()\nplt.ylabel('Loss')\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,6))\nplt.plot(fit.history['accuracy'], label='accuracy', color='darkred')\nplt.plot(fit.history['val_accuracy'], label='Val accuracy', color='green')\nplt.legend()\nplt.grid()\nplt.ylabel('Accuracy')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## LSTM",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n vectorize_layer,\n tf.keras.layers.Embedding(\n input_dim=len(vectorize_layer.get_vocabulary()),\n output_dim=64,mask_zero=True),\n \n tf.keras.layers.LSTM(50, activation='relu' ,return_sequences=True),\n tf.keras.layers.Dropout(0.3),\n \n tf.keras.layers.LSTM(10 , activation='tanh'),\n tf.keras.layers.Dropout(0.2),\n \n tf.keras.layers.Dense(3, activation='softmax')\n ])\n\nmodel.compile(\n optimizer= tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n)",
"_____no_output_____"
],
[
"X_train, X_test, Y_train, Y_test = train_test_split(Tweet_preprocessed, sentiment, test_size=0.3)",
"_____no_output_____"
],
[
"y_one = one.fit_transform(Y_train.reshape(-1,1))\ny_one_ = one.transform(Y_test.reshape(-1,1))",
"_____no_output_____"
],
[
"fit = model.fit(np.asarray(pre_X(X_train)), y_one, epochs=10, batch_size=128 ,validation_data=(np.asarray(pre_X(X_test)),y_one_))",
"Epoch 1/10\n16/16 [==============================] - 4s 258ms/step - loss: 1.0973 - accuracy: 0.3456 - val_loss: 1.0959 - val_accuracy: 0.3620\nEpoch 2/10\n16/16 [==============================] - 2s 154ms/step - loss: 1.0884 - accuracy: 0.3667 - val_loss: 1.0934 - val_accuracy: 0.3620\nEpoch 3/10\n16/16 [==============================] - 2s 147ms/step - loss: 1.0596 - accuracy: 0.4056 - val_loss: 1.0822 - val_accuracy: 0.3883\nEpoch 4/10\n16/16 [==============================] - 3s 178ms/step - loss: 0.8860 - accuracy: 0.6497 - val_loss: 1.1403 - val_accuracy: 0.4325\nEpoch 5/10\n16/16 [==============================] - 3s 180ms/step - loss: 0.6325 - accuracy: 0.8533 - val_loss: 1.3631 - val_accuracy: 0.4110\nEpoch 6/10\n16/16 [==============================] - 3s 184ms/step - loss: 0.3978 - accuracy: 0.9164 - val_loss: 1.7682 - val_accuracy: 0.4026\nEpoch 7/10\n16/16 [==============================] - 3s 201ms/step - loss: 0.3057 - accuracy: 0.9359 - val_loss: 1.9006 - val_accuracy: 0.3978\nEpoch 8/10\n16/16 [==============================] - 3s 190ms/step - loss: 0.2282 - accuracy: 0.9626 - val_loss: 1.9984 - val_accuracy: 0.3943\nEpoch 9/10\n16/16 [==============================] - 3s 184ms/step - loss: 0.1913 - accuracy: 0.9662 - val_loss: 2.0321 - val_accuracy: 0.4074\nEpoch 10/10\n16/16 [==============================] - 4s 219ms/step - loss: 0.2242 - accuracy: 0.9528 - val_loss: 1.9250 - val_accuracy: 0.4205\n"
],
[
"predicted = model.predict(X_test)\n\nprint(classification_report(np.argmax(y_one_, axis=1), np.argmax(predicted, axis=1)))",
" precision recall f1-score support\n\n 0 0.42 0.46 0.44 260\n 1 0.45 0.38 0.41 303\n 2 0.39 0.43 0.41 274\n\n accuracy 0.42 837\n macro avg 0.42 0.42 0.42 837\nweighted avg 0.42 0.42 0.42 837\n\n"
],
[
"kf = KFold(n_splits=10)\nval_accuracy = []\nindex = []\ncont = 0\n\nfor train_index, test_index in kf.split(Tweet_preprocessed):\n cont += 1\n \n model = tf.keras.Sequential([\n vectorize_layer,\n tf.keras.layers.Embedding(\n input_dim=len(vectorize_layer.get_vocabulary()),\n output_dim=64,mask_zero=True),\n \n tf.keras.layers.LSTM(50, activation='relu' ,return_sequences=True),\n tf.keras.layers.Dropout(0.3),\n \n tf.keras.layers.LSTM(10 , activation='tanh'),\n tf.keras.layers.Dropout(0.2),\n \n tf.keras.layers.Dense(3, activation='softmax')\n ])\n\n model.compile(\n optimizer= tf.keras.optimizers.Adam(),\n loss=tf.keras.losses.categorical_crossentropy,\n metrics=['accuracy']\n )\n \n X_train, X_test = Tweet_preprocessed[train_index], Tweet_preprocessed[test_index]\n y_train, y_test = sentiment[train_index], sentiment[test_index]\n y_train_one = one.fit_transform(y_train.reshape(-1,1))\n y_teste_one = one.fit_transform(y_test.reshape(-1,1))\n \n print(\"Iter: \",cont)\n print(\" \")\n \n fit = model.fit(X_train, y_train_one, epochs=5, validation_data=(X_test, y_teste_one))\n print(\" \")\n val_accuracy.append(fit.history['val_accuracy'])\n \n index.append((train_index,test_index))",
"Iter: 1\n \nEpoch 1/5\n79/79 [==============================] - 6s 80ms/step - loss: 1.0943 - accuracy: 0.3680 - val_loss: 1.1176 - val_accuracy: 0.2545\nEpoch 2/5\n79/79 [==============================] - 5s 64ms/step - loss: 1.0172 - accuracy: 0.4860 - val_loss: 1.1124 - val_accuracy: 0.3835\nEpoch 3/5\n79/79 [==============================] - 5s 62ms/step - loss: 0.6372 - accuracy: 0.7807 - val_loss: 1.2164 - val_accuracy: 0.4588\nEpoch 4/5\n79/79 [==============================] - 5s 61ms/step - loss: 0.2999 - accuracy: 0.9147 - val_loss: 1.4908 - val_accuracy: 0.4624\nEpoch 5/5\n79/79 [==============================] - 5s 64ms/step - loss: 0.1645 - accuracy: 0.9589 - val_loss: 1.8743 - val_accuracy: 0.4624\n \nIter: 2\n \nEpoch 1/5\n79/79 [==============================] - 6s 79ms/step - loss: 1.0966 - accuracy: 0.3557 - val_loss: 1.0917 - val_accuracy: 0.3978\nEpoch 2/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.9965 - accuracy: 0.5243 - val_loss: 1.1384 - val_accuracy: 0.3799\nEpoch 3/5\n79/79 [==============================] - 5s 61ms/step - loss: 0.5752 - accuracy: 0.8206 - val_loss: 1.4016 - val_accuracy: 0.3871\nEpoch 4/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.2533 - accuracy: 0.9330 - val_loss: 1.7938 - val_accuracy: 0.3978\nEpoch 5/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.1458 - accuracy: 0.9665 - val_loss: 1.9578 - val_accuracy: 0.3978\n \nIter: 3\n \nEpoch 1/5\n79/79 [==============================] - 5s 70ms/step - loss: 1.0950 - accuracy: 0.3553 - val_loss: 1.0949 - val_accuracy: 0.3441\nEpoch 2/5\n79/79 [==============================] - 5s 58ms/step - loss: 0.9926 - accuracy: 0.5100 - val_loss: 1.1222 - val_accuracy: 0.3978\nEpoch 3/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.6315 - accuracy: 0.7767 - val_loss: 1.3194 - val_accuracy: 0.3871\nEpoch 4/5\n79/79 [==============================] - 5s 58ms/step - loss: 0.3351 - accuracy: 0.9099 - val_loss: 1.7494 - val_accuracy: 0.3943\nEpoch 5/5\n79/79 [==============================] - 5s 58ms/step - loss: 0.1752 - accuracy: 0.9593 - val_loss: 2.0358 - val_accuracy: 0.3835\n \nIter: 4\n \nEpoch 1/5\n79/79 [==============================] - 6s 71ms/step - loss: 1.0959 - accuracy: 0.3612 - val_loss: 1.0897 - val_accuracy: 0.3763\nEpoch 2/5\n79/79 [==============================] - 5s 59ms/step - loss: 1.0073 - accuracy: 0.4916 - val_loss: 1.1488 - val_accuracy: 0.3333\nEpoch 3/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.6217 - accuracy: 0.7659 - val_loss: 1.2629 - val_accuracy: 0.4373\nEpoch 4/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.3350 - accuracy: 0.9027 - val_loss: 1.6913 - val_accuracy: 0.4086\nEpoch 5/5\n79/79 [==============================] - 5s 58ms/step - loss: 0.1593 - accuracy: 0.9589 - val_loss: 2.0617 - val_accuracy: 0.4301\n \nIter: 5\n \nEpoch 1/5\n79/79 [==============================] - 5s 68ms/step - loss: 1.0967 - accuracy: 0.3604 - val_loss: 1.0950 - val_accuracy: 0.3584\nEpoch 2/5\n79/79 [==============================] - 4s 56ms/step - loss: 1.0455 - accuracy: 0.4661 - val_loss: 1.1036 - val_accuracy: 0.3513\nEpoch 3/5\n79/79 [==============================] - 4s 57ms/step - loss: 0.7614 - accuracy: 0.7229 - val_loss: 1.2513 - val_accuracy: 0.4050\nEpoch 4/5\n79/79 [==============================] - 4s 57ms/step - loss: 0.4427 - accuracy: 0.8680 - val_loss: 1.6525 - val_accuracy: 0.4588\nEpoch 5/5\n79/79 [==============================] - 5s 58ms/step - loss: 
0.2941 - accuracy: 0.9211 - val_loss: 1.7789 - val_accuracy: 0.4229\n \nIter: 6\n \nEpoch 1/5\n79/79 [==============================] - 6s 74ms/step - loss: 1.0975 - accuracy: 0.3608 - val_loss: 1.0927 - val_accuracy: 0.3799\nEpoch 2/5\n79/79 [==============================] - 5s 59ms/step - loss: 1.0305 - accuracy: 0.4809 - val_loss: 1.1556 - val_accuracy: 0.3799\nEpoch 3/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.6262 - accuracy: 0.7835 - val_loss: 1.3605 - val_accuracy: 0.4301\nEpoch 4/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.3341 - accuracy: 0.9183 - val_loss: 1.7965 - val_accuracy: 0.4552\nEpoch 5/5\n79/79 [==============================] - 5s 58ms/step - loss: 0.2279 - accuracy: 0.9458 - val_loss: 1.8243 - val_accuracy: 0.4265\n \nIter: 7\n \nEpoch 1/5\n79/79 [==============================] - 6s 71ms/step - loss: 1.0964 - accuracy: 0.3628 - val_loss: 1.0951 - val_accuracy: 0.3548\nEpoch 2/5\n79/79 [==============================] - 5s 60ms/step - loss: 1.0064 - accuracy: 0.4972 - val_loss: 1.0832 - val_accuracy: 0.4265\nEpoch 3/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.5620 - accuracy: 0.8262 - val_loss: 1.3012 - val_accuracy: 0.4229\nEpoch 4/5\n79/79 [==============================] - 5s 58ms/step - loss: 0.2557 - accuracy: 0.9330 - val_loss: 1.8862 - val_accuracy: 0.4050\nEpoch 5/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.1552 - accuracy: 0.9661 - val_loss: 1.9435 - val_accuracy: 0.4158\n \nIter: 8\n \nEpoch 1/5\n79/79 [==============================] - 6s 76ms/step - loss: 1.0962 - accuracy: 0.3631 - val_loss: 1.0879 - val_accuracy: 0.4065\nEpoch 2/5\n79/79 [==============================] - 5s 61ms/step - loss: 1.0088 - accuracy: 0.5078 - val_loss: 1.0892 - val_accuracy: 0.3993\nEpoch 3/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.6707 - accuracy: 0.7872 - val_loss: 1.2371 - val_accuracy: 0.4245\nEpoch 4/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.3604 - accuracy: 0.9047 - val_loss: 1.5666 - val_accuracy: 0.3633\nEpoch 5/5\n79/79 [==============================] - 5s 59ms/step - loss: 0.1769 - accuracy: 0.9597 - val_loss: 2.0826 - val_accuracy: 0.3921\n \nIter: 9\n \nEpoch 1/5\n79/79 [==============================] - 6s 73ms/step - loss: 1.0967 - accuracy: 0.3543 - val_loss: 1.0912 - val_accuracy: 0.3705\nEpoch 2/5\n79/79 [==============================] - 5s 61ms/step - loss: 1.0030 - accuracy: 0.4958 - val_loss: 1.0899 - val_accuracy: 0.4065\nEpoch 3/5\n79/79 [==============================] - 5s 61ms/step - loss: 0.6139 - accuracy: 0.8063 - val_loss: 1.3253 - val_accuracy: 0.4101\nEpoch 4/5\n79/79 [==============================] - 5s 63ms/step - loss: 0.3036 - accuracy: 0.9091 - val_loss: 1.6887 - val_accuracy: 0.4029\nEpoch 5/5\n79/79 [==============================] - 5s 60ms/step - loss: 0.1564 - accuracy: 0.9562 - val_loss: 1.9172 - val_accuracy: 0.3885\n \nIter: 10\n \nEpoch 1/5\n79/79 [==============================] - 6s 71ms/step - loss: 1.0954 - accuracy: 0.3543 - val_loss: 1.0923 - val_accuracy: 0.4101\nEpoch 2/5\n79/79 [==============================] - 5s 58ms/step - loss: 1.0269 - accuracy: 0.4779 - val_loss: 1.0643 - val_accuracy: 0.4532\nEpoch 3/5\n79/79 [==============================] - 5s 62ms/step - loss: 0.7028 - accuracy: 0.7593 - val_loss: 1.2452 - val_accuracy: 0.4496\nEpoch 4/5\n79/79 [==============================] - 5s 63ms/step - loss: 0.3484 - accuracy: 0.9055 - val_loss: 1.7693 - val_accuracy: 
0.4137\nEpoch 5/5\n79/79 [==============================] - 5s 62ms/step - loss: 0.2250 - accuracy: 0.9450 - val_loss: 1.9620 - val_accuracy: 0.3705\n \n"
],
[
"media_std(val_accuracy)",
" Fold: 0\t Ultimo valor acc: 0.46236559748649597\n Fold: 1\t Ultimo valor acc: 0.3978494703769684\n Fold: 2\t Ultimo valor acc: 0.38351255655288696\n Fold: 3\t Ultimo valor acc: 0.4301075339317322\n Fold: 4\t Ultimo valor acc: 0.4229390621185303\n Fold: 5\t Ultimo valor acc: 0.4265232980251312\n Fold: 6\t Ultimo valor acc: 0.41577062010765076\n Fold: 7\t Ultimo valor acc: 0.39208632707595825\n Fold: 8\t Ultimo valor acc: 0.3884892165660858\n Fold: 9\t Ultimo valor acc: 0.37050360441207886\n\n Média: 0.4090147286653519 \t Std: 0.026083133753362603\n"
],
[
"TextClassifier = ak.TextClassifier(\n num_classes=3, \n multi_label=True, \n loss=tf.keras.losses.categorical_crossentropy, \n metrics=['accuracy'],\n project_name=\"text_classifier\",\n max_trials=1,\n objective=\"val_loss\"\n)\n\nfit = TextClassifier.fit(\n x=np.asarray(pre_X(X_train)), y=one.fit_transform(Y_train.reshape(-1,1)), epochs=10, validation_split=0.2, validation_data=(np.asarray(pre_X(X_test)),one.fit_transform(Y_test.reshape(-1,1)) )\n)",
"INFO:tensorflow:Reloading Oracle from existing project .\\text_classifier\\oracle.json\nINFO:tensorflow:Reloading Tuner from .\\text_classifier\\tuner0.json\nINFO:tensorflow:Oracle triggered exit\nEpoch 1/10\n61/61 [==============================] - 6s 96ms/step - loss: 0.6423 - accuracy: 0.3564 - val_loss: 0.6409 - val_accuracy: 0.3536\nEpoch 2/10\n61/61 [==============================] - 6s 92ms/step - loss: 0.6305 - accuracy: 0.3744 - val_loss: 0.6393 - val_accuracy: 0.3668\nEpoch 3/10\n61/61 [==============================] - 6s 100ms/step - loss: 0.6014 - accuracy: 0.5287 - val_loss: 0.6308 - val_accuracy: 0.3967\nEpoch 4/10\n61/61 [==============================] - 6s 95ms/step - loss: 0.4302 - accuracy: 0.7492 - val_loss: 0.7543 - val_accuracy: 0.3823\nEpoch 5/10\n61/61 [==============================] - 6s 93ms/step - loss: 0.1863 - accuracy: 0.9026 - val_loss: 0.9689 - val_accuracy: 0.3978\nEpoch 6/10\n61/61 [==============================] - 6s 95ms/step - loss: 0.0705 - accuracy: 0.9662 - val_loss: 1.2428 - val_accuracy: 0.3931\nEpoch 7/10\n61/61 [==============================] - 6s 92ms/step - loss: 0.0389 - accuracy: 0.9841 - val_loss: 1.3408 - val_accuracy: 0.3967\nEpoch 8/10\n61/61 [==============================] - 5s 89ms/step - loss: 0.0170 - accuracy: 0.9938 - val_loss: 1.5297 - val_accuracy: 0.4014\nEpoch 9/10\n61/61 [==============================] - 6s 92ms/step - loss: 0.0130 - accuracy: 0.9928 - val_loss: 1.6561 - val_accuracy: 0.3967\nEpoch 10/10\n61/61 [==============================] - 6s 91ms/step - loss: 0.0076 - accuracy: 0.9974 - val_loss: 1.7773 - val_accuracy: 0.4098\nWARNING:tensorflow:From c:\\users\\ddayv\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\tensorflow\\python\\training\\tracking\\tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\nWARNING:tensorflow:From c:\\users\\ddayv\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\tensorflow\\python\\training\\tracking\\tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\nINFO:tensorflow:Assets written to: .\\text_classifier\\best_model\\assets\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e729d6ece7cde33c221cf06f2a3c36bcfd68a664 | 5,261 | ipynb | Jupyter Notebook | Ex/Chapter1/Chapter1-1.ipynb | tryoutlab/python-ai-oreilly | 111a0db4a9d5bf7ec4c07b1e9e357ed4fa225f28 | [
"Unlicense"
] | null | null | null | Ex/Chapter1/Chapter1-1.ipynb | tryoutlab/python-ai-oreilly | 111a0db4a9d5bf7ec4c07b1e9e357ed4fa225f28 | [
"Unlicense"
] | null | null | null | Ex/Chapter1/Chapter1-1.ipynb | tryoutlab/python-ai-oreilly | 111a0db4a9d5bf7ec4c07b1e9e357ed4fa225f28 | [
"Unlicense"
] | null | null | null | 38.40146 | 83 | 0.478996 | [
[
[
"from sklearn import datasets\n\nhouse_prices = datasets.load_boston()\nprint(house_prices.data)",
"[[6.3200e-03 1.8000e+01 2.3100e+00 ... 1.5300e+01 3.9690e+02 4.9800e+00]\n [2.7310e-02 0.0000e+00 7.0700e+00 ... 1.7800e+01 3.9690e+02 9.1400e+00]\n [2.7290e-02 0.0000e+00 7.0700e+00 ... 1.7800e+01 3.9283e+02 4.0300e+00]\n ...\n [6.0760e-02 0.0000e+00 1.1930e+01 ... 2.1000e+01 3.9690e+02 5.6400e+00]\n [1.0959e-01 0.0000e+00 1.1930e+01 ... 2.1000e+01 3.9345e+02 6.4800e+00]\n [4.7410e-02 0.0000e+00 1.1930e+01 ... 2.1000e+01 3.9690e+02 7.8800e+00]]\n"
],
[
"print(house_prices.target)",
"[24. 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 15. 18.9 21.7 20.4\n 18.2 19.9 23.1 17.5 20.2 18.2 13.6 19.6 15.2 14.5 15.6 13.9 16.6 14.8\n 18.4 21. 12.7 14.5 13.2 13.1 13.5 18.9 20. 21. 24.7 30.8 34.9 26.6\n 25.3 24.7 21.2 19.3 20. 16.6 14.4 19.4 19.7 20.5 25. 23.4 18.9 35.4\n 24.7 31.6 23.3 19.6 18.7 16. 22.2 25. 33. 23.5 19.4 22. 17.4 20.9\n 24.2 21.7 22.8 23.4 24.1 21.4 20. 20.8 21.2 20.3 28. 23.9 24.8 22.9\n 23.9 26.6 22.5 22.2 23.6 28.7 22.6 22. 22.9 25. 20.6 28.4 21.4 38.7\n 43.8 33.2 27.5 26.5 18.6 19.3 20.1 19.5 19.5 20.4 19.8 19.4 21.7 22.8\n 18.8 18.7 18.5 18.3 21.2 19.2 20.4 19.3 22. 20.3 20.5 17.3 18.8 21.4\n 15.7 16.2 18. 14.3 19.2 19.6 23. 18.4 15.6 18.1 17.4 17.1 13.3 17.8\n 14. 14.4 13.4 15.6 11.8 13.8 15.6 14.6 17.8 15.4 21.5 19.6 15.3 19.4\n 17. 15.6 13.1 41.3 24.3 23.3 27. 50. 50. 50. 22.7 25. 50. 23.8\n 23.8 22.3 17.4 19.1 23.1 23.6 22.6 29.4 23.2 24.6 29.9 37.2 39.8 36.2\n 37.9 32.5 26.4 29.6 50. 32. 29.8 34.9 37. 30.5 36.4 31.1 29.1 50.\n 33.3 30.3 34.6 34.9 32.9 24.1 42.3 48.5 50. 22.6 24.4 22.5 24.4 20.\n 21.7 19.3 22.4 28.1 23.7 25. 23.3 28.7 21.5 23. 26.7 21.7 27.5 30.1\n 44.8 50. 37.6 31.6 46.7 31.5 24.3 31.7 41.7 48.3 29. 24. 25.1 31.5\n 23.7 23.3 22. 20.1 22.2 23.7 17.6 18.5 24.3 20.5 24.5 26.2 24.4 24.8\n 29.6 42.8 21.9 20.9 44. 50. 36. 30.1 33.8 43.1 48.8 31. 36.5 22.8\n 30.7 50. 43.5 20.7 21.1 25.2 24.4 35.2 32.4 32. 33.2 33.1 29.1 35.1\n 45.4 35.4 46. 50. 32.2 22. 20.1 23.2 22.3 24.8 28.5 37.3 27.9 23.9\n 21.7 28.6 27.1 20.3 22.5 29. 24.8 22. 26.4 33.1 36.1 28.4 33.4 28.2\n 22.8 20.3 16.1 22.1 19.4 21.6 23.8 16.2 17.8 19.8 23.1 21. 23.8 23.1\n 20.4 18.5 25. 24.6 23. 22.2 19.3 22.6 19.8 17.1 19.4 22.2 20.7 21.1\n 19.5 18.5 20.6 19. 18.7 32.7 16.5 23.9 31.2 17.5 17.2 23.1 24.5 26.6\n 22.9 24.1 18.6 30.1 18.2 20.6 17.8 21.7 22.7 22.6 25. 19.9 20.8 16.8\n 21.9 27.5 21.9 23.1 50. 50. 50. 50. 50. 13.8 13.8 15. 13.9 13.3\n 13.1 10.2 10.4 10.9 11.3 12.3 8.8 7.2 10.5 7.4 10.2 11.5 15.1 23.2\n 9.7 13.8 12.7 13.1 12.5 8.5 5. 6.3 5.6 7.2 12.1 8.3 8.5 5.\n 11.9 27.9 17.2 27.5 15. 17.2 17.9 16.3 7. 7.2 7.5 10.4 8.8 8.4\n 16.7 14.2 20.8 13.4 11.7 8.3 10.2 10.9 11. 9.5 14.5 14.1 16.1 14.3\n 11.7 13.4 9.6 8.7 8.4 12.8 10.5 17.1 18.4 15.4 10.8 11.8 14.9 12.6\n 14.1 13. 13.4 15.2 16.1 17.8 14.9 14.1 12.7 13.5 14.9 20. 16.4 17.7\n 19.5 20.2 21.4 19.9 19. 19.1 19.1 20.1 19.9 19.6 23.2 29.8 13.8 13.3\n 16.7 12. 14.6 21.4 23. 23.7 25. 21.8 20.6 21.2 19.1 20.6 15.2 7.\n 8.1 13.6 20.1 21.8 24.5 23.1 19.7 18.3 21.2 17.5 16.8 22.4 20.6 23.9\n 22. 11.9]\n"
],
[
"digits = datasets.load_digits()\nprint(digits.images[4])",
"[[ 0. 0. 0. 1. 11. 0. 0. 0.]\n [ 0. 0. 0. 7. 8. 0. 0. 0.]\n [ 0. 0. 1. 13. 6. 2. 2. 0.]\n [ 0. 0. 7. 15. 0. 9. 8. 0.]\n [ 0. 5. 16. 10. 0. 16. 6. 0.]\n [ 0. 4. 15. 16. 13. 16. 1. 0.]\n [ 0. 0. 0. 3. 15. 10. 0. 0.]\n [ 0. 0. 0. 2. 16. 4. 0. 0.]]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e729e1d08062ba12758bf3aa4a4accf66ab2a9a9 | 13,360 | ipynb | Jupyter Notebook | tutorials/Tutorial_7_Training_an_Encrypted_Neural_Network.ipynb | mh739025250/CrypTen | 2b82ea44c74a41d91854854ed89984104884460f | [
"MIT"
] | null | null | null | tutorials/Tutorial_7_Training_an_Encrypted_Neural_Network.ipynb | mh739025250/CrypTen | 2b82ea44c74a41d91854854ed89984104884460f | [
"MIT"
] | null | null | null | tutorials/Tutorial_7_Training_an_Encrypted_Neural_Network.ipynb | mh739025250/CrypTen | 2b82ea44c74a41d91854854ed89984104884460f | [
"MIT"
] | null | null | null | 35.913978 | 559 | 0.578219 | [
[
[
"# Training an Encrypted Neural Network\n\nIn this tutorial, we will walk through an example of how we can train a neural network with CrypTen. This is particularly relevant for the <i>Feature Aggregation</i>, <i>Data Labeling</i> and <i>Data Augmentation</i> use cases. We will focus on the usual two-party setting and show how we can train an accurate neural network for digit classification on the MNIST data.\n\nFor concreteness, this tutorial will step through the <i>Feature Aggregation</i> use cases: Alice and Bob each have part of the features of the data set, and wish to train a neural network on their combined data, while keeping their data private. \n\n## Setup\nAs usual, we'll begin by importing and initializing the `crypten` and `torch` libraries. \n\nWe will use the MNIST dataset to demonstrate how Alice and Bob can learn without revealing protected information. For reference, the feature size of each example in the MNIST data is `28 x 28`. Let's assume Alice has the first `28 x 20` features and Bob has last `28 x 8` features. One way to think of this split is that Alice has the (roughly) top 2/3rds of each image, while Bob has the bottom 1/3rd of each image. We'll again use our helper script `mnist_utils.py` that downloads the publicly available MNIST data, and splits the data as required.\n\nFor simplicity, we will restrict our problem to binary classification: we'll simply learn how to distinguish between 0 and non-zero digits. For speed of execution in the notebook, we will only create a dataset of a 100 examples.",
"_____no_output_____"
]
],
[
[
"import crypten\nimport torch\n\ncrypten.init()\ntorch.set_num_threads(1)",
"WARNING:root:module 'torchvision.models.mobilenet' has no attribute 'ConvBNReLU'\n"
],
[
"%run ./mnist_utils.py --option features --reduced 100 --binary",
"_____no_output_____"
]
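,
[
"# Illustrative sketch (added; not part of the original tutorial): how a single\n# 28 x 28 MNIST image could be split column-wise into Alice's 28 x 20 block and\n# Bob's 28 x 8 block. mnist_utils.py performs the actual split on the real data.\nimage = torch.rand(28, 28)   # stand-in for one MNIST example\nalice_part = image[:, :20]   # Alice: first 20 columns of features\nbob_part = image[:, 20:]     # Bob: remaining 8 columns of features\nassert alice_part.shape == (28, 20) and bob_part.shape == (28, 8)",
"_____no_output_____"
]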
],
[
[
"Next, we'll define the network architecture below, and then describe how to train it on encrypted data in the next section. ",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\nimport torch.nn.functional as F\n\n#Define an example network\nclass ExampleNet(nn.Module):\n def __init__(self):\n super(ExampleNet, self).__init__()\n self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=0)\n self.fc1 = nn.Linear(16 * 12 * 12, 100)\n self.fc2 = nn.Linear(100, 2) # For binary classification, final layer needs only 2 outputs\n \n def forward(self, x):\n out = self.conv1(x)\n out = F.relu(out)\n out = F.max_pool2d(out, 2)\n out = out.view(-1, 16 * 12 * 12)\n out = self.fc1(out)\n out = F.relu(out)\n out = self.fc2(out)\n return out",
"_____no_output_____"
]
],
[
[
"## Encrypted Training\n\nAfter all the material we've covered in earlier tutorials, we only need to know a few additional items for encrypted training. We'll first discuss how the training loop in CrypTen differs from PyTorch. Then, we'll go through a complete example to illustrate training on encrypted data from end-to-end.\n\n### How does CrypTen training differ from PyTorch training?\n\nThere are two main ways implementing a CrypTen training loop differs from a PyTorch training loop. We'll describe these items first, and then illustrate them with small examples below.\n\n<i>(1) Use one-hot encoding</i>: CrypTen training requires all labels to use one-hot encoding. This means that when using standard datasets such as MNIST, we need to modify the labels to use one-hot encoding.\n\n<i>(2) Directly update parameters</i>: CrypTen does not use the PyTorch optimizers. Instead, CrypTen implements encrypted SGD by implementing its own `backward` function, followed by directly updating the parameters. As we will see below, using SGD in CrypTen is very similar to using the PyTorch optimizers.\n\nWe now show some small examples to illustrate these differences. As before, we will assume Alice has the rank 0 process and Bob has the rank 1 process.",
"_____no_output_____"
]
],
[
[
"# Define source argument values for Alice and Bob\nALICE = 0\nBOB = 1",
"_____no_output_____"
],
[
"# Load Alice's data \ndata_alice_enc = crypten.load_from_party('/tmp/alice_train.pth', src=ALICE)",
"_____no_output_____"
],
[
"# We'll now set up the data for our small example below\n# For illustration purposes, we will create toy data\n# and encrypt all of it from source ALICE\nx_small = torch.rand(100, 1, 28, 28)\ny_small = torch.randint(1, (100,))\n\n# Transform labels into one-hot encoding\nlabel_eye = torch.eye(2)\ny_one_hot = label_eye[y_small]\n\n# Transform all data to CrypTensors\nx_train = crypten.cryptensor(x_small, src=ALICE)\ny_train = crypten.cryptensor(y_one_hot)\n\n# Instantiate and encrypt a CrypTen model\nmodel_plaintext = ExampleNet()\ndummy_input = torch.empty(1, 1, 28, 28)\nmodel = crypten.nn.from_pytorch(model_plaintext, dummy_input)\nmodel.encrypt()",
"_____no_output_____"
],
[
"# Example: Stochastic Gradient Descent in CrypTen\n\nmodel.train() # Change to training mode\nloss = crypten.nn.MSELoss() # Choose loss functions\n\n# Set parameters: learning rate, num_epochs\nlearning_rate = 0.001\nnum_epochs = 2\n\n# Train the model: SGD on encrypted data\nfor i in range(num_epochs):\n\n # forward pass\n output = model(x_train)\n loss_value = loss(output, y_train)\n \n # set gradients to zero\n model.zero_grad()\n\n # perform backward pass\n loss_value.backward()\n\n # update parameters\n model.update_parameters(learning_rate) \n \n # examine the loss after each epoch\n print(\"Epoch: {0:d} Loss: {1:.4f}\".format(i, loss_value.get_plain_text()))",
"Epoch: 0 Loss: 0.3058\nEpoch: 1 Loss: 0.2807\n"
]
],
[
[
"### A Complete Example\n\nWe now put these pieces together for a complete example of training a network in a multi-party setting. \n\nAs in Tutorial 3, we'll assume Alice has the rank 0 process, and Bob has the rank 1 process; so we'll load and encrypt Alice's data with `src=0`, and load and encrypt Bob's data with `src=1`. We'll then initialize a plaintext model and convert it to an encrypted model, just as we did in Tutorial 4. We'll finally define our loss function, training parameters, and run SGD on the encrypted data. For the purposes of this tutorial we train on 100 samples; training should complete in ~3 minutes per epoch.",
"_____no_output_____"
]
],
[
[
"import crypten.mpc as mpc\nimport crypten.communicator as comm\n\n# Convert labels to one-hot encoding\n# Since labels are public in this use case, we will simply use them from loaded torch tensors\nlabels = torch.load('/tmp/train_labels.pth')\nlabels = labels.long()\nlabels_one_hot = label_eye[labels]\n\[email protected]_multiprocess(world_size=2)\ndef run_encrypted_training():\n # Load data:\n x_alice_enc = crypten.load_from_party('/tmp/alice_train.pth', src=ALICE)\n x_bob_enc = crypten.load_from_party('/tmp/bob_train.pth', src=BOB)\n \n crypten.print(x_alice_enc.size())\n crypten.print(x_bob_enc.size())\n \n # Combine the feature sets: identical to Tutorial 3\n x_combined_enc = crypten.cat([x_alice_enc, x_bob_enc], dim=2)\n \n # Reshape to match the network architecture\n x_combined_enc = x_combined_enc.unsqueeze(1)\n \n \n # Commenting out due to intermittent failure in PyTorch codebase\n \"\"\"\n # Initialize a plaintext model and convert to CrypTen model\n pytorch_model = ExampleNet()\n model = crypten.nn.from_pytorch(pytorch_model, dummy_input)\n model.encrypt()\n \n # Set train mode\n model.train()\n \n # Define a loss function\n loss = crypten.nn.MSELoss()\n\n # Define training parameters\n learning_rate = 0.001\n num_epochs = 2\n batch_size = 10\n num_batches = x_combined_enc.size(0) // batch_size\n \n rank = comm.get().get_rank()\n for i in range(num_epochs): \n crypten.print(f\"Epoch {i} in progress:\") \n \n for batch in range(num_batches):\n # define the start and end of the training mini-batch\n start, end = batch * batch_size, (batch + 1) * batch_size\n \n # construct CrypTensors out of training examples / labels\n x_train = x_combined_enc[start:end]\n y_batch = labels_one_hot[start:end]\n y_train = crypten.cryptensor(y_batch, requires_grad=True)\n \n # perform forward pass:\n output = model(x_train)\n loss_value = loss(output, y_train)\n \n # set gradients to \"zero\" \n model.zero_grad()\n\n # perform backward pass: \n loss_value.backward()\n\n # update parameters\n model.update_parameters(learning_rate)\n \n # Print progress every batch:\n batch_loss = loss_value.get_plain_text()\n crypten.print(f\"\\tBatch {(batch + 1)} of {num_batches} Loss {batch_loss.item():.4f}\")\n \"\"\"\n\nrun_encrypted_training()",
"torch.Size([100, 28, 20])\ntorch.Size([100, 28, 8])\n"
]
],
[
[
"We see that the average batch loss decreases across the epochs, as we expect during training.\n\nThis completes our tutorial. Before exiting this tutorial, please clean up the files generated using the following code.",
"_____no_output_____"
]
],
[
[
"import os\n\nfilenames = ['/tmp/alice_train.pth', \n '/tmp/bob_train.pth', \n '/tmp/alice_test.pth',\n '/tmp/bob_test.pth', \n '/tmp/train_labels.pth',\n '/tmp/test_labels.pth']\n\nfor fn in filenames:\n if os.path.exists(fn): os.remove(fn)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
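"code",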
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e729e2a65519faccc0ce10d7dda64e7a20e1c6a6 | 1,170 | ipynb | Jupyter Notebook | Python basics practice/Python 3 (5)/Line Continuation - Exercise_Py3.ipynb | rachithh/data-science | 1f7c5678094fc3acfda30cb00f9de93a2974f505 | [
"MIT"
] | null | null | null | Python basics practice/Python 3 (5)/Line Continuation - Exercise_Py3.ipynb | rachithh/data-science | 1f7c5678094fc3acfda30cb00f9de93a2974f505 | [
"MIT"
] | null | null | null | Python basics practice/Python 3 (5)/Line Continuation - Exercise_Py3.ipynb | rachithh/data-science | 1f7c5678094fc3acfda30cb00f9de93a2974f505 | [
"MIT"
] | null | null | null | 16.956522 | 100 | 0.492308 | [
[
[
"## Line Continuation",
"_____no_output_____"
],
[
"Add a backslash in the code below, so it is a one-line code. Observe the change in the result.",
"_____no_output_____"
]
],
[
[
"15 + 31\\\n- 26",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
]
] |
e729e7f908107403230b8542987258813a72a40f | 39,417 | ipynb | Jupyter Notebook | examples/layers/add_default_legend_layer.ipynb | jorisvandenbossche/cartoframes | de0f514a8460d61a86afd58e46f7e738060ba09a | [
"BSD-3-Clause"
] | null | null | null | examples/layers/add_default_legend_layer.ipynb | jorisvandenbossche/cartoframes | de0f514a8460d61a86afd58e46f7e738060ba09a | [
"BSD-3-Clause"
] | null | null | null | examples/layers/add_default_legend_layer.ipynb | jorisvandenbossche/cartoframes | de0f514a8460d61a86afd58e46f7e738060ba09a | [
"BSD-3-Clause"
] | null | null | null | 38.834483 | 1,021 | 0.465358 | [
[
[
"# Add a default legend to Layer\n\nIn this example, a default Legend with a title, description, and footer is added to the map.\n\nFor more information, run `help(Layer)` or `help(Legend)`.",
"_____no_output_____"
]
],
[
[
"from cartoframes.auth import set_default_credentials\nfrom cartoframes.viz import Map, Layer, Legend\n\nset_default_credentials('cartoframes')",
"_____no_output_____"
],
[
"Map(\n Layer(\n 'global_power_plants',\n legend=Legend(\n 'default',\n title='Global Power Plants',\n description='Power plant locations around the world',\n footer='Source: World Resources Institute'\n )\n )\n)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e72a20d3445cc7eed8ec0b538a3feea5a6ca255d | 14,849 | ipynb | Jupyter Notebook | notebooks/building_production_ml_systems/labs/3_kubeflow_pipelines_vertex.ipynb | paras301/asl-ml-immersion | 1d1faf10e696a7024ce8711c15498ff72c657c5b | [
"Apache-2.0"
] | null | null | null | notebooks/building_production_ml_systems/labs/3_kubeflow_pipelines_vertex.ipynb | paras301/asl-ml-immersion | 1d1faf10e696a7024ce8711c15498ff72c657c5b | [
"Apache-2.0"
] | null | null | null | notebooks/building_production_ml_systems/labs/3_kubeflow_pipelines_vertex.ipynb | paras301/asl-ml-immersion | 1d1faf10e696a7024ce8711c15498ff72c657c5b | [
"Apache-2.0"
] | 1 | 2021-11-10T02:54:02.000Z | 2021-11-10T02:54:02.000Z | 31.327004 | 553 | 0.614048 | [
[
[
"# Vertex pipelines\n\n**Learning Objectives:**\n\nUse components from `google_cloud_pipeline_components` to create a Vertex Pipeline which will\n 1. train a custom model on Vertex AI\n 1. create an endpoint to host the model \n 1. upload the trained model, and \n 1. deploy the uploaded model to the endpoint for serving",
"_____no_output_____"
],
[
"## Overview\n\nThis notebook shows how to use the components defined in [`google_cloud_pipeline_components`](https://github.com/kubeflow/pipelines/tree/master/components/google-cloud) in conjunction with an experimental `run_as_aiplatform_custom_job` method, to build a [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines) workflow that trains a [custom model](https://cloud.google.com/vertex-ai/docs/training/containers-overview), uploads the model, creates an endpoint, and deploys the model to the endpoint. \n\nWe'll use the `kfp.v2.google.experimental.run_as_aiplatform_custom_job` method to train a custom model.\n\nThe google cloud pipeline components are [documented here](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.1.2/). From this [github page](...) you can also find other examples in how to build a Vertex pipeline with AutoML [here](https://github.com/GoogleCloudPlatform/ai-platform-samples/tree/master/ai-platform-unified/notebooks/official/pipelines). You can see other available methods from the [Vertex AI SDK](https://googleapis.dev/python/aiplatform/latest/aiplatform.html).",
"_____no_output_____"
],
[
"### Set up your local development environment and install necessary packages\n\n",
"_____no_output_____"
]
],
[
[
"!pip3 install --user google-cloud-pipeline-components==0.1.1 --upgrade",
"_____no_output_____"
]
],
[
[
"### Restart the kernel\n\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.",
"_____no_output_____"
],
[
"### Import libraries and define constants",
"_____no_output_____"
]
],
[
[
"import os\nfrom datetime import datetime\n\nimport kfp\nfrom google.cloud import aiplatform\nfrom google_cloud_pipeline_components import aiplatform as gcc_aip\nfrom kfp.v2 import compiler\nfrom kfp.v2.dsl import component\nfrom kfp.v2.google import experimental",
"_____no_output_____"
]
],
[
[
"Check the versions of the packages you installed. The KFP SDK version should be >=1.6.",
"_____no_output_____"
]
],
[
[
"print(\"KFP SDK version: {}\".format(kfp.__version__))",
"_____no_output_____"
]
],
[
[
"#### Set your environment variables\nNext, we'll set up our project variables, like GCP project ID, the bucket and region. Also, to avoid name collisions between resources created, we'll create a timestamp and append it onto the name of resources we create in this lab.",
"_____no_output_____"
]
],
[
[
"# Change below if necessary\nPROJECT = !gcloud config get-value project # noqa: E999\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n\nPIPELINE_ROOT = f\"gs://{BUCKET}/pipeline_root\"",
"_____no_output_____"
],
[
"print(PIPELINE_ROOT)",
"_____no_output_____"
]
],
[
[
"We'll save pipeline artifacts in a directory called `pipeline_root` within our bucket. Validate access to your Cloud Storage bucket by examining its contents. It should be empty at this stage. ",
"_____no_output_____"
]
],
[
[
"!gsutil ls -la gs://{BUCKET}/pipeline_root",
"_____no_output_____"
]
],
[
[
"### Give your default service account storage bucket access\nThis pipeline will read `.csv` files from Cloud storage for training and will write model checkpoints and artifacts to a specified bucket. So, we need to give our default service account `storage.objectAdmin` access. You can do this by running the command below in Cloud Shell:",
"_____no_output_____"
],
[
"```bash\nPROJECT=$(gcloud config get-value project)\nPROJECT_NUMBER=$(gcloud projects list --filter=\"name=$PROJECT\" --format=\"value(PROJECT_NUMBER)\")\ngcloud projects add-iam-policy-binding $PROJECT \\\n --member=\"serviceAccount:[email protected]\" \\\n --role=\"roles/storage.objectAdmin\"\n```",
"_____no_output_____"
],
[
"Note, it may take some time for the permissions to propogate to the service account. You can confirm the status from the [IAM page here](https://console.cloud.google.com/iam-admin/iam). ",
"_____no_output_____"
],
[
"## Define a pipeline that uses the components\n",
"_____no_output_____"
],
[
"We'll start by defining a component with which the custom training job is run. For this example, this component doesn't do anything (but run a print statement).",
"_____no_output_____"
]
],
[
[
"@component\ndef training_op(input1: str):\n print(\"VertexAI pipeline: {}\".format(input1))",
"_____no_output_____"
]
],
[
[
"Now, you define the pipeline. \n\nThe `experimental.run_as_aiplatform_custom_job` method takes as args the component defined above, and the list of `worker_pool_specs`— in this case one— with which the custom training job is configured. \nSee [full function code here](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/google/experimental/custom_job.py)\n\nThen, [`google_cloud_pipeline_components`](https://github.com/kubeflow/pipelines/tree/master/components/google-cloud) components are used to define the rest of the pipeline: upload the model, create an endpoint, and deploy the model to the endpoint. (While not shown in this example, the model deploy will create an endpoint if one is not provided). \n\nNote that the code we're using the exact same code that we developed in the previous lab [`1_training_at_scale_vertex.ipynb`](1_training_at_scale_vertex.ipynb). In fact, we are pulling the same python package executor image URI that we pushed to Cloud storage in that lab. Note that we also include the `SERVING_CONTAINER_IMAGE_URI` since we'll need to specify that when uploading and deploying our model.",
"_____no_output_____"
]
],
[
[
"# Output directory and job_name\nOUTDIR = f\"gs://{BUCKET}/taxifare/trained_model_{TIMESTAMP}\"\nMODEL_DISPLAY_NAME = f\"taxifare_{TIMESTAMP}\"\n\nPYTHON_PACKAGE_URIS = f\"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz\"\nMACHINE_TYPE = \"n1-standard-16\"\nREPLICA_COUNT = 1\nPYTHON_PACKAGE_EXECUTOR_IMAGE_URI = (\n \"us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest\"\n)\nSERVING_CONTAINER_IMAGE_URI = (\n \"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest\"\n)\nPYTHON_MODULE = \"trainer.task\"\n\n# Model and training hyperparameters\nBATCH_SIZE = 500\nNUM_EXAMPLES_TO_TRAIN_ON = 10000\nNUM_EVALS = 1000\nNBUCKETS = 10\nLR = 0.001\nNNSIZE = \"32 8\"\n\n# GCS paths\nGCS_PROJECT_PATH = f\"gs://{BUCKET}/taxifare\"\nDATA_PATH = f\"{GCS_PROJECT_PATH}/data\"\nTRAIN_DATA_PATH = f\"{DATA_PATH}/taxi-train*\"\nEVAL_DATA_PATH = f\"{DATA_PATH}/taxi-valid*\"",
"_____no_output_____"
]
],
[
[
"### Lab Task #1. \n\nIn the cell below we define the pipeline for training and deploying our taxifare model. Fill in the code to accomplish four things:\n1. define the approrpriate `worker_pool_spec` for the training job\n1. use `ModelUploadOp` to upload the model artifacts after training to create the model in Vertex AI\n1. create an endpoing using `EndpointCreateOp`\n1. finally, deploy the model you uploaded to the endpoint you created in the steps above.",
"_____no_output_____"
]
],
[
[
"@kfp.dsl.pipeline(name=\"taxifare--train-upload-endpoint-deploy\")\ndef pipeline(\n project: str = PROJECT,\n model_display_name: str = MODEL_DISPLAY_NAME,\n):\n train_task = training_op(\"taxifare training pipeline\")\n experimental.run_as_aiplatform_custom_job(\n train_task,\n display_name=f\"pipelines-train-{TIMESTAMP}\",\n worker_pool_specs= \n # TODO: Your code goes here.\n )\n\n model_upload_op = gcc_aip.ModelUploadOp(\n # TODO: Your code goes here.\n )\n model_upload_op.after(train_task)\n\n endpoint_create_op = gcc_aip.EndpointCreateOp(\n # TODO: Your code goes here.\n )\n\n model_deploy_op = gcc_aip.ModelDeployOp(\n # TODO: Your code goes here.\n )",
"_____no_output_____"
]
],
[
[
"## Compile and run the pipeline\n\nNow, you're ready to compile the pipeline:",
"_____no_output_____"
]
],
[
[
"if not os.path.isdir(\"vertex_pipelines\"):\n os.mkdir(\"vertex_pipelines\")\n\ncompiler.Compiler().compile(\n pipeline_func=pipeline,\n package_path=\"./vertex_pipelines/train_upload_endpoint_deploy.json\",\n)",
"_____no_output_____"
]
],
[
[
"The pipeline compilation generates the `train_upload_endpoint_deploy.json` job spec file.\n\nNext, instantiate the pipeline job object:",
"_____no_output_____"
],
[
"### Lab Task #2.\n\nComplete the code in the cell below to fill in the missing arguments.\n",
"_____no_output_____"
]
],
[
[
"pipeline_job = aiplatform.pipeline_jobs.PipelineJob(\n display_name= # TODO: Your code goes here.\n template_path= # TODO: Your code goes here.\n pipeline_root= # TODO: Your code goes here.\n project=PROJECT,\n location=REGION,\n)",
"_____no_output_____"
]
],
[
[
"Then, you run the defined pipeline like this: ",
"_____no_output_____"
]
],
[
[
"pipeline_job.run()",
"_____no_output_____"
]
],
[
[
"Click on the generated link above starting with `https://console.cloud.google.com/vertex-ai/locations/[location]/pipelines/runs/` to see your run in the Cloud Console. It should look something like this:\n\n<img src='../assets/taxifare_vertex_pipeline.png' width='80%'>",
"_____no_output_____"
],
[
"Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
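"markdown",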
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
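"markdown",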
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e72a38a96da71cc7514870737b59d7c5c8611a3b | 561,581 | ipynb | Jupyter Notebook | Smart_Beta_and_Portfolio_Optimization.ipynb | parinp/Ai-for-Trading | 26de49ab7e416c8e9fadcb47c6c2e84dc30e1cca | [
"MIT"
] | null | null | null | Smart_Beta_and_Portfolio_Optimization.ipynb | parinp/Ai-for-Trading | 26de49ab7e416c8e9fadcb47c6c2e84dc30e1cca | [
"MIT"
] | null | null | null | Smart_Beta_and_Portfolio_Optimization.ipynb | parinp/Ai-for-Trading | 26de49ab7e416c8e9fadcb47c6c2e84dc30e1cca | [
"MIT"
] | null | null | null | 59.395135 | 72,796 | 0.590002 | [
[
[
"# Project 3: Smart Beta Portfolio and Portfolio Optimization\n\n## Overview\n\n\nSmart beta has a broad meaning, but we can say in practice that when we use the universe of stocks from an index, and then apply some weighting scheme other than market cap weighting, it can be considered a type of smart beta fund. A Smart Beta portfolio generally gives investors exposure or \"beta\" to one or more types of market characteristics (or factors) that are believed to predict prices while giving investors a diversified broad exposure to a particular market. Smart Beta portfolios generally target momentum, earnings quality, low volatility, and dividends or some combination. Smart Beta Portfolios are generally rebalanced infrequently and follow relatively simple rules or algorithms that are passively managed. Model changes to these types of funds are also rare requiring prospectus filings with US Security and Exchange Commission in the case of US focused mutual funds or ETFs.. Smart Beta portfolios are generally long-only, they do not short stocks.\n\nIn contrast, a purely alpha-focused quantitative fund may use multiple models or algorithms to create a portfolio. The portfolio manager retains discretion in upgrading or changing the types of models and how often to rebalance the portfolio in attempt to maximize performance in comparison to a stock benchmark. Managers may have discretion to short stocks in portfolios.\n\nImagine you're a portfolio manager, and wish to try out some different portfolio weighting methods.\n\nOne way to design portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that produce better results. \n\n\nFor instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple does not issue dividends, but has had good historical performance. The hypothesis about dividend-paying stocks may go something like this: \n\nCompanies that regularly issue dividends may also be more prudent in allocating their available cash, and may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), but also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice.\n\n\nYou may also have another hypothesis, with which you wish to design a portfolio that can then be made into an ETF. You may find that investors may wish to invest in passive beta funds, but wish to have less risk exposure (less volatility) in their investments. The goal of having a low volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon, and so are more risk averse.\n\nSo the objective of your proposed portfolio is to design a portfolio that closely tracks an index, while also minimizing the portfolio variance. 
Also, if this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility).\n\nSmart Beta ETFs can be designed with both of these two general methods (among others): alternative weighting and minimum volatility ETF.\n\n\n## Instructions\nEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.\n\n## Packages\nWhen you implement the functions, you'll only need to you use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.\n\nThe other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` module contains utility functions and graph functions. The `project_tests` contains the unit tests for all the problems.\n### Install Packages",
"_____no_output_____"
]
],
[
[
"import sys\n!{sys.executable} -m pip install -r requirements.txt",
"Requirement already satisfied: colour==0.1.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (0.1.5)\nRequirement already satisfied: cvxpy==1.0.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 2)) (1.0.3)\nRequirement already satisfied: cycler==0.10.0 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from -r requirements.txt (line 3)) (0.10.0)\nRequirement already satisfied: numpy==1.14.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 4)) (1.14.5)\nRequirement already satisfied: pandas==0.21.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 5)) (0.21.1)\nRequirement already satisfied: plotly==2.2.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 6)) (2.2.3)\nRequirement already satisfied: pyparsing==2.2.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (2.2.0)\nRequirement already satisfied: python-dateutil==2.6.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 8)) (2.6.1)\nRequirement already satisfied: pytz==2017.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 9)) (2017.3)\nRequirement already satisfied: requests==2.18.4 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 10)) (2.18.4)\nRequirement already satisfied: scipy==1.0.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 11)) (1.0.0)\nRequirement already satisfied: scikit-learn==0.19.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 12)) (0.19.1)\nRequirement already satisfied: six==1.11.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 13)) (1.11.0)\nRequirement already satisfied: tqdm==4.19.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 14)) (4.19.5)\nRequirement already satisfied: toolz in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (0.8.2)\nRequirement already satisfied: ecos>=2 in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (2.0.7.post1)\nRequirement already satisfied: scs>=1.1.3 in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (2.1.2)\nRequirement already satisfied: fastcache in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (1.0.2)\nRequirement already satisfied: osqp in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (0.6.1)\nRequirement already satisfied: multiprocess in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (0.70.10)\nRequirement already satisfied: nbformat>=4.2 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6)) (4.4.0)\nRequirement already satisfied: decorator>=4.0.6 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6)) (4.0.11)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (3.0.4)\nRequirement already satisfied: idna<2.7,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (2.6)\nRequirement already satisfied: urllib3<1.23,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (1.22)\nRequirement already satisfied: 
certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (2019.11.28)\nRequirement already satisfied: future in /opt/conda/lib/python3.6/site-packages (from osqp->cvxpy==1.0.3->-r requirements.txt (line 2)) (0.16.0)\nRequirement already satisfied: dill>=0.3.2 in /opt/conda/lib/python3.6/site-packages (from multiprocess->cvxpy==1.0.3->-r requirements.txt (line 2)) (0.3.2)\nRequirement already satisfied: jupyter-core in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6)) (4.4.0)\nRequirement already satisfied: ipython-genutils in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6)) (0.2.0)\nRequirement already satisfied: traitlets>=4.1 in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6)) (4.3.2)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6)) (2.6.0)\n"
]
],
[
[
"### Load Packages",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport helper\nimport project_helper\nimport project_tests",
"_____no_output_____"
]
],
[
[
"## Market Data\n### Load Data\nFor this universe of stocks, we'll be selecting large dollar volume stocks. We're using this universe, since it is highly liquid.",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('../../data/project_3/eod-quotemedia.csv')\n\npercent_top_dollar = 0.2\nhigh_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar)\ndf = df[df['ticker'].isin(high_volume_symbols)]\n\nclose = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')\nvolume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume')\ndividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends')",
"_____no_output_____"
]
],
[
[
"### View Data\nTo see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.",
"_____no_output_____"
]
],
[
[
"project_helper.print_dataframe(close)",
"_____no_output_____"
]
],
[
[
"# Part 1: Smart Beta Portfolio\nIn Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs. \n\nNote that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index.\n\n## Index Weights\nThe index we'll be using is based on large dollar volume stocks. Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. For example, assume the following is close prices and volume data:\n```\n Prices\n A B ...\n2013-07-08 2 2 ...\n2013-07-09 5 6 ...\n2013-07-10 1 2 ...\n2013-07-11 6 5 ...\n... ... ... ...\n\n Volume\n A B ...\n2013-07-08 100 340 ...\n2013-07-09 240 220 ...\n2013-07-10 120 500 ...\n2013-07-11 10 100 ...\n... ... ... ...\n```\nThe weights created from the function `generate_dollar_volume_weights` should be the following:\n```\n A B ...\n2013-07-08 0.126.. 0.194.. ...\n2013-07-09 0.759.. 0.377.. ...\n2013-07-10 0.075.. 0.285.. ...\n2013-07-11 0.037.. 0.142.. ...\n... ... ... ...\n```",
"_____no_output_____"
]
],
[
[
"def generate_dollar_volume_weights(close, volume):\n \"\"\"\n Generate dollar volume weights.\n\n Parameters\n ----------\n close : DataFrame\n Close price for each ticker and date\n volume : str\n Volume for each ticker and date\n\n Returns\n -------\n dollar_volume_weights : DataFrame\n The dollar volume weights for each ticker and date\n \"\"\"\n assert close.index.equals(volume.index)\n assert close.columns.equals(volume.columns)\n \n #TODO: Implement function\n\n market_cap = close*volume\n summation = market_cap.sum(axis = 1)\n \n return market_cap.div(summation, axis = 0)\n\nproject_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights)",
"Tests Passed\n"
]
],
[
[
"### View Data\nLet's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap.",
"_____no_output_____"
]
],
[
[
"index_weights = generate_dollar_volume_weights(close, volume)\nproject_helper.plot_weights(index_weights, 'Index Weights')",
"_____no_output_____"
]
],
[
[
"## Portfolio Weights\nNow that we have the index weights, let's choose the portfolio weights based on dividend. You would normally calculate the weights based on trailing dividend yield, but we'll simplify this by just calculating the total dividend yield over time.\n\nImplement `calculate_dividend_weights` to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weight for the index, but it's using dividend data instead.\nFor example, assume the following is `dividends` data:\n```\n Prices\n A B\n2013-07-08 0 0\n2013-07-09 0 1\n2013-07-10 0.5 0\n2013-07-11 0 0\n2013-07-12 2 0\n... ... ...\n```\nThe weights created from the function `calculate_dividend_weights` should be the following:\n```\n A B\n2013-07-08 NaN NaN\n2013-07-09 0 1\n2013-07-10 0.333.. 0.666..\n2013-07-11 0.333.. 0.666..\n2013-07-12 0.714.. 0.285..\n... ... ...\n```",
"_____no_output_____"
]
],
[
[
"def calculate_dividend_weights(dividends):\n \"\"\"\n Calculate dividend weights.\n\n Parameters\n ----------\n dividends : DataFrame\n Dividend for each stock and date\n\n Returns\n -------\n dividend_weights : DataFrame\n Weights for each stock and date\n \"\"\"\n #TODO: Implement function\n \n cumsum = dividends.cumsum()\n totalsum = cumsum.sum(axis = 1)\n \n dividend_weights = cumsum.div(totalsum, axis = 0)\n \n return dividend_weights\n\nproject_tests.test_calculate_dividend_weights(calculate_dividend_weights)",
"Tests Passed\n"
]
],
[
[
"### View Data\nJust like the index weights, let's generate the ETF weights and view them using a heatmap.",
"_____no_output_____"
]
],
[
[
"etf_weights = calculate_dividend_weights(dividends)\nproject_helper.plot_weights(etf_weights, 'ETF Weights')",
"_____no_output_____"
]
],
[
[
"## Returns\nImplement `generate_returns` to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns.",
"_____no_output_____"
]
],
[
[
"def generate_returns(prices):\n \"\"\"\n Generate returns for ticker and date.\n\n Parameters\n ----------\n prices : DataFrame\n Price for each ticker and date\n\n Returns\n -------\n returns : Dataframe\n The returns for each ticker and date\n \"\"\"\n #TODO: Implement function\n\n return prices/prices.shift(1) - 1\n\nproject_tests.test_generate_returns(generate_returns)",
"Tests Passed\n"
]
],
[
[
"### View Data\nLet's generate the closing returns using `generate_returns` and view them using a heatmap.",
"_____no_output_____"
]
],
[
[
"returns = generate_returns(close)\nproject_helper.plot_returns(returns, 'Close Returns')",
"_____no_output_____"
]
],
[
[
"## Weighted Returns\nWith the returns of each stock computed, we can use it to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights.",
"_____no_output_____"
]
],
[
[
"def generate_weighted_returns(returns, weights):\n \"\"\"\n Generate weighted returns.\n\n Parameters\n ----------\n returns : DataFrame\n Returns for each ticker and date\n weights : DataFrame\n Weights for each ticker and date\n\n Returns\n -------\n weighted_returns : DataFrame\n Weighted returns for each ticker and date\n \"\"\"\n assert returns.index.equals(weights.index)\n assert returns.columns.equals(weights.columns)\n \n #TODO: Implement function\n \n return returns*weights\n\nproject_tests.test_generate_weighted_returns(generate_weighted_returns)",
"Tests Passed\n"
]
],
[
[
"### View Data\nLet's generate the ETF and index returns using `generate_weighted_returns` and view them using a heatmap.",
"_____no_output_____"
]
],
[
[
"index_weighted_returns = generate_weighted_returns(returns, index_weights)\netf_weighted_returns = generate_weighted_returns(returns, etf_weights)\nproject_helper.plot_returns(index_weighted_returns, 'Index Returns')\nproject_helper.plot_returns(etf_weighted_returns, 'ETF Returns')",
"_____no_output_____"
]
],
[
[
"## Cumulative Returns\nTo compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF comulative returns. Implement `calculate_cumulative_returns` to calculate the cumulative returns over time given the returns.",
"_____no_output_____"
]
],
[
[
"def calculate_cumulative_returns(returns):\n \"\"\"\n Calculate cumulative returns.\n\n Parameters\n ----------\n returns : DataFrame\n Returns for each ticker and date\n\n Returns\n -------\n cumulative_returns : Pandas Series\n Cumulative returns for each date\n \"\"\"\n #TODO: Implement function\n \n return (returns.sum(axis=1)+1).cumprod()\n\nproject_tests.test_calculate_cumulative_returns(calculate_cumulative_returns)",
"Tests Passed\n"
]
],
[
[
"### View Data\nLet's generate the ETF and index cumulative returns using `calculate_cumulative_returns` and compare the two.",
"_____no_output_____"
]
],
[
[
"index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns)\netf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns)\nproject_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index')",
"_____no_output_____"
]
],
[
[
"## Tracking Error\nIn order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement `tracking_error` to return the tracking error between the ETF and benchmark.\n\nFor reference, we'll be using the following annualized tracking error function:\n$$ TE = \\sqrt{252} * SampleStdev(r_p - r_b) $$\n\nWhere $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns.\n\n_Note: When calculating the sample standard deviation, the delta degrees of freedom is 1, which is the also the default value._",
"_____no_output_____"
]
],
[
[
"def tracking_error(benchmark_returns_by_date, etf_returns_by_date):\n \"\"\"\n Calculate the tracking error.\n\n Parameters\n ----------\n benchmark_returns_by_date : Pandas Series\n The benchmark returns for each date\n etf_returns_by_date : Pandas Series\n The ETF returns for each date\n\n Returns\n -------\n tracking_error : float\n The tracking error\n \"\"\"\n assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index)\n \n #TODO: Implement function\n \n te = np.sqrt(252) * (etf_returns_by_date - benchmark_returns_by_date).std()\n\n return te\n\nproject_tests.test_tracking_error(tracking_error)",
"Tests Passed\n"
]
],
[
[
"### View Data\nLet's generate the tracking error using `tracking_error`.",
"_____no_output_____"
]
],
[
[
"smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1))\nprint('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error))",
"Smart Beta Tracking Error: 0.10207614832007529\n"
]
],
[
[
"# Part 2: Portfolio Optimization\n\nNow, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1.\n\nWe want to both minimize the portfolio variance and also want to closely track a market cap weighted index. In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index.\n\n$Minimize \\left [ \\sigma^2_p + \\lambda \\sqrt{\\sum_{1}^{m}(weight_i - indexWeight_i)^2} \\right ]$ where $m$ is the number of stocks in the portfolio, and $\\lambda$ is a scaling factor that you can choose.\n\nWhy are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We’d expect that if the fund has the same stocks as the benchmark, and also the same weights for each stock as the benchmark, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of both the portfolio risk and distance between portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index.\n\n\n## Covariance\nImplement `get_covariance_returns` to calculate the covariance of the `returns`. We'll use this to calculate the portfolio variance.\n\nIf we have $m$ stock series, the covariance matrix is an $m \\times m$ matrix containing the covariance between each pair of stocks. We can use [`Numpy.cov`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) to get the covariance. We give it a 2D array in which each row is a stock series, and each column is an observation at the same period of time. For any `NaN` values, you can replace them with zeros using the [`DataFrame.fillna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html) function.\n\nThe covariance matrix $\\mathbf{P} = \n\\begin{bmatrix}\n\\sigma^2_{1,1} & ... & \\sigma^2_{1,m} \\\\ \n... & ... & ...\\\\\n\\sigma_{m,1} & ... & \\sigma^2_{m,m} \\\\\n\\end{bmatrix}$",
"_____no_output_____"
]
],
[
[
"def get_covariance_returns(returns):\n \"\"\"\n Calculate covariance matrices.\n\n Parameters\n ----------\n returns : DataFrame\n Returns for each ticker and date\n\n Returns\n -------\n returns_covariance : 2 dimensional Ndarray\n The covariance of the returns\n \"\"\"\n #TODO: Implement function\n \n return np.cov(returns.fillna(0).T)\n\nproject_tests.test_get_covariance_returns(get_covariance_returns)",
"Tests Passed\n"
]
],
[
[
"### View Data\nLet's look at the covariance generated from `get_covariance_returns`.",
"_____no_output_____"
]
],
[
[
"covariance_returns = get_covariance_returns(returns)\ncovariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns)\n\ncovariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns))))\ncovariance_returns_correlation = pd.DataFrame(\n covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation),\n covariance_returns.index,\n covariance_returns.columns)\n\nproject_helper.plot_covariance_returns_correlation(\n covariance_returns_correlation,\n 'Covariance Returns Correlation Matrix')",
"_____no_output_____"
]
],
[
[
"### portfolio variance\nWe can write the portfolio variance $\\sigma^2_p = \\mathbf{x^T} \\mathbf{P} \\mathbf{x}$\n\nRecall that the $\\mathbf{x^T} \\mathbf{P} \\mathbf{x}$ is called the quadratic form.\nWe can use the cvxpy function `quad_form(x,P)` to get the quadratic form.\n\n### Distance from index weights\nWe want portfolio weights that track the index closely. So we want to minimize the distance between them.\nRecall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. Extending this to any number of dimensions is called the L2 norm. So: $\\sqrt{\\sum_{1}^{n}(weight_i - indexWeight_i)^2}$ Can also be written as $\\left \\| \\mathbf{x} - \\mathbf{index} \\right \\|_2$. There's a cvxpy function called [norm()](https://www.cvxpy.org/api_reference/cvxpy.atoms.other_atoms.html#norm)\n`norm(x, p=2, axis=None)`. The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights.\n\n### objective function\nWe want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights.\nWe also want to choose a `scale` constant, which is $\\lambda$ in the expression. \n\n$\\mathbf{x^T} \\mathbf{P} \\mathbf{x} + \\lambda \\left \\| \\mathbf{x} - \\mathbf{index} \\right \\|_2$\n\n\nThis lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio. If you choose a higher value for `scale` ($\\lambda$).\n\nWe can find the objective function using cvxpy `objective = cvx.Minimize()`. Can you guess what to pass into this function?\n\n",
"_____no_output_____"
],
[
"### constraints\nWe can also define our constraints in a list. For example, you'd want the weights to sum to one. So $\\sum_{1}^{n}x = 1$. You may also need to go long only, which means no shorting, so no negative weights. So $x_i >0 $ for all $i$. you could save a variable as `[x >= 0, sum(x) == 1]`, where x was created using `cvx.Variable()`.\n\n### optimization\nSo now that we have our objective function and constraints, we can solve for the values of $\\mathbf{x}$.\ncvxpy has the constructor `Problem(objective, constraints)`, which returns a `Problem` object.\n\nThe `Problem` object has a function solve(), which returns the minimum of the solution. In this case, this is the minimum variance of the portfolio.\n\nIt also updates the vector $\\mathbf{x}$.\n\nWe can check out the values of $x_A$ and $x_B$ that gave the minimum portfolio variance by using `x.value`",
"_____no_output_____"
]
],
[
[
"import cvxpy as cvx\n\ndef get_optimal_weights(covariance_returns, index_weights, scale=2.0):\n \"\"\"\n Find the optimal weights.\n\n Parameters\n ----------\n covariance_returns : 2 dimensional Ndarray\n The covariance of the returns\n index_weights : Pandas Series\n Index weights for all tickers at a period in time\n scale : int\n The penalty factor for weights the deviate from the index \n Returns\n -------\n x : 1 dimensional Ndarray\n The solution for x\n \"\"\"\n assert len(covariance_returns.shape) == 2\n assert len(index_weights.shape) == 1\n assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0]\n\n #TODO: Implement function\n \n m = len(index_weights)\n \n x = cvx.Variable(m)\n \n port_variance = cvx.quad_form(x,covariance_returns)\n \n norm = cvx.norm(x-index_weights,p=2)\n \n objective = cvx.Minimize(port_variance + scale * norm)\n \n constraints = [x >=0, sum(x)==1] \n \n result = cvx.Problem(objective,constraints).solve()\n \n return x.value\n\nproject_tests.test_get_optimal_weights(get_optimal_weights)",
"_____no_output_____"
]
],
[
[
"## Optimized Portfolio\nUsing the `get_optimal_weights` function, let's generate the optimal ETF weights without rebalanceing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights. We'll go with the average weights of the index over time.",
"_____no_output_____"
]
],
[
[
"raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1])\noptimal_single_rebalance_etf_weights = pd.DataFrame(\n np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)),\n returns.index,\n returns.columns)",
"_____no_output_____"
]
],
[
[
"With our ETF weights built, let's compare it to the index. Run the next cell to calculate the ETF returns and compare it to the index returns.",
"_____no_output_____"
]
],
[
[
"optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights)\noptim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns)\nproject_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index')\n\noptim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1))\nprint('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error))",
"_____no_output_____"
]
],
[
[
"## Rebalance Portfolio Over Time\nThe single optimized ETF portfolio used the same weights for the entire history. This might not be the optimal weights for the entire period. Let's rebalance the portfolio over the same period instead of using the same weights. Implement `rebalance_portfolio` to rebalance a portfolio.\n\nReblance the portfolio every n number of days, which is given as `shift_size`. When rebalancing, you should look back a certain number of days of data in the past, denoted as `chunk_size`. Using this data, compute the optoimal weights using `get_optimal_weights` and `get_covariance_returns`.",
"_____no_output_____"
]
],
[
[
"def rebalance_portfolio(returns, index_weights, shift_size, chunk_size):\n \"\"\"\n Get weights for each rebalancing of the portfolio.\n\n Parameters\n ----------\n returns : DataFrame\n Returns for each ticker and date\n index_weights : DataFrame\n Index weight for each ticker and date\n shift_size : int\n The number of days between each rebalance\n chunk_size : int\n The number of days to look in the past for rebalancing\n\n Returns\n -------\n all_rebalance_weights : list of Ndarrays\n The ETF weights for each point they are rebalanced\n \"\"\"\n assert returns.index.equals(index_weights.index)\n assert returns.columns.equals(index_weights.columns)\n assert shift_size > 0\n assert chunk_size >= 0\n \n #TODO: Implement function\n \n new_weights = []\n \n for size in range(chunk_size,len(index_weights),shift_size):\n \n new_returns = returns.iloc[size-chunk_size:size] \n new_index_weights= index_weights.iloc[size - 1]\n \n cov = get_covariance_returns(new_returns)\n \n weights = get_optimal_weights(cov,new_index_weights)\n \n new_weights.append(weights)\n \n return new_weights\n\nproject_tests.test_rebalance_portfolio(rebalance_portfolio)",
"Tests Passed\n"
],
[
"def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252):\n \"\"\"\n Calculage portfolio turnover.\n\n Parameters\n ----------\n all_rebalance_weights : list of Ndarrays\n The ETF weights for each point they are rebalanced\n shift_size : int\n The number of days between each rebalance\n rebalance_count : int\n Number of times the portfolio was rebalanced\n n_trading_days_in_year: int\n Number of trading days in a year\n\n Returns\n -------\n portfolio_turnover : float\n The portfolio turnover\n \"\"\"\n assert shift_size > 0\n assert rebalance_count > 0\n \n #TODO: Implement function\n \n weights = pd.DataFrame(all_rebalance_weights)\n \n diff = abs(weights-weights.shift(1))\n \n summation = np.sum(diff.sum(axis = 1))\n \n result = summation/rebalance_count * (n_trading_days_in_year/shift_size)\n \n return result\n\nproject_tests.test_get_portfolio_turnover(get_portfolio_turnover)",
"Tests Passed\n"
]
],
[
[
"Run the following cell to get the portfolio turnover from `get_portfolio turnover`.",
"_____no_output_____"
]
],
[
[
"chunk_size = 250\nshift_size = 5\nall_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size)",
"_____no_output_____"
],
[
"print(get_portfolio_turnover(all_rebalance_weights, shift_size, len(all_rebalance_weights) - 1))",
"16.72683266050277\n"
]
],
[
[
"That's it! You've built a smart beta portfolio in part 1 and did portfolio optimization in part 2. You can now submit your project.",
"_____no_output_____"
],
[
"## Submission\nNow that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a pass or not passed grade. You can continue to the next section while you wait for feedback.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
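"markdown",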
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
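"markdown",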
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e72a38bec25c78cbcf93bead3c470cf6ada02a93 | 8,405 | ipynb | Jupyter Notebook | notebook_utils/abstracts_2_vec_per_author.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | 1 | 2021-03-26T08:40:15.000Z | 2021-03-26T08:40:15.000Z | notebook_utils/abstracts_2_vec_per_author.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | null | null | null | notebook_utils/abstracts_2_vec_per_author.ipynb | omarsou/altegrad_challenge_hindex | 199e555a79919bd4bf2e1483c04458169f9a289b | [
"MIT"
] | null | null | null | 27.831126 | 212 | 0.557287 | [
[
[
"**In this notebook, we embed the abstract of the papers into a low dimensional space (using either sentencetransformers library or doc2vec from Gensim) and associate to each author his abstracts embedding**",
"_____no_output_____"
]
],
[
[
"!pip install -U sentence-transformers",
"_____no_output_____"
],
[
"from tqdm import tqdm_notebook as tqdm\nfrom sentence_transformers import SentenceTransformer\nimport pandas as pd\nimport gzip\nimport pickle\nimport numpy as np\nimport torch\nfrom gensim.models.doc2vec import Doc2Vec, TaggedDocument\nfrom string import digits, ascii_letters, punctuation, printable\nimport nltk\nfrom nltk.corpus import stopwords \nnltk.download('stopwords')",
"_____no_output_____"
],
[
"def save(object, filename, protocol = 0):\n \"\"\"Saves a compressed object to disk\n \"\"\"\n file = gzip.GzipFile(filename, 'wb')\n file.write(pickle.dumps(object, protocol))\n file.close()\ndef load_dataset_file(filename):\n with gzip.open(filename, \"rb\") as f:\n loaded_object = pickle.load(f)\n return loaded_object",
"_____no_output_____"
]
],
[
[
"# Load Abstracts",
"_____no_output_____"
]
],
[
[
"tmp = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/files_generated/preprocess_abstracts.txt')\n## Cleaning V2 (before conditioned on word with word.isalpha() as a condition)\nvalid = ascii_letters + digits + punctuation + printable\npaper_id = []\ntext = []\nfor key in tqdm(tmp.keys()):\n txt = ''.join([char for char in tmp[key] if char in valid])\n if len(txt) > 0:\n paper_id.append(key)\n text.append(txt)",
"_____no_output_____"
]
],
[
[
"# Abstract Embedding",
"_____no_output_____"
],
[
"## STSB Roberta Base",
"_____no_output_____"
]
],
[
[
"model = SentenceTransformer('stsb-roberta-base')\nmodel.cuda()\nembeddings = model.encode(text)",
"_____no_output_____"
],
[
"emb_per_paper = {}\nfor idx, id in enumerate(paper_id):\n emb_per_paper[id] = embeddings[idx]\nsave(emb_per_paper, '/content/drive/MyDrive/altegrad_datachallenge/embedding_per_paper_clean.txt')",
"_____no_output_____"
]
],
[
[
"## Doc2Vec",
"_____no_output_____"
]
],
[
[
"stop_words = set(stopwords.words('english')) \ndoc = []\nfor txt in tqdm(text):\n p = txt.split()\n p_clean = [l for l in p if l not in stop_words]\n doc.append(p_clean)\ndel text\n\ntagged_data = [TaggedDocument(d, [i]) for i, d in enumerate(doc)]\nmodel = Doc2Vec(tagged_data, vector_size = 256, window = 5, min_count = 2, epochs = 100, workers=10)",
"_____no_output_____"
],
[
"# Save the embedding\nemb_per_paper = {}\nfor idx, id_ in tqdm(enumerate(paper_id)):\n emb_per_paper[id_] = model.docvecs[idx]\nmodel.save('/content/drive/MyDrive/altegrad_datachallenge/word2vec.model') # Saving the model\nsave(emb_per_paper, '/content/drive/MyDrive/altegrad_datachallenge/doc2vec_paper_embedding.txt') # Saving the embedding",
"_____no_output_____"
]
],
[
[
"# Abstract Per Author Embedding\nAssociate each author with his articles",
"_____no_output_____"
]
],
[
[
"# read the file to create a dictionary with author key and paper list as value\nf = open(\"/content/drive/MyDrive/altegrad_datachallenge/author_papers.txt\",\"r\")\npapers_set = set()\nd = {}\nfor l in f:\n auth_paps = [paper_id.strip() for paper_id in l.split(\":\")[1].replace(\"[\",\"\").replace(\"]\",\"\").replace(\"\\n\",\"\").replace(\"\\'\",\"\").replace(\"\\\"\",\"\").split(\",\")]\n d[l.split(\":\")[0]] = auth_paps",
"_____no_output_____"
]
],
[
[
"## Using Roberta Embedding",
"_____no_output_____"
]
],
[
[
"emb_per_paper = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/embedding_per_paper_clean.txt')\ndf = open(\"/content/drive/MyDrive/altegrad_datachallenge/author_embedding_clean.csv\",\"w\")\nfor id_author in tqdm(d.keys()):\n tot_embedding = np.zeros(768)\n c = 0\n for id_paper in d[id_author]:\n try:\n tot_embedding += emb_per_paper[id_paper]\n c += 1\n except KeyError:\n continue\n if c==0:\n c=1\n tot_embeddding = np.append(tot_embedding/c, c)\n df.write(id_author+\",\"+\",\".join(map(lambda x:\"{:.8f}\".format(round(x, 8)), tot_embedding))+\"\\n\")\ndf.close()",
"_____no_output_____"
]
],
[
[
"## Using Doc2Vec",
"_____no_output_____"
]
],
[
[
"emb_per_paper = load_dataset_file('/content/drive/MyDrive/altegrad_datachallenge/doc2vec_paper_embedding.txt')\ndf = open(\"/content/drive/MyDrive/altegrad_datachallenge/doc2vec_author_embedding.csv\",\"w\")\nfor id_author in tqdm(d.keys()):\n tot_embedding = np.zeros(256)\n c = 0\n for id_paper in d[id_author]:\n try:\n tot_embedding += emb_per_paper[id_paper]\n c += 1\n except KeyError:\n continue\n if c==0:\n c=1\n tot_embeddding = np.append(tot_embedding/c, c)\n df.write(id_author+\",\"+\",\".join(map(lambda x:\"{:.8f}\".format(round(x, 8)), tot_embedding))+\"\\n\")\ndf.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
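"code",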
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72a3f9dca6331722cd677728cffefc9c833cab2 | 4,459 | ipynb | Jupyter Notebook | DAG_viz.ipynb | djliden/LSHTC3_DAG | deea0f64d6fc360001b4a546eb92312021ba3770 | [
"MIT"
] | null | null | null | DAG_viz.ipynb | djliden/LSHTC3_DAG | deea0f64d6fc360001b4a546eb92312021ba3770 | [
"MIT"
] | null | null | null | DAG_viz.ipynb | djliden/LSHTC3_DAG | deea0f64d6fc360001b4a546eb92312021ba3770 | [
"MIT"
] | null | null | null | 46.447917 | 2,400 | 0.549451 | [
[
[
"# Visualization using the Graphviz Library\n**Goal:** Visualize all (or some) of the DAG defining the LSHTC3 data.\n\nSource: https://graphviz.readthedocs.io/en/stable/examples.html\n\n## Load the Data",
"_____no_output_____"
]
],
[
[
"with open(\"./data/hierarchyWikipediaMedium.txt\", 'r') as edges:\n lines = []\n for line in edges.readlines():\n line = line.rstrip('\\r\\n')\n line = line.split(' ')\n lines.append(line)\n print(lines[0:100])",
"[['2143406', '2156813'], ['2143406', '2322682'], ['2143406', '143406'], ['2143406', '2255744'], ['2143406', '2235965'], ['2156813', '2440809'], ['2156813', '2159645'], ['2156813', '2267844'], ['2156813', '2271677'], ['2156813', '2152343'], ['2156813', '1008038'], ['2156813', '2310019'], ['2156813', '2266243'], ['2156813', '2426918'], ['2156813', '2334805'], ['2156813', '2132486'], ['2156813', '2078122'], ['2156813', '2057861'], ['2156813', '2271281'], ['2156813', '1012550'], ['2156813', '2042688'], ['2156813', '2373246'], ['2156813', '2244017'], ['2156813', '156813'], ['2322682', '2013402'], ['2322682', '2037186'], ['2322682', '2341442'], ['2322682', '2031546'], ['2322682', '2305909'], ['2322682', '2205397'], ['2322682', '1112766'], ['2322682', '2230320'], ['2322682', '2402750'], ['2322682', '2326915'], ['2322682', '2242317'], ['2322682', '2418165'], ['2322682', '2196567'], ['2322682', '2256232'], ['2322682', '2127748'], ['2322682', '2305967'], ['2322682', '2436982'], ['2322682', '2207289'], ['2322682', '2386980'], ['2322682', '2151145'], ['2322682', '2080615'], ['2322682', '322682'], ['2255744', '2312526'], ['2255744', '2324773'], ['2255744', '2127372'], ['2255744', '2091567'], ['2255744', '2274652'], ['2255744', '2095683'], ['2255744', '255744'], ['2255744', '2085095'], ['2255744', '2330841'], ['2255744', '2243139'], ['2255744', '2287636'], ['2255744', '2054454'], ['2255744', '2325699'], ['2255744', '348917'], ['2255744', '2268000'], ['2235965', '2106820'], ['2235965', '2439275'], ['2235965', '2361429'], ['2235965', '235965'], ['2235965', '352898'], ['2235965', '2222416'], ['2235965', '97796'], ['2013402', '2286461'], ['2013402', '2416362'], ['2013402', '2203059'], ['2013402', '2302996'], ['2013402', '2020098'], ['2013402', '2172874'], ['2013402', '2248537'], ['2013402', '2223711'], ['2013402', '2104131'], ['2013402', '2273641'], ['2013402', '2053438'], ['2013402', '2343145'], ['2013402', '2197030'], ['2013402', '2354268'], ['2013402', '190998'], ['2013402', '2235125'], ['2013402', '2429376'], ['2013402', '2154114'], ['2013402', '2277405'], ['2013402', '2104404'], ['2013402', '2273930'], ['2013402', '2441803'], ['2013402', '2224886'], ['2013402', '2204292'], ['2013402', '2192589'], ['2013402', '2342202'], ['2013402', '2073759'], ['2013402', '2320620'], ['2013402', '2274150'], ['2013402', '2393382'], ['2013402', '2153916'], ['2013402', '2103361']]\n"
],
[
"from graphviz import Graph\ng = Graph('G', filename='process.gv', engine='sfdp')\nfor edge in lines[6000:6100]:\n g.edge(edge[0], edge[1])\ng.view()",
"_____no_output_____"
]
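,
[
"# Note: `Graph` above draws undirected edges. Since the hierarchy is a DAG, a\n# `Digraph` keeps the parent -> child direction. A minimal sketch, assuming the\n# same `lines` edge list loaded above:\nfrom graphviz import Digraph\ndg = Digraph('G_directed', filename='process_directed.gv', engine='sfdp')\nfor edge in lines[6000:6100]:\n    dg.edge(edge[0], edge[1])  # tail = parent category, head = child\ndg.view()",
"_____no_output_____"
]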
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e72a4c45364aa1424020315361d41233b6648baa | 241,554 | ipynb | Jupyter Notebook | working_ipynbs/name_classification.ipynb | TamatiB/restitution_africa2021 | a5d640075813350386ff52180a51af2e1367a67f | [
"Apache-2.0"
] | null | null | null | working_ipynbs/name_classification.ipynb | TamatiB/restitution_africa2021 | a5d640075813350386ff52180a51af2e1367a67f | [
"Apache-2.0"
] | null | null | null | working_ipynbs/name_classification.ipynb | TamatiB/restitution_africa2021 | a5d640075813350386ff52180a51af2e1367a67f | [
"Apache-2.0"
] | null | null | null | 41.052685 | 94 | 0.288793 | [
[
[
"The aim of this is to be able to classify names as either being African in origin or not",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"import ast",
"_____no_output_____"
],
[
"from ethnicolr import pred_wiki_ln, pred_wiki_name",
"Using TensorFlow backend.\n"
]
],
[
[
"# Load puplications, get authors and how many publications of theirs we have",
"_____no_output_____"
]
],
[
[
"from collections import Counter",
"_____no_output_____"
],
[
"data = pd.read_csv(\"bb_pulications.csv\")\ndata['author'] = data['bib'].apply(lambda x: ast.literal_eval(x)['author'])\ndata['year'] = data['bib'].apply(lambda x: ast.literal_eval(x)['pub_year'])\ndata['title'] = data['bib'].apply(lambda x: ast.literal_eval(x)['title'])",
"_____no_output_____"
]
],
[
[
"Clean author names a little bit",
"_____no_output_____"
]
],
[
[
"def clean_author(x):\n \"\"\"\n x list of authors for a publication\n \"\"\"\n clean_list = []\n for item in x:\n clean_list.append(item.lower())\n return clean_list",
"_____no_output_____"
],
[
"data['author_cleaned'] = data['author'].apply(lambda x: clean_author(x))",
"_____no_output_____"
],
[
"authors = data['author_cleaned'].sum()",
"_____no_output_____"
],
[
"Counter(authors)",
"_____no_output_____"
]
],
[
[
"Get surnames",
"_____no_output_____"
]
],
[
[
"# first have to searate from initials and then put back togetehr again\nsurnames = []\nfor name in authors:\n surname = name.split(' ')[1:]\n surname_str = ' '.join(surname) \n surnames.append(surname_str)",
"_____no_output_____"
],
[
"Counter(surnames)",
"_____no_output_____"
]
],
[
[
"# Hokay, lets try this Classifier",
"_____no_output_____"
]
],
[
[
"#drop duplicates\nsurnames = list(set(surnames))",
"_____no_output_____"
],
[
"df = pd.DataFrame(surnames, columns=[\"surnames\"])",
"_____no_output_____"
],
[
"preds = pred_wiki_ln(df, \"surnames\")",
"_____no_output_____"
],
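[
"# pred_wiki_name (imported above but unused) scores first and last name together.\n# A minimal sketch with made-up names, assuming the (df, lname_col, fname_col)\n# argument order of ethnicolr:\ndf_full = pd.DataFrame({'first': ['thandi', 'john'], 'last': ['mbeki', 'smith']})\npred_wiki_name(df_full, 'last', 'first')",
"_____no_output_____"
],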
[
"preds",
"_____no_output_____"
],
[
"preds['race'].value_counts()",
"_____no_output_____"
],
[
"preds[preds['race'] == 'GreaterAfrican,Africans']",
"_____no_output_____"
]
],
[
[
"Sweet, actually doesnt look like it does too bad. Nice",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
e72a56b11d6d3e9ec97d3362eed5b7ce307b3cdb | 62,008 | ipynb | Jupyter Notebook | examples/cd_text_imdb.ipynb | cliveseldon/alibi-detect | cf3d30790348709716b5202f1d941bf7eaf03667 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | examples/cd_text_imdb.ipynb | cliveseldon/alibi-detect | cf3d30790348709716b5202f1d941bf7eaf03667 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | examples/cd_text_imdb.ipynb | cliveseldon/alibi-detect | cf3d30790348709716b5202f1d941bf7eaf03667 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | 38.323857 | 853 | 0.604406 | [
[
[
"# Text drift detection on IMDB movie reviews\n\n### Method\n\nWe detect drift on text data using both the [Maximum Mean Discrepancy](https://docs.seldon.io/projects/alibi-detect/en/latest/methods/mmddrift.html) and [Kolmogorov-Smirnov (K-S)](https://docs.seldon.io/projects/alibi-detect/en/latest/methods/ksdrift.html) detectors. In this example notebook we will focus on detecting covariate shift $\\Delta p(x)$ as detecting predicted label distribution drift does not differ from other modalities (check [K-S](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/cd_ks_cifar10.html#BBSDs) and [MMD](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/cd_mmd_cifar10.html#BBSDs) drift on CIFAR-10).\n\nIt becomes however a little bit more involved when we want to pick up input data drift $\\Delta p(x)$. When we deal with tabular or image data, we can either directly apply the two sample hypothesis test on the input or do the test after a preprocessing step with for instance a randomly initialized encoder as proposed in [Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift](https://arxiv.org/abs/1810.11953) (they call it an Untrained AutoEncoder or *UAE*). It is not as straightforward when dealing with text, both in string or tokenized format as they don't directly represent the semantics of the input.\n\nAs a result, we extract (contextual) embeddings for the text and detect drift on those. This procedure has a significant impact on the type of drift we detect. Strictly speaking we are not detecting $\\Delta p(x)$ anymore since the whole training procedure (objective function, training data etc) for the (pre)trained embeddings has an impact on the embeddings we extract.\n\nThe library contains functionality to leverage pre-trained embeddings from [HuggingFace's transformer package](https://github.com/huggingface/transformers) but also allows you to easily use your own embeddings of choice. Both options are illustrated with examples in this notebook.\n\n### Backend\n\nThe method works with both the **PyTorch** and **TensorFlow** frameworks for the statistical tests and preprocessing steps. Alibi Detect does however not install PyTorch for you. \nCheck the [PyTorch docs](https://pytorch.org/) how to do this.\n\n### Dataset\n\nBinary sentiment classification [dataset](https://ai.stanford.edu/~amaas/data/sentiment/) containing $25,000$ movie reviews for training and $25,000$ for testing. Install the `nlp` library to fetch the dataset:\n\n`pip install nlp`",
"_____no_output_____"
]
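,
[
"As a brief reminder of the statistic behind the MMD detector (following Gretton et al.): for samples $x, x' \\sim P$ and $y, y' \\sim Q$ and a kernel $k$, the squared MMD is estimated as\n\n$$\\widehat{\\mathrm{MMD}}^2(P, Q) = \\mathbb{E}[k(x, x')] + \\mathbb{E}[k(y, y')] - 2\\,\\mathbb{E}[k(x, y)],$$\n\nand a permutation test on this statistic yields the p-values reported below.",
"_____no_output_____"
]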
],
[
[
"import nlp\nimport numpy as np\nimport os\nimport tensorflow as tf\nfrom transformers import AutoTokenizer\nfrom alibi_detect.cd import KSDrift, MMDDrift\nfrom alibi_detect.utils.saving import save_detector, load_detector",
"_____no_output_____"
]
],
[
[
"## Load tokenizer",
"_____no_output_____"
]
],
[
[
"model_name = 'bert-base-cased'\ntokenizer = AutoTokenizer.from_pretrained(model_name)",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
]
],
[
[
"def load_dataset(dataset: str, split: str = 'test'):\n data = nlp.load_dataset(dataset)\n X, y = [], []\n for x in data[split]:\n X.append(x['text'])\n y.append(x['label'])\n X = np.array(X)\n y = np.array(y)\n return X, y",
"_____no_output_____"
],
[
"X, y = load_dataset('imdb', split='train')\nprint(X.shape, y.shape)",
"INFO:nlp.load:Checking /home/avl/.cache/huggingface/datasets/d3b7716978cb901261e59327d43b04c52d6d29e50eeac39bea0816865a584081.7c39fd6270c5ee55bcf2e4de23af77ef299e0df65be3f3e84454dcef7175844a.py for additional imports.\nINFO:filelock:Lock 140070637965264 acquired on /home/avl/.cache/huggingface/datasets/d3b7716978cb901261e59327d43b04c52d6d29e50eeac39bea0816865a584081.7c39fd6270c5ee55bcf2e4de23af77ef299e0df65be3f3e84454dcef7175844a.py.lock\nINFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/imdb/imdb.py at /home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/nlp/datasets/imdb\nINFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/imdb/imdb.py at /home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743\nINFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/imdb/imdb.py to /home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.py\nINFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/imdb/dataset_infos.json to /home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/dataset_infos.json\nINFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/imdb/imdb.py at /home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.json\nINFO:filelock:Lock 140070637965264 released on /home/avl/.cache/huggingface/datasets/d3b7716978cb901261e59327d43b04c52d6d29e50eeac39bea0816865a584081.7c39fd6270c5ee55bcf2e4de23af77ef299e0df65be3f3e84454dcef7175844a.py.lock\nINFO:nlp.builder:No config specified, defaulting to first: imdb/plain_text\nINFO:nlp.info:Loading Dataset Infos from /home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743\nINFO:nlp.builder:Overwrite dataset info from restored data version.\nINFO:nlp.info:Loading Dataset info from /home/avl/.cache/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743\nINFO:nlp.builder:Reusing dataset imdb (/home/avl/.cache/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743)\nINFO:nlp.builder:Constructing Dataset for split train, test, unsupervised, from /home/avl/.cache/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743\nINFO:nlp.utils.info_utils:All the checksums matched successfully for post processing resources\nINFO:nlp.utils.info_utils:All the checksums matched successfully for post processing resources\nINFO:nlp.utils.info_utils:All the checksums matched successfully for post processing resources\n"
]
],
[
[
"Let's take a look at respectively a negative and positive review:",
"_____no_output_____"
]
],
[
[
"labels = ['Negative', 'Positive']\nprint(labels[y[-1]])\nprint(X[-1])",
"Negative\nThis is one of the dumbest films, I've ever seen. It rips off nearly ever type of thriller and manages to make a mess of them all.<br /><br />There's not a single good line or character in the whole mess. If there was a plot, it was an afterthought and as far as acting goes, there's nothing good to say so Ill say nothing. I honestly cant understand how this type of nonsense gets produced and actually released, does somebody somewhere not at some stage think, 'Oh my god this really is a load of shite' and call it a day. Its crap like this that has people downloading illegally, the trailer looks like a completely different film, at least if you have download it, you haven't wasted your time or money Don't waste your time, this is painful.\n"
],
[
"print(labels[y[2]])\nprint(X[2])",
"Positive\nBrilliant over-acting by Lesley Ann Warren. Best dramatic hobo lady I have ever seen, and love scenes in clothes warehouse are second to none. The corn on face is a classic, as good as anything in Blazing Saddles. The take on lawyers is also superb. After being accused of being a turncoat, selling out his boss, and being dishonest the lawyer of Pepto Bolt shrugs indifferently \"I'm a lawyer\" he says. Three funny words. Jeffrey Tambor, a favorite from the later Larry Sanders show, is fantastic here too as a mad millionaire who wants to crush the ghetto. His character is more malevolent than usual. The hospital scene, and the scene where the homeless invade a demolition site, are all-time classics. Look for the legs scene and the two big diggers fighting (one bleeds). This movie gets better each time I see it (which is quite often).\n"
]
],
[
[
"We split the original test set in a reference dataset and a dataset which should not be rejected under the *H0* of the statistical test. We also create imbalanced datasets and inject selected words in the reference set.",
"_____no_output_____"
]
],
[
[
"def random_sample(X: np.ndarray, y: np.ndarray, proba_zero: float, n: int):\n if len(y.shape) == 1:\n idx_0 = np.where(y == 0)[0]\n idx_1 = np.where(y == 1)[0]\n else:\n idx_0 = np.where(y[:, 0] == 1)[0]\n idx_1 = np.where(y[:, 1] == 1)[0]\n n_0, n_1 = int(n * proba_zero), int(n * (1 - proba_zero))\n idx_0_out = np.random.choice(idx_0, n_0, replace=False)\n idx_1_out = np.random.choice(idx_1, n_1, replace=False)\n X_out = np.concatenate([X[idx_0_out], X[idx_1_out]])\n y_out = np.concatenate([y[idx_0_out], y[idx_1_out]])\n return X_out, y_out\n\n\ndef padding_last(x: np.ndarray, seq_len: int) -> np.ndarray:\n try: # try not to replace padding token\n last_token = np.where(x == 0)[0][0]\n except: # no padding\n last_token = seq_len - 1\n return 1, last_token\n\n\ndef padding_first(x: np.ndarray, seq_len: int) -> np.ndarray:\n try: # try not to replace padding token\n first_token = np.where(x == 0)[0][-1] + 2\n except: # no padding\n first_token = 0\n return first_token, seq_len - 1\n\n\ndef inject_word(token: int, X: np.ndarray, perc_chg: float, padding: str = 'last'):\n seq_len = X.shape[1]\n n_chg = int(perc_chg * .01 * seq_len)\n X_cp = X.copy()\n for _ in range(X.shape[0]):\n if padding == 'last':\n first_token, last_token = padding_last(X_cp[_, :], seq_len)\n else:\n first_token, last_token = padding_first(X_cp[_, :], seq_len)\n if last_token <= n_chg:\n choice_len = seq_len\n else:\n choice_len = last_token\n idx = np.random.choice(np.arange(first_token, choice_len), n_chg, replace=False)\n X_cp[_, idx] = token\n return X_cp",
"_____no_output_____"
]
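,
[
"# A quick sanity check of inject_word on a toy token matrix: with seq_len=10\n# and perc_chg=20., two tokens per row should be replaced by the token id 999.\ntoy = np.arange(1, 31).reshape(3, 10)\ntoy_pert = inject_word(999, toy, 20.)\nprint((toy_pert == 999).sum(axis=1))  # expected: [2 2 2]",
"_____no_output_____"
]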
],
[
[
"Reference, *H0* and imbalanced data:",
"_____no_output_____"
]
],
[
[
"# proba_zero = fraction with label 0 (=negative sentiment)\nn_sample = 1000\nX_ref = random_sample(X, y, proba_zero=.5, n=n_sample)[0]\nX_h0 = random_sample(X, y, proba_zero=.5, n=n_sample)[0]\nn_imb = [.1, .9]\nX_imb = {_: random_sample(X, y, proba_zero=_, n=n_sample)[0] for _ in n_imb}",
"_____no_output_____"
]
],
[
[
"Inject words in reference data:",
"_____no_output_____"
]
],
[
[
"words = ['fantastic', 'good', 'bad', 'horrible']\nperc_chg = [1., 5.] # % of tokens to change in an instance\n\nwords_tf = tokenizer(words)['input_ids']\nwords_tf = [token[1:-1][0] for token in words_tf]\nmax_len = 100\ntokens = tokenizer(list(X_ref), pad_to_max_length=True, \n max_length=max_len, return_tensors='tf')\nX_word = {}\nfor i, w in enumerate(words_tf):\n X_word[words[i]] = {}\n for p in perc_chg:\n x = inject_word(w, tokens['input_ids'].numpy(), p)\n dec = tokenizer.batch_decode(x, **dict(skip_special_tokens=True))\n X_word[words[i]][p] = np.array(dec)",
"Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\n/home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/transformers/tokenization_utils_base.py:2079: FutureWarning:\n\nThe `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).\n\n"
]
],
[
[
"## Preprocessing\n\nFirst we need to specify the type of embedding we want to extract from the BERT model. We can extract embeddings from the ...\n\n- **pooler_output**: Last layer hidden-state of the first token of the sequence (classification token; CLS) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training. **Note**: this output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.\n\n- **last_hidden_state**: Sequence of hidden states at the output of the last layer of the model, averaged over the tokens.\n\n- **hidden_state**: Hidden states of the model at the output of each layer, averaged over the tokens.\n\n- **hidden_state_cls**: See *hidden_state* but use the CLS token output.\n\nIf *hidden_state* or *hidden_state_cls* is used as embedding type, you also need to pass the layer numbers used to extract the embedding from. As an example we extract embeddings from the last 8 hidden states.",
"_____no_output_____"
]
],
[
[
"from alibi_detect.models.tensorflow import TransformerEmbedding\n\nemb_type = 'hidden_state'\nn_layers = 8\nlayers = [-_ for _ in range(1, n_layers + 1)]\n\nembedding = TransformerEmbedding(model_name, emb_type, layers)",
"Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls']\n- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\nAll the layers of TFBertModel were initialized from the model checkpoint at bert-base-cased.\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.\n"
]
],
[
[
"Let's check what an embedding looks like:",
"_____no_output_____"
]
],
[
[
"tokens = tokenizer(list(X[:5]), pad_to_max_length=True, \n max_length=max_len, return_tensors='tf')\nx_emb = embedding(tokens)\nprint(x_emb.shape)",
"(5, 768)\n"
]
],
[
[
"So the BERT model's embedding space used by the drift detector consists of a $768$-dimensional vector for each instance. We will therefore first apply a dimensionality reduction step with an Untrained AutoEncoder (*UAE*) before conducting the statistical hypothesis test. We use the embedding model as the input for the UAE which then projects the embedding on a lower dimensional space.",
"_____no_output_____"
]
],
[
[
"tf.random.set_seed(0)",
"_____no_output_____"
],
[
"from alibi_detect.cd.tensorflow import UAE\n\nenc_dim = 32\nshape = (x_emb.shape[1],)\n\nuae = UAE(input_layer=embedding, shape=shape, enc_dim=enc_dim)",
"_____no_output_____"
]
],
[
[
"Let's test this again:",
"_____no_output_____"
]
],
[
[
"emb_uae = uae(tokens)\nprint(emb_uae.shape)",
"(5, 32)\n"
]
],
[
[
"## K-S detector\n### Initialize\n\nWe proceed to initialize the drift detector. From here on the detector works the same as for other modalities such as images. Please check the [images](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/cd_ks_cifar10.html) example or the [K-S detector documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/methods/ksdrift.html) for more information about each of the possible parameters.",
"_____no_output_____"
]
],
[
[
"from functools import partial\nfrom alibi_detect.cd.tensorflow import preprocess_drift\n\n# define preprocessing function\npreprocess_fn = partial(preprocess_drift, model=uae, tokenizer=tokenizer, \n max_len=max_len, batch_size=32)\n\n# initialize detector\ncd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn, input_shape=(max_len,))\n\n# we can also save/load an initialised detector\nfilepath = 'my_path' # change to directory where detector is saved\nsave_detector(cd, filepath)\ncd = load_detector(filepath)",
"WARNING:alibi_detect.utils.saving:Directory my_path does not exist and is now created.\n"
]
],
[
[
"### Detect drift\n\nLet’s first check if drift occurs on a similar sample from the training set as the reference data.",
"_____no_output_____"
]
],
[
[
"preds_h0 = cd.predict(X_h0)\nlabels = ['No!', 'Yes!']\nprint('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))\nprint('p-value: {}'.format(preds_h0['data']['p_val']))",
"Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\n"
]
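,
[
"# The K-S test runs per feature (the 32 UAE dimensions), so `is_drift` aggregates\n# the returned p-values. A sketch of that aggregation, assuming the detector's\n# default Bonferroni correction:\nthreshold = .05 / preds_h0['data']['p_val'].shape[0]\nprint('per-feature threshold: {}'.format(threshold))\nprint('drift by Bonferroni: {}'.format(int((preds_h0['data']['p_val'] < threshold).any())))",
"_____no_output_____"
]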
],
[
[
"Detect drift on imbalanced and perturbed datasets:",
"_____no_output_____"
]
],
[
[
"for k, v in X_imb.items():\n preds = cd.predict(v)\n print('% negative sentiment {}'.format(k * 100))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"% negative sentiment 10.0\nDrift? Yes!\np-value: [4.32430744e-01 7.22554982e-01 6.85231388e-01 6.09918952e-01\n 1.81119651e-01 6.20218972e-03 1.22740539e-03 1.12110768e-02\n 1.63965786e-04 3.69972497e-01 5.36054313e-01 8.59294355e-01\n 8.27956855e-01 1.81119651e-01 4.00471032e-01 6.47557259e-01\n 7.76214674e-02 1.20504074e-01 1.33834302e-01 2.24637091e-02\n 6.47557259e-01 8.69054198e-02 7.76214674e-02 7.94394433e-01\n 1.20504074e-01 3.40991944e-01 5.72654784e-01 2.87693232e-01\n 2.87693232e-01 5.72654784e-01 1.99518353e-01 2.87693232e-01]\n\n% negative sentiment 90.0\nDrift? Yes!\np-value: [5.9607941e-01 3.1773445e-01 1.0167704e-01 6.0131598e-01 4.7765803e-03\n 7.8468665e-02 5.4378760e-01 3.1890289e-04 4.7273561e-02 2.3027392e-01\n 3.5409841e-01 2.2440368e-01 4.5503160e-01 8.8078308e-01 7.5261140e-01\n 6.5092611e-01 3.8073459e-01 5.4552953e-04 6.6255075e-01 6.9101667e-01\n 3.9483134e-02 8.2559012e-02 3.2168049e-01 1.9095013e-01 7.0450002e-01\n 1.5517529e-06 9.7765464e-01 9.8889194e-02 6.3466263e-01 2.9970827e-02\n 1.7626658e-01 5.0656848e-02]\n\n"
],
[
"for w, probas in X_word.items():\n for p, v in probas.items():\n preds = cd.predict(v)\n print('Word: {} -- % perturbed: {}'.format(w, p))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"Word: fantastic -- % perturbed: 1.0\nDrift? No!\np-value: [0.9540582 0.01293455 0.26338065 0.722555 0.34099194 0.04281518\n 0.04841881 0.31356168 0.14833806 0.96887016 0.85929435 0.50035924\n 0.00532228 0.8879386 0.9998709 0.99870795 0.85929435 0.9882611\n 0.06155144 0.7590978 0.79439443 0.2406036 0.10828251 0.722555\n 0.28769323 0.18111965 0.9134755 0.996931 0.18111965 0.07762147\n 0.9540582 0.5726548 ]\n\nWord: fantastic -- % perturbed: 5.0\nDrift? Yes!\np-value: [4.55808453e-03 4.14164800e-17 2.43227714e-08 6.85231388e-01\n 3.18301190e-08 1.26629300e-17 1.10562748e-09 1.71140861e-02\n 1.69780876e-14 3.50604125e-04 1.48931602e-02 1.84965307e-10\n 0.00000000e+00 1.48931602e-02 9.93654132e-01 1.08282514e-01\n 3.40991944e-01 2.19330013e-01 2.14098059e-19 7.76214674e-02\n 3.25786677e-05 1.69780876e-14 1.09291570e-20 6.15514442e-02\n 8.36122004e-23 4.56308130e-10 1.20504074e-01 4.00471032e-01\n 2.86754206e-33 7.08821891e-21 2.26972293e-06 7.42663324e-05]\n\nWord: good -- % perturbed: 1.0\nDrift? Yes!\np-value: [2.1933001e-01 9.1347551e-01 1.3383430e-01 9.9954331e-01 2.6338065e-01\n 9.9954331e-01 6.0991895e-01 8.8793862e-01 9.6887016e-01 5.7265478e-01\n 9.3558097e-01 5.3605431e-01 9.7104527e-02 9.9870795e-01 9.3558097e-01\n 4.6576622e-01 9.9987090e-01 2.6338065e-01 9.9693102e-01 1.1211077e-02\n 9.3558097e-01 9.1347551e-01 6.0991895e-01 7.2255498e-01 6.0991895e-01\n 9.9870795e-01 9.6887016e-01 9.9870795e-01 5.7265478e-01 4.2185336e-04\n 9.9365413e-01 9.8016179e-01]\n\nWord: good -- % perturbed: 5.0\nDrift? Yes!\np-value: [2.86769516e-16 9.98707950e-01 4.91978077e-19 7.94394433e-01\n 4.64324268e-09 7.22554982e-01 3.50604125e-04 3.40991944e-01\n 1.34916729e-04 2.09715821e-11 6.47557259e-01 4.21853358e-04\n 1.65277426e-33 5.46463318e-02 3.40991944e-01 1.84965307e-10\n 5.36054313e-01 1.00300261e-10 9.80161786e-01 1.69780876e-14\n 9.13475513e-01 3.27475419e-07 2.54783203e-07 3.32311448e-03\n 1.34916729e-04 1.20504074e-01 6.15514442e-02 7.94394433e-01\n 1.18559271e-07 0.00000000e+00 2.82894098e-03 1.64079204e-01]\n\nWord: bad -- % perturbed: 1.0\nDrift? No!\np-value: [0.6852314 0.40047103 0.1338343 0.9882611 0.50035924 0.9882611\n 0.9999727 0.8879386 0.8879386 0.46576622 0.8879386 0.85929435\n 0.01962691 0.9540582 0.9998709 0.40047103 0.21933001 0.01962691\n 0.6852314 0.18111965 0.31356168 0.6852314 0.14833806 0.9134755\n 0.93558097 0.99870795 0.9999727 0.99365413 0.722555 0.21933001\n 0.06155144 0.9998709 ]\n\nWord: bad -- % perturbed: 5.0\nDrift? Yes!\np-value: [8.2482254e-10 2.8794037e-11 8.4967083e-18 2.4060360e-01 3.2578668e-05\n 2.4060360e-01 5.7265478e-01 1.4833806e-01 3.6098195e-06 1.3007273e-15\n 3.1356168e-01 4.1571425e-08 1.0593816e-42 7.2607823e-04 2.4060360e-01\n 1.7114086e-02 1.8548947e-08 4.5879536e-21 1.8111965e-01 1.9783097e-07\n 8.4248814e-24 4.6432427e-09 2.8676952e-16 7.2131259e-03 4.3243074e-01\n 9.1347551e-01 1.6407920e-01 1.4563050e-03 5.3955968e-11 6.1319246e-16\n 4.9197808e-19 9.8016179e-01]\n\nWord: horrible -- % perturbed: 1.0\nDrift? Yes!\np-value: [0.26338065 0.9995433 0.99870795 0.9540582 0.7590978 0.722555\n 0.9999727 0.9134755 0.00145631 0.99870795 0.9995433 0.64755726\n 0.09710453 0.99870795 0.5360543 0.99870795 0.04281518 0.1338343\n 0.82795686 0.1338343 0.1640792 0.9134755 0.43243074 0.9801618\n 0.9995433 0.1338343 0.99365413 0.9999727 0.9998709 0.00203786\n 0.1640792 0.7590978 ]\n\nWord: horrible -- % perturbed: 5.0\nDrift? 
Yes!\np-value: [1.26629300e-17 5.36054313e-01 1.20504074e-01 8.27956855e-01\n 7.26078229e-04 9.69783217e-03 4.84188050e-02 6.07078255e-04\n 3.21035236e-38 4.01514189e-05 2.87693232e-01 1.84965307e-10\n 5.41929480e-39 1.64079204e-01 1.63965786e-04 1.48338065e-01\n 1.41174699e-08 1.98871276e-04 4.56308130e-10 1.95523170e-16\n 6.34892210e-31 2.54783203e-07 9.03489017e-17 9.80161786e-01\n 4.15714254e-08 4.95470906e-14 9.13475513e-01 1.98871276e-04\n 5.62237052e-13 0.00000000e+00 2.79573150e-17 1.71140861e-02]\n\n"
]
],
[
[
"## MMD TensorFlow detector\n\n### Initialize\n\nAgain check the [images](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/cd_mmd_cifar10.html) example or the [MMD detector documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/methods/mmddrift.html) for more information about each of the possible parameters.",
"_____no_output_____"
]
],
[
[
"cd = MMDDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn, \n n_permutations=100, input_shape=(max_len,))",
"_____no_output_____"
]
],
[
[
"### Detect drift\n\n*H0*:",
"_____no_output_____"
]
],
[
[
"preds_h0 = cd.predict(X_h0)\nlabels = ['No!', 'Yes!']\nprint('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))\nprint('p-value: {}'.format(preds_h0['data']['p_val']))",
"Drift? No!\np-value: 0.9\n"
]
],
[
[
"Imbalanced data:",
"_____no_output_____"
]
],
[
[
"for k, v in X_imb.items():\n preds = cd.predict(v)\n print('% negative sentiment {}'.format(k * 100))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"% negative sentiment 10.0\nDrift? Yes!\np-value: 0.0\n\n% negative sentiment 90.0\nDrift? Yes!\np-value: 0.0\n\n"
]
],
[
[
"Perturbed data:",
"_____no_output_____"
]
],
[
[
"for w, probas in X_word.items():\n for p, v in probas.items():\n preds = cd.predict(v)\n print('Word: {} -- % perturbed: {}'.format(w, p))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"Word: fantastic -- % perturbed: 1.0\nDrift? Yes!\np-value: 0.01\n\nWord: fantastic -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\nWord: good -- % perturbed: 1.0\nDrift? No!\np-value: 0.57\n\nWord: good -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\nWord: bad -- % perturbed: 1.0\nDrift? No!\np-value: 0.4\n\nWord: bad -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\nWord: horrible -- % perturbed: 1.0\nDrift? No!\np-value: 0.08\n\nWord: horrible -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\n"
]
],
[
[
"## MMD PyTorch detector\n\n## Initialize\n\nWe can run the same detector with *PyTorch* backend for both the preprocessing step and MMD implementation:",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\n\n# set random seed and device\nseed = 0\ntorch.manual_seed(seed)\ntorch.cuda.manual_seed(seed)\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nprint(device)",
"cuda\n"
],
[
"from alibi_detect.cd.pytorch import preprocess_drift\nfrom alibi_detect.models.pytorch import TransformerEmbedding\n\nembedding_pt = TransformerEmbedding(model_name, emb_type, layers)\n\nmodel = nn.Sequential(\n embedding_pt,\n nn.Linear(768, 256),\n nn.ReLU(),\n nn.Linear(256, enc_dim)\n).to(device).eval()\n\n# define preprocessing function\npreprocess_fn = partial(preprocess_drift, model=model, tokenizer=tokenizer, \n max_len=max_len, batch_size=32)\n\n# initialise drift detector\ncd = MMDDrift(X_ref, backend='pytorch', p_val=.05, preprocess_fn=preprocess_fn, \n n_permutations=100, input_shape=(max_len,))",
"INFO:filelock:Lock 140068554309968 acquired on /home/avl/.cache/huggingface/transformers/092cc582560fc3833e556b3f833695c26343cb54b7e88cd02d40821462a74999.1f48cab6c959fc6c360d22bea39d06959e90f5b002e77e836d2da45464875cda.lock\n"
]
],
[
[
"### Detect drift\n\n*H0*:",
"_____no_output_____"
]
],
[
[
"preds_h0 = cd.predict(X_h0)\nlabels = ['No!', 'Yes!']\nprint('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))\nprint('p-value: {}'.format(preds_h0['data']['p_val']))",
"Drift? No!\np-value: 0.3400000035762787\n"
]
],
[
[
"Imbalanced data:",
"_____no_output_____"
]
],
[
[
"for k, v in X_imb.items():\n preds = cd.predict(v)\n print('% negative sentiment {}'.format(k * 100))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"% negative sentiment 10.0\nDrift? Yes!\np-value: 0.0\n\n% negative sentiment 90.0\nDrift? Yes!\np-value: 0.0\n\n"
]
],
[
[
"Perturbed data:",
"_____no_output_____"
]
],
[
[
"for w, probas in X_word.items():\n for p, v in probas.items():\n preds = cd.predict(v)\n print('Word: {} -- % perturbed: {}'.format(w, p))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"Word: fantastic -- % perturbed: 1.0\nDrift? No!\np-value: 0.07999999821186066\n\nWord: fantastic -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\nWord: good -- % perturbed: 1.0\nDrift? No!\np-value: 0.7099999785423279\n\nWord: good -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\nWord: bad -- % perturbed: 1.0\nDrift? No!\np-value: 0.12999999523162842\n\nWord: bad -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\nWord: horrible -- % perturbed: 1.0\nDrift? No!\np-value: 0.33000001311302185\n\nWord: horrible -- % perturbed: 5.0\nDrift? Yes!\np-value: 0.0\n\n"
]
],
[
[
"## Train embeddings from scratch\n\nSo far we used pre-trained embeddings from a BERT model. We can however also use embeddings from a model trained from scratch. First we define and train a simple classification model consisting of an embedding and LSTM layer in *TensorFlow*.\n\n### Load data and train model",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras.datasets import imdb, reuters\nfrom tensorflow.keras.layers import Dense, Embedding, Input, LSTM\nfrom tensorflow.keras.preprocessing import sequence\nfrom tensorflow.keras.utils import to_categorical\n\nINDEX_FROM = 3\nNUM_WORDS = 10000\n\n\ndef print_sentence(tokenized_sentence: str, id2w: dict):\n print(' '.join(id2w[_] for _ in tokenized_sentence))\n print('')\n print(tokenized_sentence)\n\n\ndef mapping_word_id(data):\n w2id = data.get_word_index()\n w2id = {k: (v + INDEX_FROM) for k, v in w2id.items()}\n w2id[\"<PAD>\"] = 0\n w2id[\"<START>\"] = 1\n w2id[\"<UNK>\"] = 2\n w2id[\"<UNUSED>\"] = 3\n id2w = {v: k for k, v in w2id.items()}\n return w2id, id2w\n\n\ndef get_dataset(dataset: str = 'imdb', max_len: int = 100):\n if dataset == 'imdb':\n data = imdb\n elif dataset == 'reuters':\n data = reuters\n else:\n raise NotImplementedError\n\n w2id, id2w = mapping_word_id(data)\n\n (X_train, y_train), (X_test, y_test) = data.load_data(\n num_words=NUM_WORDS, index_from=INDEX_FROM)\n X_train = sequence.pad_sequences(X_train, maxlen=max_len)\n X_test = sequence.pad_sequences(X_test, maxlen=max_len)\n y_train, y_test = to_categorical(y_train), to_categorical(y_test)\n\n return (X_train, y_train), (X_test, y_test), (w2id, id2w)\n\n\ndef imdb_model(X: np.ndarray, num_words: int = 100, emb_dim: int = 128,\n lstm_dim: int = 128, output_dim: int = 2) -> tf.keras.Model:\n inputs = Input(shape=(X.shape[1:]), dtype=tf.float32)\n x = Embedding(num_words, emb_dim)(inputs)\n x = LSTM(lstm_dim, dropout=.5)(x)\n outputs = Dense(output_dim, activation=tf.nn.softmax)(x)\n model = tf.keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy']\n )\n return model",
"_____no_output_____"
]
],
[
[
"Load and tokenize data:",
"_____no_output_____"
]
],
[
[
"(X_train, y_train), (X_test, y_test), (word2token, token2word) = \\\n get_dataset(dataset='imdb', max_len=max_len)",
"<string>:6: VisibleDeprecationWarning:\n\nCreating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n\n/home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/tensorflow/python/keras/datasets/imdb.py:159: VisibleDeprecationWarning:\n\nCreating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n\n/home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/tensorflow/python/keras/datasets/imdb.py:160: VisibleDeprecationWarning:\n\nCreating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n\n"
]
],
[
[
"Let's check out an instance:",
"_____no_output_____"
]
],
[
[
"print_sentence(X_train[0], token2word)",
"cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all\n\n[1415 33 6 22 12 215 28 77 52 5 14 407 16 82\n 2 8 4 107 117 5952 15 256 4 2 7 3766 5 723\n 36 71 43 530 476 26 400 317 46 7 4 2 1029 13\n 104 88 4 381 15 297 98 32 2071 56 26 141 6 194\n 7486 18 4 226 22 21 134 476 26 480 5 144 30 5535\n 18 51 36 28 224 92 25 104 4 226 65 16 38 1334\n 88 12 16 283 5 16 4472 113 103 32 15 16 5345 19\n 178 32]\n"
]
],
[
[
"Define and train a simple model:",
"_____no_output_____"
]
],
[
[
"model = imdb_model(X=X_train, num_words=NUM_WORDS, emb_dim=256, lstm_dim=128, output_dim=2)\nmodel.fit(X_train, y_train, batch_size=32, epochs=2, \n shuffle=True, validation_data=(X_test, y_test))",
"Epoch 1/2\n782/782 [==============================] - 96s 121ms/step - loss: 0.5019 - accuracy: 0.7397 - val_loss: 0.3452 - val_accuracy: 0.8514\nEpoch 2/2\n782/782 [==============================] - 93s 118ms/step - loss: 0.2649 - accuracy: 0.8943 - val_loss: 0.3628 - val_accuracy: 0.8454\n"
]
],
[
[
"Extract the embedding layer from the trained model and combine with UAE preprocessing step:",
"_____no_output_____"
]
],
[
[
"embedding = tf.keras.Model(inputs=model.inputs, outputs=model.layers[1].output)\nx_emb = embedding(X_train[:5])\nprint(x_emb.shape)",
"(5, 100, 256)\n"
],
[
"tf.random.set_seed(0)\n\nshape = tuple(x_emb.shape[1:])\nuae = UAE(input_layer=embedding, shape=shape, enc_dim=enc_dim)",
"_____no_output_____"
]
],
[
[
"Again, create reference, *H0* and perturbed datasets. Also test against the *Reuters* news topic classification dataset.",
"_____no_output_____"
]
],
[
[
"X_ref, y_ref = random_sample(X_test, y_test, proba_zero=.5, n=n_sample)\nX_h0, y_h0 = random_sample(X_test, y_test, proba_zero=.5, n=n_sample)\ntokens = [word2token[w] for w in words]\nX_word = {}\nfor i, t in enumerate(tokens):\n X_word[words[i]] = {}\n for p in perc_chg:\n X_word[words[i]][p] = inject_word(t, X_ref, p, padding='first')",
"_____no_output_____"
],
[
"# load and tokenize Reuters dataset\n(X_reut, y_reut), (w2t_reut, t2w_reut) = \\\n get_dataset(dataset='reuters', max_len=max_len)[1:]\n\n# sample random instances\nidx = np.random.choice(X_reut.shape[0], n_sample, replace=False)\nX_ood = X_reut[idx]",
"/home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/tensorflow/python/keras/datasets/reuters.py:148: VisibleDeprecationWarning:\n\nCreating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n\n/home/avl/anaconda3/envs/detect/lib/python3.7/site-packages/tensorflow/python/keras/datasets/reuters.py:149: VisibleDeprecationWarning:\n\nCreating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n\n"
]
],
[
[
"### Initialize detector and detect drift",
"_____no_output_____"
]
],
[
[
"from alibi_detect.cd.tensorflow import preprocess_drift\n\n# define preprocessing function\npreprocess_fn = partial(preprocess_drift, model=uae, batch_size=128)\n\n# initialize detector\ncd = KSDrift(X_ref, p_val=.05, preprocess_fn=preprocess_fn)",
"_____no_output_____"
]
],
[
[
"*H0*:",
"_____no_output_____"
]
],
[
[
"preds_h0 = cd.predict(X_h0)\nlabels = ['No!', 'Yes!']\nprint('Drift? {}'.format(labels[preds_h0['data']['is_drift']]))\nprint('p-value: {}'.format(preds_h0['data']['p_val']))",
"Drift? No!\np-value: [0.93558097 0.64755726 0.50035924 0.85929435 0.04281518 0.93558097\n 0.9801618 0.50035924 0.8879386 0.43243074 0.5726548 0.6852314\n 0.60991895 0.9134755 0.18111965 0.722555 0.5726548 0.21933001\n 0.5360543 0.6852314 0.85929435 0.31356168 0.9801618 0.18111965\n 0.34099194 0.722555 0.04841881 0.99365413 0.82795686 0.14833806\n 0.1338343 0.9134755 ]\n"
]
],
[
[
"Perturbed data:",
"_____no_output_____"
]
],
[
[
"for w, probas in X_word.items():\n for p, v in probas.items():\n preds = cd.predict(v)\n print('Word: {} -- % perturbed: {}'.format(w, p))\n print('Drift? {}'.format(labels[preds['data']['is_drift']]))\n print('p-value: {}'.format(preds['data']['p_val']))\n print('')",
"Word: fantastic -- % perturbed: 1.0\nDrift? No!\np-value: [0.9882611 0.79439443 0.9999727 0.9882611 0.7590978 0.8879386\n 0.996931 0.82795686 0.64755726 0.7590978 0.85929435 0.99870795\n 0.93558097 0.82795686 0.99365413 0.996931 0.85929435 0.8879386\n 0.85929435 0.9540582 0.96887016 0.9801618 0.50035924 0.9998709\n 0.96887016 0.9801618 0.8879386 0.96887016 0.9540582 0.8879386\n 0.9995433 0.722555 ]\n\nWord: fantastic -- % perturbed: 5.0\nDrift? Yes!\np-value: [8.87938619e-01 1.99518353e-01 6.47557259e-01 1.64079204e-01\n 2.63380647e-01 1.81119651e-01 7.22554982e-01 1.96269080e-02\n 3.50604125e-04 1.99518353e-01 1.08282514e-01 6.85231388e-01\n 2.63380647e-01 1.33834302e-01 8.27956855e-01 1.99518353e-01\n 3.77843790e-02 1.48931602e-02 4.65766221e-01 4.84188050e-02\n 6.09918952e-01 5.36054313e-01 2.82894098e-03 2.92505771e-02\n 5.00359237e-01 7.94394433e-01 5.72654784e-01 6.15514442e-02\n 8.87938619e-01 4.00471032e-01 3.13561678e-01 3.40991944e-01]\n\nWord: good -- % perturbed: 1.0\nDrift? No!\np-value: [0.9882611 0.99365413 0.99365413 0.9998709 0.99870795 0.99365413\n 0.996931 0.96887016 0.9134755 0.96887016 0.99365413 0.9801618\n 0.9134755 0.9998709 0.93558097 0.99365413 0.9801618 0.96887016\n 0.99365413 0.9540582 0.99365413 0.996931 0.93558097 0.9995433\n 0.93558097 0.996931 0.99365413 0.99870795 0.9801618 0.9134755\n 0.96887016 0.9540582 ]\n\nWord: good -- % perturbed: 5.0\nDrift? No!\np-value: [0.9540582 0.82795686 0.7590978 0.5726548 0.60991895 0.3699725\n 0.9801618 0.85929435 0.5360543 0.60991895 0.9801618 0.64755726\n 0.28769323 0.99870795 0.8879386 0.28769323 0.60991895 0.19951835\n 0.8879386 0.21933001 0.28769323 0.5360543 0.2406036 0.7590978\n 0.79439443 0.34099194 0.9134755 0.40047103 0.8879386 0.31356168\n 0.82795686 0.2406036 ]\n\nWord: bad -- % perturbed: 1.0\nDrift? No!\np-value: [0.8879386 0.99870795 0.99365413 0.85929435 0.93558097 0.6852314\n 0.82795686 0.9540582 0.93558097 0.9540582 0.93558097 0.7590978\n 0.6852314 0.96887016 0.9134755 0.99365413 0.46576622 0.79439443\n 0.85929435 0.9540582 0.93558097 0.8879386 0.50035924 0.9999727\n 0.5726548 0.9134755 0.99870795 0.9540582 0.9882611 0.8879386\n 0.9540582 0.9134755 ]\n\nWord: bad -- % perturbed: 5.0\nDrift? Yes!\np-value: [6.09918952e-01 1.99518353e-01 3.69972497e-01 4.00471032e-01\n 1.81119651e-01 1.71140861e-02 8.59294355e-01 6.15514442e-02\n 3.13561678e-01 1.64079204e-01 6.47557259e-01 3.69972497e-01\n 2.63813617e-05 1.96269080e-02 1.20504074e-01 3.69972497e-01\n 7.76214674e-02 3.32780443e-02 9.71045271e-02 3.69972497e-01\n 5.46463318e-02 5.00359237e-01 4.93855441e-05 6.47557259e-01\n 4.84188050e-02 3.69972497e-01 2.40603596e-01 3.89581337e-03\n 4.00471032e-01 8.27956855e-01 5.36054313e-01 5.36054313e-01]\n\nWord: horrible -- % perturbed: 1.0\nDrift? No!\np-value: [0.9801618 0.50035924 0.9134755 0.7590978 0.8879386 0.60991895\n 0.9540582 0.9134755 0.5726548 0.96887016 0.85929435 0.8879386\n 0.2406036 0.64755726 0.8879386 0.79439443 0.5726548 0.9882611\n 0.6852314 0.85929435 0.7590978 0.7590978 0.7590978 0.9134755\n 0.7590978 0.93558097 0.7590978 0.82795686 0.996931 0.9134755\n 0.9801618 0.8879386 ]\n\nWord: horrible -- % perturbed: 5.0\nDrift? 
Yes!\np-value: [7.22554982e-01 1.38413116e-05 5.46463318e-02 3.89581337e-03\n 1.99518353e-01 4.21853358e-04 1.99518353e-01 1.81119651e-01\n 5.37760343e-07 6.20218972e-03 1.64079204e-01 7.76214674e-02\n 5.71402455e-15 7.42663324e-05 1.29345525e-02 9.69783217e-03\n 1.12110768e-02 6.15514442e-02 2.40603596e-01 1.20504074e-01\n 5.72654784e-01 2.40603596e-01 1.10792353e-04 1.96269080e-02\n 5.32228360e-03 1.98871276e-04 1.72444014e-03 1.71140861e-02\n 8.87938619e-01 9.71045271e-02 4.84188050e-02 8.69054198e-02]\n\n"
]
],
[
[
"The detector is not as sensitive as the Transformer-based K-S drift detector. The embeddings trained from scratch only trained on a small dataset and a simple model with cross-entropy loss function for 2 epochs. The pre-trained BERT model on the other hand captures semantics of the data better.\n\nSample from the Reuters dataset:",
"_____no_output_____"
]
],
[
[
"preds_ood = cd.predict(X_ood)\nlabels = ['No!', 'Yes!']\nprint('Drift? {}'.format(labels[preds_ood['data']['is_drift']]))\nprint('p-value: {}'.format(preds_ood['data']['p_val']))",
"Drift? Yes!\np-value: [5.72654784e-01 7.26078229e-04 2.73716728e-15 3.49877549e-09\n 1.29345525e-02 2.24637091e-02 4.95470906e-14 1.34916729e-04\n 8.27956855e-01 4.00471032e-01 6.20218972e-03 1.97469308e-09\n 6.15514442e-02 5.06567594e-04 5.46463318e-02 7.59097815e-01\n 1.97830971e-07 4.56308130e-10 4.15714254e-08 4.32430744e-01\n 8.36122004e-23 4.56308130e-10 1.12110768e-02 5.20541775e-30\n 5.72654784e-01 9.15067773e-08 1.85489473e-08 6.85231388e-01\n 1.54522304e-12 2.56591532e-02 2.40603596e-01 7.21312594e-03]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72a6751384410af74d20b3f63446231ce62f283 | 4,095 | ipynb | Jupyter Notebook | ch01/1-2.ipynb | jjhsnail0822/deep-learning-from-scratch-2 | 8ad4581572317885069a73e770a102bc289fca76 | [
"MIT"
] | null | null | null | ch01/1-2.ipynb | jjhsnail0822/deep-learning-from-scratch-2 | 8ad4581572317885069a73e770a102bc289fca76 | [
"MIT"
] | null | null | null | ch01/1-2.ipynb | jjhsnail0822/deep-learning-from-scratch-2 | 8ad4581572317885069a73e770a102bc289fca76 | [
"MIT"
] | null | null | null | 29.460432 | 248 | 0.424176 | [
[
[
"<a href=\"https://colab.research.google.com/github/jjhsnail0822/deep-learning-from-scratch-2/blob/master/ch01/1-2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"# 1.2.1 신경망 추론 전체 그림\n\nimport numpy as np\n\ndef sigmoid(x):\n return 1/(1+np.exp(-x))\n\nx=np.random.randn(10,2)\nW1=np.random.randn(2,4)\nb1=np.random.randn(4)\nW2=np.random.randn(4,3)\nb2=np.random.randn(3)\n\nh=np.matmul(x,W1)+b1\na=sigmoid(h)\ns=np.matmul(a,W2)+b2",
"_____no_output_____"
],
[
"# 1.2.2 계층으로 클래스화 및 순전파 구현\n\nclass Sigmoid:\n def __init__(self):\n self.params=[]\n \n def forward(self, x):\n return 1/(1+np.exp(-x))\n\nclass Affine:\n def __init__(self, W, b):\n self.params=[W,b]\n\n def forward(self, x):\n W,b=self.params\n out=np.matmul(x,W)+b\n return out\n\nclass TwoLayerNet:\n def __init__(self, input_size, hidden_size, output_size):\n I,H,O=input_size,hidden_size,output_size\n\n W1=np.random.randn(I,H)\n b1=np.random.randn(H)\n W2=np.random.randn(H,O)\n b2=np.random.randn(O)\n\n self.layers=[\n Affine(W1,b1),\n Sigmoid(),\n Affine(W2,b2)\n ]\n\n self.params=[]\n for layer in self.layers:\n self.params+=layer.params\n \n def predict(self, x):\n for layer in self.layers:\n x=layer.forward(x)\n return x\n\nx=np.random.randn(10,2)\nmodel=TwoLayerNet(2,4,3)\ns=model.predict(x)\ns",
"_____no_output_____"
]
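,
[
"# A quick check that TwoLayerNet.params aggregates every layer's parameters:\n# the two Affine layers contribute four arrays (W1, b1, W2, b2).\nprint([p.shape for p in model.params])  # [(2, 4), (4,), (4, 3), (3,)]",
"_____no_output_____"
]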
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e72a692901923c884ecd9d5d736b569fa8e6c1e9 | 246,261 | ipynb | Jupyter Notebook | Ethereum API.ipynb | Snedashkovsky/Course_blockchain_data | 39cebe8f0836443f50de5c26843093228e6629d5 | [
"Apache-2.0"
] | 1 | 2018-03-30T11:59:19.000Z | 2018-03-30T11:59:19.000Z | Ethereum API.ipynb | Snedashkovsky/Course_blockchain_data | 39cebe8f0836443f50de5c26843093228e6629d5 | [
"Apache-2.0"
] | null | null | null | Ethereum API.ipynb | Snedashkovsky/Course_blockchain_data | 39cebe8f0836443f50de5c26843093228e6629d5 | [
"Apache-2.0"
] | null | null | null | 68.046698 | 553 | 0.492433 | [
[
[
"https://github.com/cyberFund/ethdrain \n\nPython script allowing to copy the Ethereum blockchain towards ElasticSearch, PostgreSQL and csv in an efficient way by connecting to a local RPC node",
"_____no_output_____"
]
],
[
[
"import requests\nimport json",
"_____no_output_____"
],
[
"def print_json(json_for_print):\n print(json.dumps(json_for_print, indent=4, sort_keys=True))\n return",
"_____no_output_____"
]
],
[
[
"### Request function",
"_____no_output_____"
]
],
[
[
"def http_post_request(url, request):\n print('url: {}'.format(str(url)))\n print('request: {} \\n'.format(str(request)))\n return requests.post(url, data=request, headers={\"content-type\": \"application/json\"}).json()",
"_____no_output_____"
]
],
[
[
"### API\nEthereum JSON RPC API: https://github.com/ethereum/wiki/wiki/JSON-RPC",
"_____no_output_____"
],
[
"#### getBlockByNumber",
"_____no_output_____"
]
],
[
[
"def make_request_getBlockByNumber(block_nb, use_hex=True):\n return json.dumps({\n \"jsonrpc\": \"2.0\",\n \"method\": \"eth_getBlockByNumber\",\n \"params\": [hex(block_nb) if use_hex else block_nb, True],\n \"id\": 1\n })",
"_____no_output_____"
],
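[
"# Every numeric field in the JSON-RPC response is a 0x-prefixed hex string.\n# A small decoding helper (a sketch for illustration):\ndef hex_to_int(hex_str):\n    return int(hex_str, 16)\n\nhex_to_int(\"0x517024\")  # 5337124, the block number seen in the response below",
"_____no_output_____"
],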
[
"# eth_url = \"http://localhost:8545\"\neth_url = 'https://mainnet.infura.io/TzMi1NSXsXK2SzUuEY9Q'\nprint_json(http_post_request(eth_url,\n make_request_getBlockByNumber(\"latest\", False)))",
"url: https://mainnet.infura.io/TzMi1NSXsXK2SzUuEY9Q\nrequest: {\"params\": [\"latest\", true], \"method\": \"eth_getBlockByNumber\", \"id\": 1, \"jsonrpc\": \"2.0\"} \n\n{\n \"id\": 1,\n \"jsonrpc\": \"2.0\",\n \"result\": {\n \"difficulty\": \"0xb91590c441438\",\n \"extraData\": \"0x6e616e6f706f6f6c2e6f7267\",\n \"gasLimit\": \"0x7a121d\",\n \"gasUsed\": \"0x79b084\",\n \"hash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"logsBloom\": \"0x426621090648a12008039888100080c25c2894d57040208a10f24680142801fe000084520281008314e5120a401044c014019658a002b9c01384721020610300001000c704a1431b2230409f2804050800098010040190001070c0010808106105b590a2036cc04880402ed4402109004e0130c56a2d328ec5000791000102c024042245211878200c880000140000084c53af120a0842bc4481000022022145028e04180044402da41012045700ae081201e78300108660050080c2d40a04640000c48340070230500e400012e0082010160444181d224138841820dc0c34c20114108430160034088420700464ab00402d30588a1840cc1c26010510041450\",\n \"miner\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"mixHash\": \"0xa7bb8f60f874eec50e08d894c1165049d8a5d47c7bb10e1268e4468f3f46f9ca\",\n \"nonce\": \"0x06adf7384752bb38\",\n \"number\": \"0x517024\",\n \"parentHash\": \"0x033abe895022669545f948e89ef2b3f5f719e10095114caa08230e85f032cf6c\",\n \"receiptsRoot\": \"0x68c3975203b075caa9bd1b3653c4e565b3f22eafa5c9f677aff41e92dec49326\",\n \"sha3Uncles\": \"0xd9f6a8e40be23be00a490885f7468b0c3336882e9e34abcdbe11d6a34564d3b7\",\n \"size\": \"0x78a7\",\n \"stateRoot\": \"0x12f4b58dab0aa6af7518efe8a3d77c721f2a97de12cba16742a3017d4056aaa0\",\n \"timestamp\": \"0x5abb8f34\",\n \"totalDifficulty\": \"0xb3884441accee7b12b\",\n \"transactions\": [\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf73ff30b125f0bcdfe53d52e1a670ccc5a40d264\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xcf59efaf40ebfd6b5fb40c0a1f84662d0ff2731464d8b3b4bd5991f1f52ec360\",\n \"input\": \"0xa9059cbb00000000000000000000000099efc6685acd8dd2d1aff1fa004a4d663ef292240000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6e0\",\n \"r\": \"0xa7442f30693dfb385a5ef30f967953b11f3fe13dd2f6a7d50579b03bcc15ec42\",\n \"s\": \"0xa6d11744525df283843e1ed0cb5a4bb67afaa49578b71615fca87a4497c1dce\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x0\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2a7a9c8014f35cc968f6f38f3b1b5703ed409e89\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xf0e98365b0f7fb99ee2e3a52d3367acefc8aee3f589382b288d576a281a04051\",\n \"input\": \"0xa9059cbb00000000000000000000000042d99c1bf879d9a7aa6146af5b9f9b3d2bcf535c0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6cb\",\n \"r\": \"0xf80386b8a18f4e4abb547e4d9f4a7523cce426df1bbf9c86c9e5a980f943598c\",\n \"s\": \"0x3abe5cab1e8e021e4f16eb524152bb18ffc75271045cb1d1ec1c341a819a74dc\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xfd5c417f350b900f0f03125defb3fc54d59cf702\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n 
\"hash\": \"0xf1f71a680a84330c87d593794dfeed85be9fec99a0543aaffc94907bada25585\",\n \"input\": \"0xa9059cbb0000000000000000000000004a8dff6e2ee6b5f3cee3142a05f00ad2b36f92060000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x785\",\n \"r\": \"0x76d0de25d9b7343ddf1b0b74f73cb8c22a343dd51b220d94611633d0fab0cf13\",\n \"s\": \"0x348399735dd53a9164c9398442cadfb5e51fab93eac0fc612fbbbdb37b98094e\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x2\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xaae2f7f87b0ed9f95be375ad6139b62ba4ec46ba\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8bbe1caa66e36e69d827e151ef5fba474db09cafe7e0302f784844eb11af8f02\",\n \"input\": \"0xa9059cbb0000000000000000000000004a0a96503f35e0497a8e3dc36db163c95f97fc8d0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x71d\",\n \"r\": \"0xf2c9a148ed84b4d6b634ee76da51607fd571ef5368d1525573ce6259723e7291\",\n \"s\": \"0x2385086d1b0109b6a8af2892edb4edfb4e9774eb01c388bb2e764d839089c33d\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x3\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x33b1335d677d73cd9691648ef6837f5ca95df4fa\",\n \"gas\": \"0x2327f\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x33dfcaac2ec0af591297387859a30b1a9c5f067d249860e568d551b1e19073c1\",\n \"input\": \"0x29ff4f53000000000000000000000000882eb61fe9cdcb26a9b49594578f83b2cd2222f8\",\n \"nonce\": \"0x1b\",\n \"r\": \"0xf0b1db76d1f5ffdf8dacdc376184c9f04df8091056e6d91c56c0278afbba5ed\",\n \"s\": \"0x2a302c6a190bbb653ed8a6ff6df24f88368c6fd7a55935a082bb15cba1b38b32\",\n \"to\": \"0x5bf2961a6bb8b04afd0b27518a96150c35595b23\",\n \"transactionIndex\": \"0x4\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xcec8f592748fbe855a19130238d515b686844533\",\n \"gas\": \"0x1e73f\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5fed3ddc9e9dc0ab8abcbee91ab151c93f286d7cf7f7fed91445e8cae534618c\",\n \"input\": \"0xfdb5a03e\",\n \"nonce\": \"0x1ee\",\n \"r\": \"0x4abbbb16913afd062403bcaaf626ac0a41b543e99a1cbd77ab5a24cd631baa7\",\n \"s\": \"0x79afb9ee274dbc2a07c0c4d3b8212f72b20db2531f80d6bc2694a99a98006cf0\",\n \"to\": \"0xb3775fb83f7d12a36e0475abdd1fca35c091efbe\",\n \"transactionIndex\": \"0x5\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x04ab96c0c58ba42428c752f7d49d930df1e1a4b9\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x696d2b19475014c462a1ad9601dbf8316a8870d0787150ff279ede7f12c4d8ff\",\n \"input\": \"0x9e281a98000000000000000000000000e29c5b523590165795bbd7d52369c2895b18841f0000000000000000000000000000000000000000000000073793e0a2ed4e0000\",\n \"nonce\": \"0x173\",\n \"r\": \"0xb845e6bbfb8018be0eac5bad1f79dcd425ca3ccf6160a3a9304f57a215a9c28c\",\n \"s\": \"0x273df8ea7c5c0b4bb9fd1ab1998f5dc72e554f557d196355faa4213c96bf27a8\",\n \"to\": \"0x8d12a197cb00d4747a1fe03395095ce2a5cc6819\",\n \"transactionIndex\": \"0x6\",\n \"v\": 
\"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2dae0d30b00ba0f9b2c5cb1a2e1f3eac3827f9a4\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x140c5fac5457d970c4d3bba2364cf3b873e9ce657314f0018911925257c18baa\",\n \"input\": \"0xa9059cbb0000000000000000000000006106579bb003144c90de85fc3c7a36d578b71af5000000000000000000000000000000000000000000000dac3d00d0130de00000\",\n \"nonce\": \"0x19\",\n \"r\": \"0x8a014609c1a72a4c95fdc2a9bf7d3819f9d20226e51804c468fe0823178b925e\",\n \"s\": \"0x3e955d4d2b27cf93b74d04ecae77acd81ceec83dbc4e8111a44dd55f3b65f7e0\",\n \"to\": \"0x0d0ffa077af9042a7403ad4015dae69c45b1c260\",\n \"transactionIndex\": \"0x7\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x5670cade3135d3f5057858a1e53385c78ef4f2c2\",\n \"gas\": \"0xe9a4\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x9796e62afa4d428ba5258fbebd244029b1534c06e7836243eeb307ccbfca1503\",\n \"input\": \"0x\",\n \"nonce\": \"0x1d\",\n \"r\": \"0x6ae252e0592147abe4685e6e604e03154d2b3148805f518c4c02472488515254\",\n \"s\": \"0x561e27f95c3222537e4c6ee01e908cc88a65b04818067544d8906aa3d56ffac0\",\n \"to\": \"0x25c1629f40c29e9291afc8721694212ae63b244a\",\n \"transactionIndex\": \"0x8\",\n \"v\": \"0x26\",\n \"value\": \"0x1c6bf52634000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x0673f6365ee373f392431c940b14d1847aa1cea2\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5e21d845eebb2bc01e5111f658b022a80502adc3ba47b3ec7101ce706664a3ec\",\n \"input\": \"0x\",\n \"nonce\": \"0x4\",\n \"r\": \"0x5138f0ff7738361dbcc9247d25020071ec59f81e3fda111c24a9516093772e3c\",\n \"s\": \"0x2945262175e34b67f9ba344cc9e33f01c9bb7525a31978f20098059a03f2d9ff\",\n \"to\": \"0xa9e1801f26214a237724b1e93314e5478aa2aa77\",\n \"transactionIndex\": \"0x9\",\n \"v\": \"0x1b\",\n \"value\": \"0xc5575553b1d800\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xbc6f4017fdab37c9bd2b12c2751110cd2e4a68e9\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xac611d2f9811e44f788ca265c4b90e9e742aa8bb87dff6b19cab63dba976ee44\",\n \"input\": \"0x\",\n \"nonce\": \"0x10\",\n \"r\": \"0x2df21b22801686ab9e0dc249bee153dcd7db17ca4ccbc32ab2f1c5b4937a8261\",\n \"s\": \"0x4d9756c3c74b5bb652587f7a468c634593f1bb0ff0ac6d8b9d008db9c70a3dd1\",\n \"to\": \"0x2e3ea915f31c4884aef952810638691740bb5242\",\n \"transactionIndex\": \"0xa\",\n \"v\": \"0x25\",\n \"value\": \"0x2d79883d20000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6a0c60ded4d315b5508b2c88afc700ed53a6b385\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xb5a12788b983f7ec125fc771a188fd1e219d6e7c9affe354bca01dac21d3cc1f\",\n \"input\": \"0xa9059cbb0000000000000000000000000260f79ac35f2059f95e036ddff90113a6c6f0e80000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x729\",\n \"r\": \"0xac66bd06f22d5650d786a66889e8cbffbf91f9eee51f6a3d0848fa5c6beb9fa5\",\n \"s\": \"0x502e2112dd7755f1a048696510d229d5ba4c8b97cea270282e25f380ea34e65c\",\n \"to\": 
\"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7060b3ec1bec74330d86a01719c4ceb84a5d7d01\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xc55faa0d152af534121158236bf55c945ff59cde62ea1a6adc9c6c72ac886b42\",\n \"input\": \"0xa9059cbb0000000000000000000000008a04d83f835c1b36d632394568ce09e178a16b180000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7a8\",\n \"r\": \"0x61cbdc4d00d836df19eefb139f5e4bd53175e9be596deb8b0738c930f9829052\",\n \"s\": \"0x5149f4d273f71e895e0db278bab6503bd7ee48234b7aded1eb841d81c00ad72d\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x1d8fb8d5b1f115dd542f93c600950ebc9cd5e5a5\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xaf2eb8a05862ab3d5f9a7f46c8244916be65a452692d38f39542f7ef5778b989\",\n \"input\": \"0x\",\n \"nonce\": \"0xf\",\n \"r\": \"0x6a1b10c83447ed0fb0f9ae94e3c6520cea6caf535596dd57cf92dbf6317103e9\",\n \"s\": \"0x4554aaf73bb82610adc0fae1e4f26007bd58147163678f3806cd1ef0a0f02cda\",\n \"to\": \"0x1ae8a36b15e13af170669555fe649c05a4bcc61e\",\n \"transactionIndex\": \"0xd\",\n \"v\": \"0x1b\",\n \"value\": \"0xc2b5cb56708fb0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x59a5208b32e627891c389ebafc644145224006e8\",\n \"gas\": \"0x9509\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xa0d4c8d071f3128299c6465869704d25589bf95429bc509309f77ab84ad51026\",\n \"input\": \"0xa9059cbb0000000000000000000000004515f6f497910585e8df53e99beb8fe4c491d7690000000000000000000000000000000000000000000000304a555856ddf80000\",\n \"nonce\": \"0x7108\",\n \"r\": \"0x734ae501d5f338ceba6e1a1943552ec2c66a7bcd2340de35c8c0719d6b23a1de\",\n \"s\": \"0x349b918c047d4ad15482590cff9ab0f38a8283e8c730e52082b489b64acd64a9\",\n \"to\": \"0xcfb98637bcae43c13323eaa1731ced2b716962fd\",\n \"transactionIndex\": \"0xe\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xe25be93f9529be05b816a40a15f6e59cc75447b2\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xd00588297fb687d7d4df14464d5abf41e20db5e58bd6c2f27913f93ce682b9c3\",\n \"input\": \"0x\",\n \"nonce\": \"0x23\",\n \"r\": \"0xeb34f673076be1921cefbd17a6fe521c0c769b25e01bbaa03940318fc212a3e4\",\n \"s\": \"0x3c567a4a20244d8156291a11dbc7532d8d147c1f84069fd3479e93346a997fac\",\n \"to\": \"0x32eb48c04a1122c112c708c2b821f5b1099335fe\",\n \"transactionIndex\": \"0xf\",\n \"v\": \"0x26\",\n \"value\": \"0x1c6bf52634000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x9d966abcc94760dec6b556dbe6eb1196f58be4bb\",\n \"gas\": \"0x12e1d\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xe1b49ee483d83183710bf92932a48ff1ae425293d50dec1fa8695e939c583b10\",\n \"input\": 
\"0xa9059cbb0000000000000000000000000ded7b6ae7e4a4b3760036349491f3e2ddba3d8d00000000000000000000000000000000000000000000043c33c1937564800000\",\n \"nonce\": \"0x63\",\n \"r\": \"0xcdd94593228ddb354d4d3129d6f74e66fb7f167f39a417fa4659825b3a7521c1\",\n \"s\": \"0x6e0c9404463c868e6f57a67c49acaedadbf47853424df01574eb769db83f4d4e\",\n \"to\": \"0xb4db72af3421ebb00d9b905c00e1ed727fc95bbe\",\n \"transactionIndex\": \"0x10\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xfad81dca251aae3dd00fb39327d397e2496cab86\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x56be92af984fa412053db5fea748681ef192822076a94b590a5a66c7962ce018\",\n \"input\": \"0x\",\n \"nonce\": \"0x3\",\n \"r\": \"0xe4be2005ec26a337c92e2cec826e2d499e4f110457d9248e9477af485c968fa1\",\n \"s\": \"0x2cda0a80b6b0b0c72a6215cf73c13a56ba5bfc7befa40db6138aebf0e65bb50c\",\n \"to\": \"0xb2782fd5a3d55d83255217d941b0b0e082048ba9\",\n \"transactionIndex\": \"0x11\",\n \"v\": \"0x1c\",\n \"value\": \"0x377f460953a1000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x165aa385e9adf7222b82cec4c5b0ee6b93d71ac5\",\n \"gas\": \"0x10465\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5cfeeb6be7931f193bb3d113e82734a293d5f540fc26c19461b7cd4a53d8f07f\",\n \"input\": \"0x3ccfd60b\",\n \"nonce\": \"0x283\",\n \"r\": \"0xcae65deb9ccd42179d09bb93b7a97ad09db0e09aea856f93a7f4569d4c584643\",\n \"s\": \"0x7b6c8401e8eac970b1b753e58cd66aee7f556f9aa3f850fb74565068c068b11b\",\n \"to\": \"0xb3775fb83f7d12a36e0475abdd1fca35c091efbe\",\n \"transactionIndex\": \"0x12\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xbf15c60b6cabad081271e89084a12cd459555c85\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xd98a317f547de4a7f18e55d11bfc40975973c16c0aeabbe52c351c6bb8076251\",\n \"input\": \"0xa9059cbb000000000000000000000000cca339a40a9162f5ae8062cc67c06f3204a8e0510000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7a8\",\n \"r\": \"0x864b7184ceb54efc999bdf574227f732ec01d8f76ddd0a8f761c879f661feb5\",\n \"s\": \"0x112694082240d97b861bc9bda8d1f61b8ffcb8352b662109df23537ccea8633a\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x13\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x0a73573cf2903d2d8305b1ecb9e9730186a312ae\",\n \"gas\": \"0x249f0\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x9cb31e5325524bb60071ebf0771db3e84907555eb7064b106b3925e34d89f2bb\",\n \"input\": \"0xa9059cbb00000000000000000000000057189300cd07a756aa3cd7aa31b0f7f718a89bbf000000000000000000000000000000000000000000000000000000005ec2fc0a\",\n \"nonce\": \"0xb6c7\",\n \"r\": \"0xadfadf6b0d1dd0b9ca0146b6dae7b5db09b5091c6f8d5cbefde3ccc2d8ed1c8a\",\n \"s\": \"0x47ae78ccf40c430adcd07e525bcabf767ad316f6fdfac8ca9b3ecb8f676d2056\",\n \"to\": \"0x46b9ad944d1059450da1163511069c718f699d31\",\n \"transactionIndex\": \"0x14\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": 
\"0x517024\",\n \"from\": \"0xb4c32d9986b5631fb0eb0aea7adf2c58397969cc\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xfa3d4aacd846cfbf8cfb53137b76963963b9260d32f905ca91d111e2d811e62b\",\n \"input\": \"0xa9059cbb000000000000000000000000e251912d1530931bd21a4b3ae1fa410c73ed2c600000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x809\",\n \"r\": \"0x8d6c53e323477d5476d32e6852a5fd26a176c52b6b2b66f43db035836e2d4d7f\",\n \"s\": \"0x57d2c95094d5041e70db8314bd5df4ea59a08f0c9d6a99b1692bab9ed30779a0\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x15\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xa453b444a5353c0ae1ae09addc2aecc2427a9d06\",\n \"gas\": \"0x15f90\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x2c4215f34e6aa90c6208fd38ad2480af7003a454a03a96565d4200eb550c2c96\",\n \"input\": \"0x01c6adc30000000000000000000000007b4f0ee5837de9a86edb17aae8ca45e90890c6080000000000000000000000000000000000000000000000000000000000000514\",\n \"nonce\": \"0x21bf5\",\n \"r\": \"0x27c86b137d577216386dad2493045df3173791aae66886d9f03c27c0eb6d20bb\",\n \"s\": \"0x535a1a6f42a0f80e847327c35a63d5595a19064e40914972c02d53193b1654af\",\n \"to\": \"0xbcc394d45c3613530a83cae62c716dc23b7f2152\",\n \"transactionIndex\": \"0x16\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xba29c81a59dd50a065638902e0e9acd4ab97e1af\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x097d7ebd574abffd0a546c97801f758fa2d27a549b7beca957b81bb86771da72\",\n \"input\": \"0xa9059cbb000000000000000000000000766cda7b212dfbb98c3b94ebf86d992abc81d5010000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6b6\",\n \"r\": \"0xb60dd28e8492adc2c8b1a9c82fb9db77bac65ad9518491cf72a2a7ba5b43c959\",\n \"s\": \"0x55d0922511cf624277183c51f751c20bfa16c8f7f294dcda395b9f525432493f\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x17\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xab0ef6a96fc1857a08bab6000f339460d80da12b\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8526c05e861075226dc3830d9f6aae923273d73ade47ff8cece1854deb51e8da\",\n \"input\": \"0xa9059cbb0000000000000000000000006460110594fb5b11e6d3e2f061efa1ba14a898900000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x790\",\n \"r\": \"0x25c1cd0a7ebdd1ba953a9c35f0487db50ecbae638f9ee3332f9e949a56a80b00\",\n \"s\": \"0x7646fec1c38eb530d5385306abd3583eda004eb7fd975576a407c939297b671e\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x18\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x59dd607181fc174433017a79c15f50573445a09e\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x7b6fd38918a21b12b4128c1ea84a0bf3a65a73dd9eb54e5481e13b597f55bc5d\",\n \"input\": 
\"0xa9059cbb00000000000000000000000042de2ad441e2f5b1d790471acf799f800af210a30000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6bb\",\n \"r\": \"0xc1d78e07dc2498de003357763fde71e6bbd4f863da036fcf17c0deb2e5fe659c\",\n \"s\": \"0x210a3ff50888f93e882af3ee335daf2eb3c8654cbc2508643e58ca6bd2ab9857\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x19\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x927e07feebbaae3047620794720fd2bc1f1af6fa\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x20f3a685fe6030080dbf8e245d092711762874b5f7db4d5321ff7f61ce1e2714\",\n \"input\": \"0xa9059cbb000000000000000000000000e7da3d60aa0c67a1422677fc1609577b935736360000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6f6\",\n \"r\": \"0x27f8ed3f8f9a343845486ce324b1548d7da133d9702e161c2d053d0212445cb6\",\n \"s\": \"0x6796c0093f1626831f6207db84be904c368a8b8d4f722f8bd630dd3f15b7b61a\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1a\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x8ae78f0c02222fca4dd00b31a3702c1e2a2b9106\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x4c24245da15f0cf6bf4119f9b07153ee08c9ca6dafbc40be105187df6d3d3eee\",\n \"input\": \"0xa9059cbb000000000000000000000000e493a720c6d4f897535f3ce325e7d0b0797342600000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6aa\",\n \"r\": \"0xf544c7441ca764c25edd853f17be23622d5c7d161f337646ba0a16bd13c244e7\",\n \"s\": \"0x47da869ebdf5ab13688ef1852d957c5c5048717d403ab54c61658261e97181bf\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1b\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x61fb217f6a9fde302751fd3d16f08f89c6049ca9\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xe3623b49c23d6300725f58293ab471cbdfd2f16569c316198beb4d7f47a281e7\",\n \"input\": \"0xa9059cbb000000000000000000000000bdde0b2a09d90fb6c316251d0454dea78a3bfa3a0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x717\",\n \"r\": \"0xec64f2dd56377054225e01174a64144e2e7ce6a9e09f5fd04d19a88bbf4dcdb9\",\n \"s\": \"0x7f5960137d26b1b9522769af9116ecc831f35ff08f5ae64f6d1970a4544ee8bc\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1c\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x4a2d6a500c825fb721e8aea9acabb222d18b5896\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x09b6f810fb2cbad6e0366129b9d8bf28a524f811b783f2f11aca39ad55e24603\",\n \"input\": \"0xa9059cbb0000000000000000000000003dc854ddf175e5fe58e9c23bdaf091289a9f8ec10000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7b7\",\n \"r\": \"0x1cbcc4d81d35e32b9e201ccb829161edc6dbc305f7f6427e80aa5bf5b2f58f32\",\n \"s\": \"0x376dcaf8fd6c4d0b40778c20b0fb9f02353431b759eb4f4e6069cfa1fd7ce44d\",\n 
\"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1d\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc033b10ead77cb891c44c758c490eaceea2fc95d\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x7ba30b7c5bdda7c6c15467f70b94aa28e8166234fb9bd3c94b67d6a32ba416c3\",\n \"input\": \"0xa9059cbb0000000000000000000000006513fbe9adff0e95bc8784aa92e16d83625b3da60000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6a2\",\n \"r\": \"0x5860aacc19ab0da02488e76fe1c299211433bdc40d7acd09c5013bb3769eb1f6\",\n \"s\": \"0x36c34dd77ca4101f4de8d644273ae9d11a564525e1ea3d6f997f8bfc933881e5\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1e\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc6d270daf9336097325320a3f5383400de2a1d5c\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5b90516f6c1cfbd86d0abbb5c73195780cedb20a1d1e25592fe72da22af52c65\",\n \"input\": \"0xa9059cbb000000000000000000000000911def5f5deb952a06d6c3e2edd0b8a481f8e8630000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x763\",\n \"r\": \"0xe5526f7b4c3fe8bededf34387c01cf8f9738a1e50fcefe6f5621f0ccc8ca048f\",\n \"s\": \"0x48e5f9b19f4aaf3d95797fa343747902e7974662f4e9de72215e9afc1acbd754\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x1f\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6dbd7240d360be22cba37e207a727e5a59fbaaaf\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x59cf64208ab632be76190cc49da7faf7e1072cbe1b2f301e9e37db0128f973a1\",\n \"input\": \"0x\",\n \"nonce\": \"0x228\",\n \"r\": \"0xdd2c3ff3d8910b7d67b5cf92bb01561abef676740248d86e862c755389d9df4b\",\n \"s\": \"0x19138525c5eb1e3fbe66b85fbe3e00d1414d79ec18fa4184d8040f1aac5ae8a4\",\n \"to\": \"0xd5929ab852d3e02264cb506201784066d120338f\",\n \"transactionIndex\": \"0x20\",\n \"v\": \"0x26\",\n \"value\": \"0x38d7ea4c68000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x99f1fabe5f4a25ddcc1e64870e3237f04680bcc5\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x1cb2b015e01d78e04b619e3742cdfcae16396d0bab0cd4ec098f32a49c0f2b67\",\n \"input\": \"0xa9059cbb000000000000000000000000e3b8cedd6787ac38c9e948ac2c0157f4c8ed958b0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x778\",\n \"r\": \"0x7d283c8e664891ce226efc095c49e1ef27cbeefa414fe4aa9fa924db691eee6a\",\n \"s\": \"0x4cad9d14356959efb86d73c7c7a8a3fde0235cc7f717f3f745bda554456b12b6\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x21\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x5e575279bf9f4acf0a130c186861454247394c06\",\n \"gas\": \"0x249f0\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": 
\"0xca3f56c9ecaf63649146754bfa2e2ee49f3494af51f21e2febd63ffcbd5f6c9f\",\n \"input\": \"0xa9059cbb00000000000000000000000066d73b05ab7744e18bb8783dc6b2e1fd08f54b020000000000000000000000000000000000000000000033f5b44ed37f6d073400\",\n \"nonce\": \"0x54872\",\n \"r\": \"0x69764eee4f6aac1af94657620e6f9d6a3343f30aabbfa5280481ef6db07b125d\",\n \"s\": \"0xab0cfe68900181e10c2059c5324855757936597e741cc10b9710d79d685573\",\n \"to\": \"0x5102791ca02fc3595398400bfe0e33d7b6c82267\",\n \"transactionIndex\": \"0x22\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x000836d933f63f6999b9236826a808975e05412f\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x385da38adedfda78baec4affb4511db02bfc83c5460706cb868bfb5646998c7d\",\n \"input\": \"0xa9059cbb0000000000000000000000000240a2f8109b09171b593aadffdd8d6c7eb514370000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6d7\",\n \"r\": \"0x6c8710af4a3c3c900f6df542d644fadc4986674b1406c2b5736c67aff1abcadf\",\n \"s\": \"0x1ed38785756db1d9ecebf27ba081a54daaeec78be705d7cc9d977c65c68b71a0\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x23\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xa88f73750f4070e48cf3642c822b594720bc7769\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xdeb5d8b57e39165066f6d138ff4455b35cc6afec05be700cdd827d76a8d1a9cb\",\n \"input\": \"0x\",\n \"nonce\": \"0x2e\",\n \"r\": \"0x569dd8dede36d62c77d81738244a356ec1b29a29a5868017449f60d2b87c4124\",\n \"s\": \"0x725b49936047b6d1fc48abae84daa55229c250ed5ecda19b7ac6db89684390aa\",\n \"to\": \"0x786d41d57333560307418e35af5dd5f7c6f8abe5\",\n \"transactionIndex\": \"0x24\",\n \"v\": \"0x25\",\n \"value\": \"0x470de4df820000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xbb64ac045539bc0e9fffd04399347a8459e8282a\",\n \"gas\": \"0x21534\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x2a9f7e3355e8aff7a71567d0f92e923f86b0cb69899c163cf5b40b856ef7427e\",\n \"input\": \"0xf088d5470000000000000000000000003ed938607a049e78f887eed7baf3bc71e7e4f758\",\n \"nonce\": \"0x13\",\n \"r\": \"0x6900d3aa21e3c66d7c78dc8a22a192536fbdc9ba89e005060827e6bd78c59176\",\n \"s\": \"0x3be56136aa84305ba658ee21ca55899a32f5abea9d42d593dc5686980c5d6ff8\",\n \"to\": \"0xb3775fb83f7d12a36e0475abdd1fca35c091efbe\",\n \"transactionIndex\": \"0x25\",\n \"v\": \"0x26\",\n \"value\": \"0x2386f26fc10000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x1fd6267f0d86f62d88172b998390afee2a1f54b6\",\n \"gas\": \"0x186a0\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x9b0baed4c681b6c2c971acbff8a19ff27e9ebd8e69a1c93bfcf5c3849fe03f13\",\n \"input\": \"0x0000000000000000000000000000000000000000\",\n \"nonce\": \"0x16d03\",\n \"r\": \"0x7d4c4d8898c87c8d18a84dabb337a7e78b23312872bde9a594107b299348bb6c\",\n \"s\": \"0x68b5fed48160351a8475caf14b91641e34ff1ff68c81b11db0c4d0e56add72bd\",\n \"to\": \"0xfedbc468ed7d39041964fb050567d7d5e5e34878\",\n \"transactionIndex\": \"0x26\",\n \"v\": \"0x25\",\n \"value\": \"0x4a5def80f2da3f800\"\n },\n {\n \"blockHash\": 
\"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x08d28494d98b83a5f2228a056198b19375d19ecb\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x6523b1b08314fc6c27f7d5109851201dc12a75bb12b9f2b1d8b9eae682fae616\",\n \"input\": \"0x\",\n \"nonce\": \"0x958\",\n \"r\": \"0xafa2fcb13491a4c569a01a09ce65e522924765aeb2475ab76891a75480f5ef9e\",\n \"s\": \"0x719ecd1256b1bf4ab40b60f052398e2a0b696614cde7f62d083186d60758b7f5\",\n \"to\": \"0xfcb6b74d62268561159304856cf0ab0c5d585ad3\",\n \"transactionIndex\": \"0x27\",\n \"v\": \"0x25\",\n \"value\": \"0xb5e620f48000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x23a86e4a5dbb4aa8e15700877096f403680c901d\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xa6aac2e6d775eaa45195258abcc8a09eb48e659a36c8e900fd3df39b4a8c4bc8\",\n \"input\": \"0x\",\n \"nonce\": \"0x0\",\n \"r\": \"0xc730f5faad7867f10995bdb04610dc092a26936031f8b7158757ecd24da9d3a1\",\n \"s\": \"0x46aeb7baa53e5e318f34d9125ed651372110485e92726a6542763b7f5e144649\",\n \"to\": \"0xeb92872e43bcd00a5d00640c284badd108191ed2\",\n \"transactionIndex\": \"0x28\",\n \"v\": \"0x1b\",\n \"value\": \"0x470de4df820000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xebb32760f86b6015f3e36216ac444fbc8d658260\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8cc96af0e16bc2b17558566440926034b55ca731c7446936c77f3c43bb507623\",\n \"input\": \"0xa9059cbb0000000000000000000000008641830cbba7fa53b24a7ddeab25b6ae4a6485820000000000000000000000000000000000000000000000000000000000000000\",\n \"nonce\": \"0x1e\",\n \"r\": \"0x6f244274944078186c894fdc64157028d7a90ebf836bfa219660d1c1242108d9\",\n \"s\": \"0x5d482dfe6ee0babc4420cc63b84880cd85b8a6f409f9f8979a4275ccfbd045d2\",\n \"to\": \"0x86fa049857e0209aa7d9e616f7eb3b3b78ecfdb0\",\n \"transactionIndex\": \"0x29\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2dae0d30b00ba0f9b2c5cb1a2e1f3eac3827f9a4\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x878a9f6e78c668736eb599b185aa88c01b492d1e1eb2cb1f41497e82c85acd1a\",\n \"input\": \"0xa9059cbb00000000000000000000000021968804118e60cb0c4597bd6017c754b9c65312000000000000000000000000000000000000000000000b9419ae878a48980000\",\n \"nonce\": \"0x1a\",\n \"r\": \"0x99be0a4bda9994fd2c277ebc5fd0f1321880bc7e9c4af059aef12c7ec151dab5\",\n \"s\": \"0x2513ad4428a10deabdd21a0f5675c961a57441fcf7314dc05e09a95f877426d8\",\n \"to\": \"0x0d0ffa077af9042a7403ad4015dae69c45b1c260\",\n \"transactionIndex\": \"0x2a\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6a0c60ded4d315b5508b2c88afc700ed53a6b385\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x91ea0221bb897751242cf970fd0a234b5e8abf1faca4bf2d14d06ed76bd6059c\",\n \"input\": \"0xa9059cbb00000000000000000000000039009e31580467303675df86e9aafb0c9423adc80000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x72a\",\n \"r\": \"0x6613f9c61885d1c74e3465510fe0673b6c7f4dd34fb527016c65f5c1ea233fac\",\n \"s\": 
\"0x29d512a0ef267a5c39736b2d0f8aa2db0e5d01cb1e4e5e53d36ef17c3ae05a3a\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x2b\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xfd5c417f350b900f0f03125defb3fc54d59cf702\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x97be16d140882d3dddfec05dc5a9a7c4db6ea268cfef4eddeea13764be2e8b7f\",\n \"input\": \"0xa9059cbb000000000000000000000000b392a23fc325d36a590ec4962cd5f12e2b382d610000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x786\",\n \"r\": \"0x7b3a791fbf27ba9e890ec4555cfbf3eac8bfd14d74fbadfb4782a709ac4b205f\",\n \"s\": \"0x2affb83ea745e67717956c6d2765e9889b134f58a4424560c37a04e9cdb13bb9\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x2c\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2a7a9c8014f35cc968f6f38f3b1b5703ed409e89\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xdc5e4f95ef7ee74712f8654637271cac58bce46e49d3094314e6abd8e8bee06f\",\n \"input\": \"0xa9059cbb00000000000000000000000083b05a96e783134253806409fcfc7ea7f5e98e730000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6cc\",\n \"r\": \"0x6697ca996911f32a59cd4811ab6898a6fdedd9080e8f449d52dd2720ab488491\",\n \"s\": \"0x58f38bf8b2097542e8c57b0dcd8d6abe6ab54c6ba40a7d9cf5eb779b5cc13808\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x2d\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf73ff30b125f0bcdfe53d52e1a670ccc5a40d264\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xdac92e22c68c0e6d498e73886cbf9db2eee262912ac25ab7fe343fdafc1bf111\",\n \"input\": \"0xa9059cbb000000000000000000000000487856b911b92641c1c50d7a41c810a814c158310000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6e1\",\n \"r\": \"0x3a71a8de206216e424ff613495d5c15f006d1237bbb559676fca5250971a37a\",\n \"s\": \"0x409af9f23a9dcff95c321189038a376678bc604e07b247c10dda4025567e2c19\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x2e\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xaae2f7f87b0ed9f95be375ad6139b62ba4ec46ba\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x239daf60fa5b30d7c84f887bf4bb7105549db24c0d7a93fbbd693516303ca16a\",\n \"input\": \"0xa9059cbb000000000000000000000000a3f21ca763c22691ee2209cb9967c72af11a173c0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x71e\",\n \"r\": \"0x4f1fe8027477a3cb4f246627deaefa7181f3a1c78ca9f9439182af90896cf3b9\",\n \"s\": \"0x50bede45dd99400816459e8715caa3f8fb3487d99479115caf17d14fd442c0\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x2f\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n 
\"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xcf3ab0ab8259d15aba4147c4799e2d07bb358a2b230fdf99453b1badb8392426\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb732\",\n \"r\": \"0x2b9007834fcdebf41019df3a17a62848e942184791ad12b33553dc17894f89e7\",\n \"s\": \"0x3471ecf9b3c9fba48f07b8c0559bd3a71c6dca0a2b92adc6a728243583cb1cf5\",\n \"to\": \"0xf26f04ee71110db8615755961d2cb09293bec35b\",\n \"transactionIndex\": \"0x30\",\n \"v\": \"0x25\",\n \"value\": \"0xb20a89517984b0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xeb70c1ad08690a3147b5d4e1729e703e46451b95453004558135bd2ecdc8386a\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb733\",\n \"r\": \"0x4f85286de5b93c73509cf492f2f5f7f7f17e4548cc6cc44d6ce594edc20e8d1f\",\n \"s\": \"0x192a8ab2266badd05e459c06eed75e5f08cbcaf2a20bd08cd58090f81b78869f\",\n \"to\": \"0x17066bb08886dc6dd722dfe292ba7a24b9f1e021\",\n \"transactionIndex\": \"0x31\",\n \"v\": \"0x26\",\n \"value\": \"0xb2579697f8b208\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xf8b8d61472776ea92958561366ed16a9b302d4f1a79609e528501f8bbaf8d199\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb734\",\n \"r\": \"0x729f66cdbe7ff8966268a604f9f65d75e3aa4d10d5f0c5c63c18b64e52c2b4e7\",\n \"s\": \"0x1b3c1496e1f0e32a217d16e20f628ea328fcc45f23d9edd07c7f0f510006370\",\n \"to\": \"0xd47b875257cf8b470a17d659144ad853a4c5fd01\",\n \"transactionIndex\": \"0x32\",\n \"v\": \"0x25\",\n \"value\": \"0x107f28bea04c630\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x227de6e80f55b624313cd1e1506ea840d0d252b0290ae224b2a7a9c960ed95c6\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb735\",\n \"r\": \"0x3bfa69b487e16bda04f977b5f7ed9dbc4da78df11e10be5e3e2b6511b6d9c918\",\n \"s\": \"0x3e4b7c7bf0dd6a2fddcc9add271a30f2fd2e12811f7607e83f762ed7cace314f\",\n \"to\": \"0x6936f39489a9c24b459f493fa8e58a9a38a09b89\",\n \"transactionIndex\": \"0x33\",\n \"v\": \"0x26\",\n \"value\": \"0x2ce29bd9d395500\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x5fd9090f392d59b0e819a5ef47b67ed75626a5d625fadc7cbb6b0c739db33ab7\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb736\",\n \"r\": \"0x327ce5052da50d52991c65e539d18d1190d1189f87932e5d6cdb9e1edcf119ca\",\n \"s\": \"0x20f41f0a36b695ac6a3cd0867cd856eac5bbb32bfe6b090c1537438cca6ee968\",\n \"to\": \"0xf21d36c1c616ae898b1c74f123678d2328221828\",\n \"transactionIndex\": \"0x34\",\n \"v\": \"0x26\",\n \"value\": \"0xb1a30e46eb6578\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": 
\"0xd3190b696cc0bf284567118883dbbbfb252cada7bb9f4127c96fb07c5111d6b8\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb737\",\n \"r\": \"0x23e94f772c5a26750e1bddbd943f7514a7a80b2a1573e88f75099486841764ca\",\n \"s\": \"0x60cc716671ea81f626643a424c7383e5bf1619a228d5c852695e25ecf312191b\",\n \"to\": \"0x67b3b9d226698c101ad5083f83d90a4f55358aa6\",\n \"transactionIndex\": \"0x35\",\n \"v\": \"0x26\",\n \"value\": \"0xb1c503ed8eae50\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x8bb17bd5867436ec7cc2d117d1bb3bc446437233f3a0369c5976dfe357981981\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb738\",\n \"r\": \"0x74a2a8590b99d8646e830af4a444fec26e44b256b4fb0eef44c940467b965133\",\n \"s\": \"0x191cdfa55981b0ad05a1c7aa5f59f31f715b18600cab9c6f7f64796f5c33926e\",\n \"to\": \"0x1a24a0c9a80e8f5f2f6daf5beb8241c0cccca0c1\",\n \"transactionIndex\": \"0x36\",\n \"v\": \"0x26\",\n \"value\": \"0xb4ad0f8bfdd100\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x80a9209d4af2a67c46d720e82469c7510604d525e04c6bd20cc07f5720b4c664\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb739\",\n \"r\": \"0xc9c9d0b798c3ceab5c59811c803b9f8e1b54797dd56921c1aaf06ab7e69a784c\",\n \"s\": \"0x421a357474e4bcda3be534d0b95c936b6ad4a67493d76c717e374c92209e9e14\",\n \"to\": \"0x122ab4509df330f88b1171d9a5fa706c6ee17ae0\",\n \"transactionIndex\": \"0x37\",\n \"v\": \"0x25\",\n \"value\": \"0x2cafac8571e15a0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x1848f9234e3e84c1e867373c34a029dce26d054599a4a2fcaf7751d0b9c487fc\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb73a\",\n \"r\": \"0x76c58c862663d0ce6c92d877e0282c6e17102a526e98d73a1e678fe72a37f283\",\n \"s\": \"0x41c6a052b11f6dc68567eb5ab2d17443ff7d1315e64a006082f519a9d147ff37\",\n \"to\": \"0xc8f88d1c1259060a799af77120db270cdce07e37\",\n \"transactionIndex\": \"0x38\",\n \"v\": \"0x25\",\n \"value\": \"0xb1af0854fa4a20\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x8622b9ef1633bf59ed13f9cbaec5d1b102f693c126add89b7cd08f13786f1bd3\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb73b\",\n \"r\": \"0x3eb96f74c442e2d5fdfd8cd112fcd3a99928dc7325f3beb7c4aad4b2572dfe19\",\n \"s\": \"0x58e5cf0230a39fce2708df22cb0874b98e12cb00768652669f28c66615b5d547\",\n \"to\": \"0x7413d7e99ae7c154d44307e97bcf2a99812635a3\",\n \"transactionIndex\": \"0x39\",\n \"v\": \"0x25\",\n \"value\": \"0x1034861e8f7b1400\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x87db2d4b2c78178ef0682cb52d93df9a158c4b095658493206891e7210c90e7c\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb73c\",\n 
\"r\": \"0x928c6e36e81a5c196edc0ca037670bc890487c521d42ac65bd6a751320254505\",\n \"s\": \"0x324ca5587cc37a0b63025a9e0d86ac8b2e97af00a9217bec41f4ec7c3c21a031\",\n \"to\": \"0x4ed6192945cd4b00720157c33dfee3a3f6eafb08\",\n \"transactionIndex\": \"0x3a\",\n \"v\": \"0x26\",\n \"value\": \"0x2c6c3a9f11d4b60\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x8dbfe21e6484a12dcb8d50288a24fbed87e5d930f0d586fe277821bb380a6566\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb73d\",\n \"r\": \"0x836954ac282176597064ebacf678051a0f7d2350df9e197f16ca44d92f3ce6d0\",\n \"s\": \"0x66baf74a21821832019aed22c4b4d323047909eda6dfc43369995d1290ed3305\",\n \"to\": \"0x10ff8b0d6fa4689a7c052ae37d1bb39250c83adb\",\n \"transactionIndex\": \"0x3b\",\n \"v\": \"0x25\",\n \"value\": \"0x2c70c02a308df60\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x2884d1b345ed5c78e74f4976e1f146f36b1fa1fb473bc4f876bd36e02b2e3492\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb73e\",\n \"r\": \"0x308d2a0583d99a89ec05ec588467d128f3efbc11d089803b3973fe2897b8eff0\",\n \"s\": \"0x729feac19c9b8bee961a20d9fc80ea288e86eae748131905909e71fac81986c1\",\n \"to\": \"0xfeddb15d5a69efcccb87202e633c9658a0f92e7d\",\n \"transactionIndex\": \"0x3c\",\n \"v\": \"0x26\",\n \"value\": \"0x2c765a7cc4febc0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xe815cee754906472aabd5ec363a1356b19571d46703f76e8bdfef47003c9851c\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb73f\",\n \"r\": \"0x83607383b60cfe7660818ab61f48959e7bb31478ef4099e24ab8999fa01715e0\",\n \"s\": \"0x61352300f0d73cf6a3a6d88149b2001f4107f184668db36a3b3778ea9a5558e6\",\n \"to\": \"0x503ce937f892d1a8daa9563c4a545b332ca3cb5b\",\n \"transactionIndex\": \"0x3d\",\n \"v\": \"0x26\",\n \"value\": \"0x163fa2e896fefe0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xc4d419af56b5db00cdbe30d96fe9665460d3b260873b0c200d50bacb0923c958\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb740\",\n \"r\": \"0xb4d98ef35ac17223285d8f385fd5d9de30bb21da9b0f9491dc9620a1be62715a\",\n \"s\": \"0x29ad964abdd9f0186c4b7ee3eb799d0fb3b52c714a0eba2be0c96a1cb8009070\",\n \"to\": \"0x65de2b976cf5c1360e6dbb7141a54cd929eaedfd\",\n \"transactionIndex\": \"0x3e\",\n \"v\": \"0x26\",\n \"value\": \"0x16c83ba91fc8930\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x25012748940cca0b108c366bb8b3c1754769c397065c5aea594abdeee99830c6\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb741\",\n \"r\": \"0x35417d3101af7a7960ff3095e5b9f3ba65d1a8a684dd14088d1548de0810fcee\",\n \"s\": 
\"0xb411f5e5437c75b3fe237ce2343ff41a47bb256d592437136425b254325b3f5\",\n \"to\": \"0x7cb5b77ff41bf74080a8bbed02db6985c21b0129\",\n \"transactionIndex\": \"0x3f\",\n \"v\": \"0x26\",\n \"value\": \"0x2ca14d69b02f6e0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xb1e51f078479ef1a40539039e04ba885e4be2ee8c3ce505c362e7fa05f99ecb4\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb742\",\n \"r\": \"0x9b973c98e93905ba2654983a0df98c9d392bd427e050e2b057e50e739291916f\",\n \"s\": \"0x13948ebc66a7b169c342e86aae490c1b23cf0be13ec8f9e006d80123139c00aa\",\n \"to\": \"0x7fba2038632f9aba09df0d617df1c94e61627de8\",\n \"transactionIndex\": \"0x40\",\n \"v\": \"0x25\",\n \"value\": \"0x1639839c3a7e140\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xd1f9e771e948cbf0f9e984944e90b2202d8236a5c0f63e49ea88ef7fcbb540a0\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb743\",\n \"r\": \"0x6888f4e185647a27a205f064a4ddeabb63ad1273f3dce5a2b8f97e8a038bf3ec\",\n \"s\": \"0x17deec848cf4363203510a639b101a6c1241ebb6e7075da53ad39141b44eb9c4\",\n \"to\": \"0x1f982ca8965a65fffc7b7df2d375b7b72196182b\",\n \"transactionIndex\": \"0x41\",\n \"v\": \"0x26\",\n \"value\": \"0x605bc361806de00\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x1b25bbdcb0497916d4e216de7695936b753af74fcf5072d42f8c283230603c66\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb744\",\n \"r\": \"0xb8e1627ca996824028174e592ea4f8959a6e36918e0d8305271bd115913f8d3f\",\n \"s\": \"0x23bd59754fa43ac0983b1237ecf92220462b2f4972649dd2b612c75e9f7bc051\",\n \"to\": \"0x31cd13d7ec3c181e7e1484676e9c67f60fd2b34f\",\n \"transactionIndex\": \"0x42\",\n \"v\": \"0x25\",\n \"value\": \"0x2c70fbe6679f560\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x6c12b55c34bf880d0968aaeebb174abf1db368a9eec8d40915643b23bb2e11ce\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb745\",\n \"r\": \"0x8cc51b9af65a36450d3c369b2c54a27e184fe423ed9efe8440788d39c4729648\",\n \"s\": \"0x5e31aefa873dd1773b37ec079716367e9c5351e5741e8a8bcc5428b6cb937c3d\",\n \"to\": \"0x0cdf633eaf4a217b7fb5dd7014f0ade143f57d05\",\n \"transactionIndex\": \"0x43\",\n \"v\": \"0x25\",\n \"value\": \"0x2c6ae57b0bb1080\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xae0e9618fe17f3fe49650ab85a1c54f78981a9f5b1abca522e9bd49cc13bb27c\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb746\",\n \"r\": \"0xefc1ee721f5ef63d06e8362589a54daed0778cde6da34320453ab307fca8e0ad\",\n \"s\": \"0x501e54fbb369f08dfcbbe4abcf3f4e7851a507dc6964d8ef9eee7947257de67c\",\n \"to\": 
\"0x9ba4251e594b0bd6cd277817c1daf276b8ba5282\",\n \"transactionIndex\": \"0x44\",\n \"v\": \"0x26\",\n \"value\": \"0x2c7bf6476be5ba0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x6fe25fff1a571f1a36c05a7ebde7d62c92017e83da2c4bc7ab61b978a1fe9ec4\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb747\",\n \"r\": \"0x391ce9fa23d1f18ea5889dc9ba4b5d11092278062727e9ec43d979bae804d3ec\",\n \"s\": \"0x57ef7304d5dda037855236b8eeaadd03e7fb1086176174c8415a2837952e3174\",\n \"to\": \"0x3f0f2dbf8791265b96c99e45092c8150a51b677b\",\n \"transactionIndex\": \"0x45\",\n \"v\": \"0x26\",\n \"value\": \"0xb1d7ba55062468\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x3b5d8d3b2600fe94b9860fb6203518695afce4cd2d427dea9a669a5380b7ee94\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb748\",\n \"r\": \"0xef504322b93afdc08ce9dfc59d425ddc32c94ec9dcebe4e7b3dd8ae2e9466a91\",\n \"s\": \"0x677970eccf5282b22275509e5022d46179a03d3366ab9c22085634eb997ae815\",\n \"to\": \"0x302d0ce64405e8be3dc119d6ae2bd6e5a48c41af\",\n \"transactionIndex\": \"0x46\",\n \"v\": \"0x26\",\n \"value\": \"0x16473b44f98b930\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x6c3e8efcc8365485082999870d5533d81c13f26c9adfe97bf38a222d48cbd00b\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb749\",\n \"r\": \"0xe7a581c4d104b5302b19295ed80bfc6a3a64d21d543da91898238cd1cb3ad2a5\",\n \"s\": \"0x3c07da568182837d3103a63d2f3c68f231ccc9b60dfc806c045238acc33d1bb9\",\n \"to\": \"0xec826ac0299d76876bdbb0d897422f04d268e5b6\",\n \"transactionIndex\": \"0x47\",\n \"v\": \"0x25\",\n \"value\": \"0xde1e152c63b8c80\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xb4525371618e51dbddd689dc73709f7c29b040e212554e65747615e81200d6e5\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb74a\",\n \"r\": \"0x586733a9da9f2d60511e21ee1aeaa2bc58eb1711a1a5e0bd753ea0e1d6bb62a3\",\n \"s\": \"0x50b6f93e93666a17a22f212ec6d465ede4c067e9c7569662dfba7067c1893b1d\",\n \"to\": \"0x8bd2c46d54fd79a6c048b85ad5db3938609f3e11\",\n \"transactionIndex\": \"0x48\",\n \"v\": \"0x25\",\n \"value\": \"0xb32a39169ac298\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x01699f65334e110118f95e7ccf8afae3a0717a1cb15c3ea05c14cbee92affa96\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb74b\",\n \"r\": \"0x98eeb922856e1a4027b1327910987cbf84fa0d4aea1e5318e9555ee0ca2dbdbe\",\n \"s\": \"0x5c7465dfcabb0a54c232f4e5b5078ebf4993a62cf8b34b4c313921a442d168fe\",\n \"to\": \"0xe3a8371fbc76dbe536aee1ab03d975c1bae60dce\",\n \"transactionIndex\": \"0x49\",\n \"v\": \"0x26\",\n \"value\": 
\"0x86098b8df85b1800\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x5926957588cdd38e19146ed1664e9d24f0e003224eba6ccd19438d918c036a82\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb74c\",\n \"r\": \"0x92e143738143c86209cad69e4155d3b3840f7b2e3bce5efe50ef9bab189d6ac9\",\n \"s\": \"0x3bbe11a18e387c5e5c7dcca83b786b73047b69c1fff0ff7f034ec6b916e93c5\",\n \"to\": \"0x373980d1fc90652c8a6dbf4e5aca4cea73f64175\",\n \"transactionIndex\": \"0x4a\",\n \"v\": \"0x25\",\n \"value\": \"0x2c7459d916c6ec0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x1533a966b119b3caff119fec489a4643de35f383187fbab50162ed322b926107\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb74d\",\n \"r\": \"0xf4c50c6e0c31c361ce3ce6feb8c608b46ec90ce4bbb23aa10b48ebf40e5a5117\",\n \"s\": \"0x6a0bd4ad5763c9abf9ac607c1191dc7ed865e6c60d8e5350febb9cf981c25d\",\n \"to\": \"0xa642a25a16aa6ebc831a379a74df4ec8f1ab5e41\",\n \"transactionIndex\": \"0x4b\",\n \"v\": \"0x25\",\n \"value\": \"0x7397b7ab1a08300\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xdf0f0cae9c33a9f32be8f3858a151b9a241ad12e36e4fdf371aef9d6dbde6544\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb74e\",\n \"r\": \"0x22a5ebe14afd68ce629aa9f6bcdb9931a1bb793914293c3f7aa2ae6b6b0ad248\",\n \"s\": \"0x5b91de590b02f33f46bc4245ab90a8feb2001ebc16e6042414f0559e2351bbaa\",\n \"to\": \"0xd8990de934d27079af925b83692ba7d7d597b218\",\n \"transactionIndex\": \"0x4c\",\n \"v\": \"0x25\",\n \"value\": \"0x2c781dbe764b060\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x1f9429a8cb848f91bee57efeffa54b3f633339fadc21f64611db2779d843fe53\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb74f\",\n \"r\": \"0x2bcd6a1c3db9183140584f2ad8b75604824edeb55d651a14be7d11a3833aaa52\",\n \"s\": \"0x40ca4a22f1118374e35ade75ad529bdf0a2e6fd998e6d47ffca8fee5e83303d3\",\n \"to\": \"0xd746b41a1250b6bbcc222ecbb006397f93c100b3\",\n \"transactionIndex\": \"0x4d\",\n \"v\": \"0x25\",\n \"value\": \"0x2d2d9911ff496c0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x63fd68e2aacadd679387c03be8e5b4027527ca885193369e73723fb0953603ae\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb750\",\n \"r\": \"0xf238de208ec260da092a766b8c2cd04f5dbd30c8683c507c8b5b5e5a57579be2\",\n \"s\": \"0x1396c59506c9472779d6c183af57d572843120ac8b76c3b3692a7d14af5505db\",\n \"to\": \"0x1879b8b58f47dfdf6ca97026c850e96db7bbeca3\",\n \"transactionIndex\": \"0x4e\",\n \"v\": \"0x26\",\n \"value\": \"0x594444c925e3700\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n 
\"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xaf64a49cef1db28a05e8f43d0786d55aea8bed17ab3bd2f1d8be6156b366097f\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb751\",\n \"r\": \"0x2163c3f54f58702a168951f5a99b4272dc78148fb640e70d570919220a4b731\",\n \"s\": \"0x53f326efa133b4fe10d3713334047bb37cbe4b46e56be241e43306a4d8438fc0\",\n \"to\": \"0x64b66f94e210e46f713ec6311a504371c1945875\",\n \"transactionIndex\": \"0x4f\",\n \"v\": \"0x26\",\n \"value\": \"0xde2ce02cd3b1800\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x89c3a51161ade0b9797eecb80924f06397798f89f781bf91ea7513ea17d1d190\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb752\",\n \"r\": \"0x568d4b9ffb557e6a1a2af97267337bf1ed6e8cb5588dfd1940a472cd1d29f784\",\n \"s\": \"0x27fac7e588d657690e0e17216ea72668e8124881d65ba7a5e42b47db55e3a3a5\",\n \"to\": \"0xbd3fed307b9d6465bcc390242ce22e8b58240627\",\n \"transactionIndex\": \"0x50\",\n \"v\": \"0x25\",\n \"value\": \"0xb1f8a5e47f77d8\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x916b92d8f125dc9d40252a5a92d78b1ad09d03ffb45d01f50a9398dbdcbd1671\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb753\",\n \"r\": \"0x32d17531bc63efddc3accc694c22e93be94ebbb89301491eaf9467cc60d7c71d\",\n \"s\": \"0x1d25f7a900e3445b20a9ebed138b61d07b3e2e92c01563e9f8548dad2008bdac\",\n \"to\": \"0xaa8d32b411607f98aea97f7b0785fa84a16b363d\",\n \"transactionIndex\": \"0x51\",\n \"v\": \"0x26\",\n \"value\": \"0xb1a4cdcc702f78\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xa0146dd07d3c207bec3884095ff31bdf1428a4141cbd0beaf6b71ce1a26de8b7\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb754\",\n \"r\": \"0xba4f5072eb25e5b48b418a54ac7fb10d42d5d32a72cb99a6be177395003cf8d1\",\n \"s\": \"0x7f5c0231bda63a7de1206205e7ffa7ef8a82989ec4eaab4359cbc0104db18db5\",\n \"to\": \"0xd631a9e87f94724f3a8f76886933b01b066140ac\",\n \"transactionIndex\": \"0x52\",\n \"v\": \"0x25\",\n \"value\": \"0xb3bd79fbf3c0b8\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x97b2c9f91b2ac981082243b0382770c8ad0d15e1194e1baa5b3807fcc76de2e2\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb755\",\n \"r\": \"0xb697ae43ffbbdfefdba322da4abdde157f4dbf700bacb5b62969123c62cf6793\",\n \"s\": \"0x1376b18ffb1c645038ecdb7a4aab7c58a6f4b5f61283310e3f3690f9bc0a5137\",\n \"to\": \"0x82447ad18db8252008428acb048a31c46f21992b\",\n \"transactionIndex\": \"0x53\",\n \"v\": \"0x26\",\n \"value\": \"0xb339c9b1db4f68\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": 
\"0x3b9aca00\",\n \"hash\": \"0x2431dc390fbd38d6728c24971b49eaa852c18587c25faf79f21875c2b4a8eecb\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb756\",\n \"r\": \"0x454b145519b2a082d7eb2fcc69fbcaec683dcb02b877f729dfb9fc11ae7074ec\",\n \"s\": \"0x443d15d9f9ce536a93e7a8dfd50da08444185baa4eb702245027c22912da8b87\",\n \"to\": \"0x48c1d3590259d1b270d6a96e2dab7585eb04b42a\",\n \"transactionIndex\": \"0x54\",\n \"v\": \"0x26\",\n \"value\": \"0x2c71ac3436c3420\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x89fc021d1de5a9a97477b9e186918b47336eb10f33f9323248ee54eeabc5e903\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb757\",\n \"r\": \"0xc8e73dac46476aeb3a4a94e5c14ffa9034b353a649e310431bbb49e3e8ab8a10\",\n \"s\": \"0x1769651fed5a5bd2306d0cfb2f1bc299f183451dddd21683f3ac10fe4b15ee13\",\n \"to\": \"0x1c63715ea7fd03a985891a96063c9cc355e4291d\",\n \"transactionIndex\": \"0x55\",\n \"v\": \"0x26\",\n \"value\": \"0xb21f485d2e2918\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x078f4578443ddaad8c700343d0b9850d574b8d195703a8e93b128f965fa541ba\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb758\",\n \"r\": \"0x2923ca1b68f482499370a497e7ec8e3ce22c6b1a603e2010ca6502741590922c\",\n \"s\": \"0x1dc4cd2f003b45067991b77d2e2fd6a436660dceb7b1695d116047e6d6f5c598\",\n \"to\": \"0x98c5c9dc6dd44d6dc07cbac11df9e0f38594cf42\",\n \"transactionIndex\": \"0x56\",\n \"v\": \"0x26\",\n \"value\": \"0x19972c3b6177c800\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x56ff6aa7037963a033d837ea1b86323becd977c5d562506d5bb038fd76dfb621\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb759\",\n \"r\": \"0xd0067b97bdeb3b1d87433d84a0533a4bb893af2178092e4d433096741dc6388d\",\n \"s\": \"0x49126e3d3facf36a04a515447e63755584465ea19525b539bfc982180d701461\",\n \"to\": \"0x7a5a57f26b387da7a4371de66218713e0d250027\",\n \"transactionIndex\": \"0x57\",\n \"v\": \"0x25\",\n \"value\": \"0xb1a8f085786420\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x0e406ab1bc38a4d3b556632eeadec02d2854ee7cfed198bcfc3bfde1491d315a\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb75a\",\n \"r\": \"0x32f4e263b43672913645c7ff3fb1922086fe9f63fcdf144891a2afbc6f5ee646\",\n \"s\": \"0x42d3706c9bd952e843615d1d8354c0c575e15924f5bda8e4b384d5e546e99bfa\",\n \"to\": \"0x2277f1eae2906fef37fa8b88daf6c1d294b4d78b\",\n \"transactionIndex\": \"0x58\",\n \"v\": \"0x26\",\n \"value\": \"0x2c724b607650460\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x2b3d8868033aba2a76eed36d08e2b3cfd9a43982b118ae5e45bb885732170805\",\n \"input\": \"0x\",\n 
\"nonce\": \"0x5cb75b\",\n \"r\": \"0xb0a560609f86255d2a7241f4f3275f360ccfbd793fb23ff58acdf1bb6d7fb9cb\",\n \"s\": \"0x3211e8eed210eea43345153dc547a9dad3f54b7569db983dc9169868902b3c18\",\n \"to\": \"0xeb141abe061738026222b0422455fa9ac737507f\",\n \"transactionIndex\": \"0x59\",\n \"v\": \"0x25\",\n \"value\": \"0xb2b562708aadd8\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x993d2c193451e15aaaed95423f3a41033a8b4e68e36b494c4b4d54e0b352c690\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb75c\",\n \"r\": \"0xe34470737f5d91059d4ec00493a81172661c25ea8a2afff26e9d0546d78e94eb\",\n \"s\": \"0x168e319b2259933896f04918cd2d9c99c7ea31f2f4e475ffe572e6c58baed107\",\n \"to\": \"0x533945b33a17d381e7f812525918d3af18429d9e\",\n \"transactionIndex\": \"0x5a\",\n \"v\": \"0x26\",\n \"value\": \"0x2c6f6248ba003a0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xf1ec573f27e57c45b450b704aa929846b9630a1193067b7c97de5854e6785891\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb75d\",\n \"r\": \"0x74f42beea390195eb1064069c25034afd85424d6daf4c6f6774d99b875dadd26\",\n \"s\": \"0x45b9ed50b97583af52ad14a62f71dca0523f89e125b7320a709d4259724d7189\",\n \"to\": \"0xebfadaac4be4db28fd87a6e9fe5fecd0a2a97ebb\",\n \"transactionIndex\": \"0x5b\",\n \"v\": \"0x25\",\n \"value\": \"0xb29468651152b8\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x7bee48465a21cb166309f64ea7e2b97ee27ed74985eec2504baec98a6bcd5207\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb75e\",\n \"r\": \"0xd5dd6449bdab6ca5ea716e093bb4fd7aed875ef93fd1f171ee252ea1a9b382bc\",\n \"s\": \"0xc994b415ce8c13610bc9c0296e6f42047eb5bdd64dce156b02ffbe542e544cd\",\n \"to\": \"0x9eb8078690f32e6861c0044ccecd405d1e5138b4\",\n \"transactionIndex\": \"0x5c\",\n \"v\": \"0x25\",\n \"value\": \"0x1640c2c623f0890\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x8aed3b66d21a46a9e17a2c804759590580562cce0e9423f5997879003cef4807\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb75f\",\n \"r\": \"0xfd64abd7d8a32f6da9c42f808f13d612af54d21633f68ad12f45412667fc28c\",\n \"s\": \"0x2c6de20c22dd4bfe14d34ae4d41e4cc0c03abcefac90ee86ad298ec6a3b9d6ed\",\n \"to\": \"0xe1a021c7c47c1ac46e79495b867f4f08d9e73727\",\n \"transactionIndex\": \"0x5d\",\n \"v\": \"0x25\",\n \"value\": \"0x16400c396da5fb0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xad1cfe1487648286afa7e98e490cfcc930e4553b2ab87c90be67ec11464dd1dd\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb760\",\n \"r\": \"0x5dfeea1b85af92eb24bd28268e44b81c67724103705ab2a8623bd7fbeb769faa\",\n \"s\": 
\"0x599f59cb24bf1603d29ad05ae3059e3bcf4de735e4d2bc5504faf0812914a541\",\n \"to\": \"0x4f9b177c07e56f699b5e6e5dc76dec461e4f9dac\",\n \"transactionIndex\": \"0x5e\",\n \"v\": \"0x26\",\n \"value\": \"0x2cb7e04047d75c0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x9b00f63f507af65319947aa132803be16883659ca36808c87bea8f56e9768c5a\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb761\",\n \"r\": \"0xdd312aa6d3b11cec7f5da22e5bdafb984ef9d223b57ca2a276ddb30ef7f942c4\",\n \"s\": \"0x48e5c5d7a2cc7c3833e20f31733a7e6f47160d1ad3883a2df2c3920d734214dc\",\n \"to\": \"0xf772ed8f785d032f3ac7af5a014113523a527df3\",\n \"transactionIndex\": \"0x5f\",\n \"v\": \"0x26\",\n \"value\": \"0x2c791bf3a6e42a0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x30d2afb4f55b2c09a0b79b1ffaf4621996bbbb3245f98704c7504873a9c68225\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb762\",\n \"r\": \"0xcf8e0d5985f32741856eda0b5377e520ecc6211e39d45215921b8815bfda8cac\",\n \"s\": \"0x2761300d134e3ad990adc98b58910dbdc2e307496a4bf69a6daa2ba32be6977b\",\n \"to\": \"0x1ef32e5434eb01a78e49274829b9f410d645da6d\",\n \"transactionIndex\": \"0x60\",\n \"v\": \"0x26\",\n \"value\": \"0xb1a4fd9d655968\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x69e6e2454526f230788509fe5006d3bdae369417aecbe87095cfbf8e85d94697\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb763\",\n \"r\": \"0x7f5c9c3f77da47835cfe9bc6e50bc0d060f2ad394fdd9dbe7588ffb882aa51a4\",\n \"s\": \"0x3a22d95b41082d413950c0f79b76635fb548602b9cc91d6347b4486adc58aa35\",\n \"to\": \"0x33e7463b36925ecf60b718e66eb18c8255cd6814\",\n \"transactionIndex\": \"0x61\",\n \"v\": \"0x25\",\n \"value\": \"0xb1dfbad3e0fc80\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xe0417da1a5c1b207c3262e8f8942340aab263a1bfb7d603ca768dc4de7db367f\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb764\",\n \"r\": \"0x16030a94b26a661f4c0d0a105e8659e16d5cc79e6ff2b10609db352f93c5b9f1\",\n \"s\": \"0x1b682978f8749615b411737210790ffc4e7f556f118412392e0f31bf7247fa7\",\n \"to\": \"0x63c33eb24e0185942d31e042f65662ab3a4118fa\",\n \"transactionIndex\": \"0x62\",\n \"v\": \"0x25\",\n \"value\": \"0xb235e93b307510\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xfb83fb1119fbabcf8537a967c1172489a1c5d1fc81e06f73323e91b1990493f1\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb765\",\n \"r\": \"0xd6b8aa4bc9827d05942b68c4438f5d124239b7901dd249eac014abfb4bb73e7a\",\n \"s\": \"0x5d58ebcc760ad7780383821920277397e54fb0657032ce8eb201938e4fbf4552\",\n \"to\": 
\"0x596d1b5946b6d55ecb008b412be6264e4c8c0d73\",\n \"transactionIndex\": \"0x63\",\n \"v\": \"0x26\",\n \"value\": \"0x2c6ec7914b0c9a0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xe5526d7279942abd991c6816534469e7f976953c5277a91830b2338917f3f7cd\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb766\",\n \"r\": \"0xb32b187e72b58f5de654af1be20002ed0c73330423328832a90b12fc6f696d11\",\n \"s\": \"0x86ee01300694189c7fdc9f65601287921791d74be29ad5e3c87e96458af24f3\",\n \"to\": \"0xad3af3adcbb6089b9242ba083f716e7f3bbba886\",\n \"transactionIndex\": \"0x64\",\n \"v\": \"0x25\",\n \"value\": \"0xf9701e321e7838\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x9e6d57d0e787193389365a13fe86f17a8f056f3f72fb4e00099b628fbfbbc3c3\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb767\",\n \"r\": \"0xda4b61d97f7b6a694a4f96b47684ff0089efab03cd57b464f6dc0b9cc155d200\",\n \"s\": \"0x13b11eab20a3be284bfe4e02db683138d1611626c5c42611b57c8b406a13d89b\",\n \"to\": \"0x8a8a1920f785828df5b7489501432685d22dac8a\",\n \"transactionIndex\": \"0x65\",\n \"v\": \"0x25\",\n \"value\": \"0xb1e3e63bf1c9b0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x71de40e2fdeba40369273564a43d6859184145aa639e446a460bce5492a0bfc4\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb768\",\n \"r\": \"0x76e89bc02a97fcd1d831da4f4783c6b19902cb49132b1ee3f4d6476c7375fd2\",\n \"s\": \"0x1449a49f46667eac9cec2a850fcf7514341b813875ab4ae4a0981d5b016edd51\",\n \"to\": \"0xe135e5566c151905b47e41b1ed07693345fa23f0\",\n \"transactionIndex\": \"0x66\",\n \"v\": \"0x25\",\n \"value\": \"0xd69d6cb02d9458\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x4641ba77921f5d34c83166706d70d862f1420785ca49f5738654b39de34654d4\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb769\",\n \"r\": \"0x3e471f06d7ae86b1311b1ff8d89822a8d6a639c0fd2dd35e763c267aba59c325\",\n \"s\": \"0x49c1571dd210c0f51d1536ca11bf0f9502e41ed6f3ab40611860fe7e9f9c3c07\",\n \"to\": \"0xcbfba74f348fe4e53e3a31f94b7a1bc1f5ecc6b4\",\n \"transactionIndex\": \"0x67\",\n \"v\": \"0x26\",\n \"value\": \"0xd56c3c6cfb00f0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xbc15a71f8ef727062ca7acbbce69d01bae4b59560f20e3bc82dfca7f35757275\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb76a\",\n \"r\": \"0xb3ce22008d97843001f15325d1bbe25ea2a64aa8fd11541314e7af2deebb9f40\",\n \"s\": \"0x42a2ec7435c8c563fd45d0959f6df5cc0b9c3a6bbfba4726880fb620b37f96dd\",\n \"to\": \"0x5b60ae58391e91e924a081e94e045385e7dad8dd\",\n \"transactionIndex\": \"0x68\",\n \"v\": \"0x25\",\n \"value\": 
\"0xb1e7f97c4b2288\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xc89f3abd9f5bb96afe70781a1ce74faa15139b225ef5d134379cbcf0f1dd0483\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb76b\",\n \"r\": \"0xea327552cb480517e73447cbb856be8eb67253e46c3d527b9713f1136baabe84\",\n \"s\": \"0x5e48fa72eeb3765a3372c1fcb50e48318bffd362c9e758d98892b2cea1838d13\",\n \"to\": \"0x0825d3e9ed809c4aca0fba44bf4eaf0531796e85\",\n \"transactionIndex\": \"0x69\",\n \"v\": \"0x25\",\n \"value\": \"0xb1ad051c5ec5c0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xb903de61c2df42c70a466fa16ab53d5a1c85eb7134302ad4c86aeb1a50e5b607\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb76c\",\n \"r\": \"0x6ad3601d5848fc5a1c8d12e12d03b420e7e7abb513f00a8627ec2b62bb97fedb\",\n \"s\": \"0x7491636d4f4318fc68799dee7a7d465e38dbce835aeb1489e2378b6455b7775a\",\n \"to\": \"0x0bbae6cc1ccfdf095aa20dede7bcf4fb23735247\",\n \"transactionIndex\": \"0x6a\",\n \"v\": \"0x25\",\n \"value\": \"0xb390c95615e100\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x112e217dc25046169d844e75d4b032eebd5e06b0c0811eb8ff22b99076fba82f\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb76d\",\n \"r\": \"0xe54ddd91e2b1b7006c47385a93fcba2c2298c0cb935db53deed5272fad7b4664\",\n \"s\": \"0x3232b2f96299d62fd7437cb47b87ecbc8a9b4f301c7743ae167ce0aab06dad62\",\n \"to\": \"0xd1bd9404a66c965398cb881ed2533db26c1df5d3\",\n \"transactionIndex\": \"0x6b\",\n \"v\": \"0x26\",\n \"value\": \"0x167b26629ea70e0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x84ab0b063eca3c51f4e223e228a971ae8cfdf5df48d20cccc7dbe29ea5c873d3\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb76e\",\n \"r\": \"0x9c263853518c78720192e2bf68cb68be8f4156ceca617ee2152938307d520cc\",\n \"s\": \"0x63145c4c1777338791cc96e3b8759cd58b4a0b8fd8cc3b8ee362e5ae87b072a5\",\n \"to\": \"0x3beade02134c5353f9c9d58c9eb82ea6ddb8db30\",\n \"transactionIndex\": \"0x6c\",\n \"v\": \"0x25\",\n \"value\": \"0xb1b38da581d390\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xab77fe104699a29308ab3737e9ff4037e62d17f606138b677cad42a2c7609fca\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb76f\",\n \"r\": \"0x57cf35edc237131fb0fa818c12842a6b2cff4e6b1bd93ca8f6c494f9520fedfb\",\n \"s\": \"0x3eb48c1c268798acc9edaf52cfe5cf539294dbcae411847bbb3486a583c243a3\",\n \"to\": \"0x231bd10c92b2c4d7571f271f6cce91c45132feaf\",\n \"transactionIndex\": \"0x6d\",\n \"v\": \"0x26\",\n \"value\": \"0xb1afb6241d06a0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n 
\"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x67c3aba2f803c967edf680806fa7bec3b2c652a1dfe750d3561cf8aac6601293\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb770\",\n \"r\": \"0x810b781733e2e347aa21a4e1788662582e454869f8ce1fcf090cc26f63e5a374\",\n \"s\": \"0x322a0a01b0c8cb742f8085cfcc3edb15eb57c9b11ee9ce8b079358fd9396b9f3\",\n \"to\": \"0x7cc3fe108acf4e68c498c957013e6659e1c8c041\",\n \"transactionIndex\": \"0x6e\",\n \"v\": \"0x25\",\n \"value\": \"0x1677880a66d2010\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x1fe85fe20116ae1f0f1c8d05055b27d5133e88dffa4fca1783c830506914d8e1\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb771\",\n \"r\": \"0x518200ea0e3f522bccc2767bc50e9874a5e9822a33ce9e7226c0261975283278\",\n \"s\": \"0x65148b73930b06f54b0d036e940d33cf152f39202e1d3e7bb93e77caf89f8ccf\",\n \"to\": \"0x65f93381cb0415e59135eb0977aa44c18c61ae5c\",\n \"transactionIndex\": \"0x6f\",\n \"v\": \"0x26\",\n \"value\": \"0xb1b91f471f1a90\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x4a2d4b886f67b079efee2d3678e68d639e39db2e73393dcf396d551bf2755262\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb772\",\n \"r\": \"0xeaa81aa78bd31d126739b16223f38c4ffa9e1aca334659443d220899389cff27\",\n \"s\": \"0x310868febf9e13f2d09027fdd6bb3aa493368d17d765b4966a51ffbdd0732fc1\",\n \"to\": \"0xd09746940b05e09fe51c3ace785cd6caa9add855\",\n \"transactionIndex\": \"0x70\",\n \"v\": \"0x25\",\n \"value\": \"0xb1a5b8c419f778\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x07d7bc2b72c9d924c8cac1522e6e39cb753a3120f31fa76d96f34a796d8f373d\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb773\",\n \"r\": \"0xe5a557f6648c01ee2fc151161b738e1b33dabaebf27712268d8743410b1a0c13\",\n \"s\": \"0x67924c2b765689cceee00af95c49e512a9e48fbaf3efb97206648c2b8e18f4f1\",\n \"to\": \"0x0ff98240bb1ec2d183b9b065564327f93c56ce26\",\n \"transactionIndex\": \"0x71\",\n \"v\": \"0x26\",\n \"value\": \"0xb2080918fe89a0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x2ac79f3151a5378203a1e8d4886fab18d1d65abe3b79a97419e7be1eabaad02c\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb774\",\n \"r\": \"0xc2ab6b68adec112cd37aabd357d63c042e7139c624c5fd209f566e792e2a8f76\",\n \"s\": \"0x616eff7c88fd1086d48bae453c4eb1067c68f91dc6b70eb9fb4db5b5f3777848\",\n \"to\": \"0x7bfeb23f2eb6a5a7b944c6e5fd6aab04e1f77baf\",\n \"transactionIndex\": \"0x72\",\n \"v\": \"0x25\",\n \"value\": \"0xde1ff29848a7880\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n 
\"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x6391897b44f8fbd7f2666aeb1fa902d09f0930c1e95a5e5d9f797dd1dea7d4e0\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb775\",\n \"r\": \"0xd664ed5410cad16e3660d624ac2fc987a0493a72acb23599093f7057f2d5bf1c\",\n \"s\": \"0x79c8b33a656e19930c647df9e1ea72b45317da2cc9e75319e3af8e810d936c1d\",\n \"to\": \"0xa6ceba80cb1a09b510b257970e146469c199576f\",\n \"transactionIndex\": \"0x73\",\n \"v\": \"0x25\",\n \"value\": \"0xd536107714e940\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x8cf0bd2f53b03639d5bc153db9e932c0c47731f0c9b8e76f5fdf80ef9bc50acc\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb776\",\n \"r\": \"0x1c1370827f1845202adbeb763aefbc107731ae728f8ceb5a1f7fa924188aa3b9\",\n \"s\": \"0x2cb9cc587ace1f7e2c170409a845eafbe49a117f7fc2c0d0c8b88c6bedb23a2b\",\n \"to\": \"0xb31c948f71110eb41fe7deffe79ef5b960250f48\",\n \"transactionIndex\": \"0x74\",\n \"v\": \"0x26\",\n \"value\": \"0x2c6fd578e2b6f00\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0xac2c134d241fc55963631630d7c00ed61d4a841758cc397c55daa0ed080aeadb\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb777\",\n \"r\": \"0x5b475a8a8c0f3989c770d5d49f30cc69061ddf22a51dc15a813a1b3e01de0486\",\n \"s\": \"0x3e13fc2642f46909a237694df76cdd8180ed9179279fcfcd2477aeca872e9081\",\n \"to\": \"0x0c2f9f906ac17af23ee472edbde438c50df33377\",\n \"transactionIndex\": \"0x75\",\n \"v\": \"0x25\",\n \"value\": \"0x2d10cabd5d70340\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x52bc44d5378309ee2abf1539bf71de1b7d7be3b5\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0x3b9aca00\",\n \"hash\": \"0x97cba61d297666f9faf35f4e6986742b33db628a82d6e9e33416ec6a68964a24\",\n \"input\": \"0x\",\n \"nonce\": \"0x5cb778\",\n \"r\": \"0x547d76f03cd28bcc3c38957e832eba49409b5f9efac6ee27ac1f4f34183334dd\",\n \"s\": \"0x76833d5e00e95dc0c05a9716ab953c9cca3b69e2da67ac68943e0720499ce8b5\",\n \"to\": \"0x964e89d6e5e0b5e8fc1598c2889d453b65949123\",\n \"transactionIndex\": \"0x76\",\n \"v\": \"0x26\",\n \"value\": \"0xb27e01ae8d00c8\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x03747f06215b44e498831da019b27f53e483599f\",\n \"gas\": \"0x26d7e\",\n \"gasPrice\": \"0xcce416600\",\n \"hash\": \"0x5dcf90936829a2c2db4de2d3b54a53f5a2dd8acd818db628ddd1cced0aa243b0\",\n \"input\": \"0x6ea056a9000000000000000000000000f230b790e05390fc8295f4d3f60332c93bed42e20000000000000000000000000000000000000000000000000000000082a9b244\",\n \"nonce\": \"0x43233\",\n \"r\": \"0x9b38a08ed3338ae568ba314cd028d4c8f7a0a90d15091d8cd1df86595a534104\",\n \"s\": \"0x3d8dcad579f63e588b8381760e7a8b857d0afdd342bfa7f251e72da3fbbfd55e\",\n \"to\": \"0xadec83f121f6161be49e508e9712cf18cc949afe\",\n \"transactionIndex\": \"0x77\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x46dcd25a517a77b3e52cc0f8627b1136cea093e2\",\n \"gas\": \"0x88b8\",\n 
\"gasPrice\": \"0xc570bd200\",\n \"hash\": \"0xe4d96b746ea1eeed7ec9bf53c4add2f413a2162f1874dcea671e8602cc62ea06\",\n \"input\": \"0x\",\n \"nonce\": \"0x4410\",\n \"r\": \"0x5b7093ef969c34019adcf4805cbd025475e9fbb0759e25df0e942c13795d8cc9\",\n \"s\": \"0x6659628dd7dcd9c6b81e33234235ccecd4c46bff5ae663949ec1f1f5e1b7e919\",\n \"to\": \"0xdd3ca6381b84b3c5badd24e68f67d28d6deb5f17\",\n \"transactionIndex\": \"0x78\",\n \"v\": \"0x26\",\n \"value\": \"0x32c2437100ad8000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xe4c89b9fcab29c5bee3971b698cca4528f2644e2\",\n \"gas\": \"0x7a120\",\n \"gasPrice\": \"0xba43b7400\",\n \"hash\": \"0x7f3e8d6abdee3c1b9f1837974c7add27f15356a37350dd6ad53cc95e9306702f\",\n \"input\": \"0x\",\n \"nonce\": \"0x4fb9\",\n \"r\": \"0xd8a3476edb17e4e5b009bc7472b2f260f7f7a6117a39da09ae5ef22874c9ad98\",\n \"s\": \"0x5239b2ecb6cbcf9cc120726b51a4f8f120178980896f530bda6d4d4a776fc187\",\n \"to\": \"0xa7a7c73e148617d6bc872858502166bca13d64fe\",\n \"transactionIndex\": \"0x79\",\n \"v\": \"0x26\",\n \"value\": \"0xf447d51f87ce000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x1e70c249ce8c0082c3b3674e354ab0aa0e2cd382\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xba43b7400\",\n \"hash\": \"0x508d131fee75d78b921931f000638fb54d6c1b9db2890a51dee8b0f88f5f47a9\",\n \"input\": \"0x\",\n \"nonce\": \"0x7\",\n \"r\": \"0xa2621eda0f448a2bdc247330866b571960e18c1015cfa75442f41b50d4015ca\",\n \"s\": \"0x6a2d95abd857cbcc877f0148479b639982f2415847ff18c838fdf3e42299df1d\",\n \"to\": \"0x618180e6a5ed320b01c882580c66e73308f147ab\",\n \"transactionIndex\": \"0x7a\",\n \"v\": \"0x26\",\n \"value\": \"0x299adff3c2660800\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x0975ca9f986eee35f5cbba2d672ad9bc8d2a0844\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xba43b7400\",\n \"hash\": \"0xa9e5bb0f8d7a112c495483df1b58573a7d37bdc8f9da8d8ae6aef5b2f899c478\",\n \"input\": \"0x\",\n \"nonce\": \"0xc2e8\",\n \"r\": \"0x2446959cc5181768fd905815bda68f6a7b3beaaba12da903a19c9d9902042d7b\",\n \"s\": \"0x18cb7a1bb34b3ca671304f91614043c09e01766a1f9c1dd15ad77f6064ebe402\",\n \"to\": \"0x294198e8e48266c512de7b0f7be44a855c4bf87d\",\n \"transactionIndex\": \"0x7b\",\n \"v\": \"0x25\",\n \"value\": \"0x6a94d74f430000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf5bec430576ff1b82e44ddb5a1c93f6f9d0884f3\",\n \"gas\": \"0x28b0a\",\n \"gasPrice\": \"0xba43b7400\",\n \"hash\": \"0x7408dd1c976e4b3e06bf44990c7c4f69564726a09889b008bf24f55c4befc0c7\",\n \"input\": \"0x\",\n \"nonce\": \"0x2645e\",\n \"r\": \"0x47915a69a5e6c6546011766d312e8ca22c7b914e77cb7ca4c67e05caa776ff7b\",\n \"s\": \"0x42aa1cb124a831d16c64941544dda91179947f6a273e50cfb43867d08f757f8f\",\n \"to\": \"0x6f3778670a129a3b3b21b5d8b1d1317d8bdb9c6e\",\n \"transactionIndex\": \"0x7c\",\n \"v\": \"0x26\",\n \"value\": \"0x385cf13d66b85400\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x54ad3dc3845424e1cf30349e547ea054f0548bed\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xba43b7400\",\n \"hash\": \"0x315acd3dab34f042e475397f439593644789313eccdbd5be5271742bb5ad8f4f\",\n \"input\": 
\"0x\",\n \"nonce\": \"0x8\",\n \"r\": \"0xe9904374ec2a1539a0b04c457c4a2aad49f0c80901ee0be6e4777dc00bed1f3f\",\n \"s\": \"0xb29edf848caea532131cbbfabb51b937cb1d77d0a2037abf7db09ba04fddf95\",\n \"to\": \"0xe4c89b9fcab29c5bee3971b698cca4528f2644e2\",\n \"transactionIndex\": \"0x7d\",\n \"v\": \"0x26\",\n \"value\": \"0x2c2ea2f7e203a80\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x37ead2980b50a96136497f2aed8f32344b06b15c\",\n \"gas\": \"0x15f90\",\n \"gasPrice\": \"0x9502f9000\",\n \"hash\": \"0x333458efeae4b33bf96c305d9f1861ba9abfb9a43236a13b8e98e5c469edee47\",\n \"input\": \"0x\",\n \"nonce\": \"0x246e\",\n \"r\": \"0xbbe6db4b5206f5c9d2581aa9cc7830a92050efc204245cb975328eadd2cfd635\",\n \"s\": \"0x34cd89235cccf5e2e397c3885d4db6ed131f3dd95f8d307b8761ea893aedb800\",\n \"to\": \"0x6d2d36eb96259280ffb20aa0752a308ad8e4606e\",\n \"transactionIndex\": \"0x7e\",\n \"v\": \"0x26\",\n \"value\": \"0x58d15e176280000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf726dc178d1a4d9292a8d63f01e0fa0a1235e65c\",\n \"gas\": \"0x15f90\",\n \"gasPrice\": \"0x55ae82600\",\n \"hash\": \"0xfe1cc977f9072bd385ad151e1ed66a5a24d8c0220ed216c372e5b01d843ce633\",\n \"input\": \"0x\",\n \"nonce\": \"0x1eafb\",\n \"r\": \"0xcf6190db1121604652d88d12f04a0224700ca0abcff9b6b1b605dd69dc4e84f0\",\n \"s\": \"0x5b4fa4bab211333c43917a2cb771c4beb93043dabc56aa812e0ff215f6f27711\",\n \"to\": \"0xe77f341815dd1db9f99325b88afa6601a9b30a59\",\n \"transactionIndex\": \"0x7f\",\n \"v\": \"0x25\",\n \"value\": \"0xa4e779be480e8000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7071f121c038a98f8a7d485648a27fcd48891ba8\",\n \"gas\": \"0xa410\",\n \"gasPrice\": \"0x51f4d5c00\",\n \"hash\": \"0x533846e9d6d259f32c3128939bcc73e94065a2ca59a1a421c8e7832523684b1a\",\n \"input\": \"0x\",\n \"nonce\": \"0x286b9\",\n \"r\": \"0x39dae20fefc8f3817b7732d2029a931ef3023bb32ac1988a85ec9958e542448d\",\n \"s\": \"0x4bb1d0391fb598561adef32be6ddc7edd680eb07c65f3dc567a44bf43079ba53\",\n \"to\": \"0xf709240e15b4ede984ec5a89b718533c4729026b\",\n \"transactionIndex\": \"0x80\",\n \"v\": \"0x26\",\n \"value\": \"0x475a62d4a2c12c00\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6c3125f78d5794547d8421990e3147ca951670ad\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0x4e3b29200\",\n \"hash\": \"0x03f1849c661b863c9ed6980c1bffb3b044da978c0d3c934723d995e271f5a5d0\",\n \"input\": \"0x\",\n \"nonce\": \"0x0\",\n \"r\": \"0xb1e98db97afe42d0be6cae7b93c1a2a3619a6efeacc11c883ef16bbf5b941bee\",\n \"s\": \"0x31ddf5de18825eb1339a8f8037b22424aed419d2ffbe4042d035187d385b874e\",\n \"to\": \"0x082a44bc2fceb221c5b4eafe9395a143e6e31581\",\n \"transactionIndex\": \"0x81\",\n \"v\": \"0x26\",\n \"value\": \"0x138ad89c753b000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc819a7ad23059b31fc63af55e94723b1d27623d6\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0x4a817c800\",\n \"hash\": \"0x7e76e174996019ed31973013f265f5b35c054ca2827c70a4dc885ba9ed167709\",\n \"input\": 
\"0x095ea7b30000000000000000000000002a0c0dbecc7e4d658f48e01e3fa353f44050c20800000000000000000000000000000000000000000001aaa16cbafec05c5cb800\",\n \"nonce\": \"0x3b\",\n \"r\": \"0x1f12131a88216c30398dbe5801596621abc70c07c619b5d56a258dd2bb090516\",\n \"s\": \"0x2021af550633437cfdaba246f19a186d1929345d3d3788af54e2a12899a71021\",\n \"to\": \"0x9a0242b7a33dacbe40edb927834f96eb39f8fbcb\",\n \"transactionIndex\": \"0x82\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x653cd961ed568b49dfd86048cf54cce163edb780\",\n \"gas\": \"0x186a0\",\n \"gasPrice\": \"0x4a817c800\",\n \"hash\": \"0xb3d3d739c401409feb531bdb491b7bfdee96c8d60ff86538ef9b63b4edacd843\",\n \"input\": \"0xa9059cbb000000000000000000000000736ca842659212e6876c16d3a3e11c2ffb681ef3000000000000000000000000000000000000000000000001158e460913d00000\",\n \"nonce\": \"0x2d6a\",\n \"r\": \"0xcf8d64be4b8aebd27431e98bff9fab7820e0a888e40e443b32b4caacc5bb8691\",\n \"s\": \"0x1bfced4a0d7d049e4774d047689f8761fb86312824db6770739e71534beec1b5\",\n \"to\": \"0xc6689eb9a6d724b8d7b1d923ffd65b7005da1b62\",\n \"transactionIndex\": \"0x83\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf08027a407e9a774362aaca289445a9a2eb359e5\",\n \"gas\": \"0x8e81\",\n \"gasPrice\": \"0x37e11d600\",\n \"hash\": \"0x7be82d330060dfea63ca90f584e7af484e618eb9503091f94217c3d850e14680\",\n \"input\": \"0xa9059cbb000000000000000000000000ee3db62514b9174bb2d4e10006d4888a155fdbb200000000000000000000000000000000000000000000000d02ab486cedc00000\",\n \"nonce\": \"0xf45\",\n \"r\": \"0x52d38261b9398abb73cd1182f15a818ff287b07a62abec488f345fb2c60558e4\",\n \"s\": \"0x45564ee89655bfc635fecf9cb8b7d0016f93d0c667b1230e310f83ca718de03b\",\n \"to\": \"0x988383d7730b68a4cbc1fc1abba08554fd605f6a\",\n \"transactionIndex\": \"0x84\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2b5634c42055806a59e9107ed44d43c426e58258\",\n \"gas\": \"0x198a8\",\n \"gasPrice\": \"0x37e11d600\",\n \"hash\": \"0x73d2c28b3672b712126acaf98a0241a6cb63e183b2896260f9a13b6fe87e2b46\",\n \"input\": \"0xa9059cbb000000000000000000000000a0a0bce5c90dc9e7cdfc75ffdf007bf1c72cc1830000000000000000000000000000000000000000000003fc9a325c643c770000\",\n \"nonce\": \"0x9bdeb\",\n \"r\": \"0x4aa72656090f7f4ccf455fcb8e0fde7ccb3d61f62678361ee2eda07472f39fdf\",\n \"s\": \"0x5f7542d6c439a0c9174a47cca2574049f54880e746a35824186c62db8dea151\",\n \"to\": \"0x8f8221afbb33998d8584a2b05749ba73c37a938a\",\n \"transactionIndex\": \"0x85\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6e7c25f40403f61c5de78c1d14b2c019f4fc2a6d\",\n \"gas\": \"0x186a0\",\n \"gasPrice\": \"0x2540be400\",\n \"hash\": \"0x46404f87b9056ea74492ef977721c83f1484bab080653f68abbbbf4f54ba38bf\",\n \"input\": \"0xa9059cbb000000000000000000000000be0ab04262d5d1f167747441584f646e9977da53000000000000000000000000000000000000000000000003e460dbb1bef70000\",\n \"nonce\": \"0x95\",\n \"r\": \"0x74530d3e99108d7e5c82f869c075a1915a0a9a630905d713c9352836eb4f37f7\",\n \"s\": 
\"0x76ecbefe32eb1b4a91b6a3c1346805446b46c00015114f4c0d6ba6f50fd58024\",\n \"to\": \"0xffe8196bc259e8dedc544d935786aa4709ec3e64\",\n \"transactionIndex\": \"0x86\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x06d1748c926f241a4f677710489cebd23b36dda6\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0x1e25a5200\",\n \"hash\": \"0x152239bd55de8fd7b92480c45903a6736804b0ed6405471498a7d55451cebacd\",\n \"input\": \"0xa9059cbb000000000000000000000000b76ff0eca62368946e42bc8f7761860e6583847a00000000000000000000000000000000000000000000000000000006fc23ac00\",\n \"nonce\": \"0xf\",\n \"r\": \"0x4874b50c3aee41a57a4252095d48e0b01ced8d9b8d825ce73fbbbfbc919bd32d\",\n \"s\": \"0x4375da1804e048d1c8193d1321341fa6b9bedce361d2d8acd3fc77af079926b9\",\n \"to\": \"0xb561fef0d624c0826ff869946f6076b7c4f2ba42\",\n \"transactionIndex\": \"0x87\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x42c40c19d1d7f0b7802aad839ce5891fdbbc2028\",\n \"gas\": \"0x5ef88\",\n \"gasPrice\": \"0x1e07dc6a5\",\n \"hash\": \"0x0007f47c929d95d5deb0b88a918bb9892109608a227bf9c47f8c17de2afc1e8c\",\n \"input\": \"0x1eebad4b012e0d6191a7a9640d8c97ce5ba9cb505d699b68b7b8681d6840b58359f480ba75b6cc10a2d5d92c379a13414936832a069d0c0c7f6d4e99a1e3a30d8b656ec07e7928\",\n \"nonce\": \"0x1047\",\n \"r\": \"0x8c4a1fc3c2363789140debbe4910f9622430b290e87302e1eae5c983f5077c6e\",\n \"s\": \"0x6beb50b41be8f80d6f8cd923f077d296cef27056fe9d1689b25762fb1e55c253\",\n \"to\": \"0xf8fb76b05fd854cc6f35d5088b9d241cbbf616c3\",\n \"transactionIndex\": \"0x88\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x20f3027ff5affdfada79581122dcbd9309e4bf46\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0x1dcd65000\",\n \"hash\": \"0x79fa3731541b3946298f7f708628b29cdef6b7186f848350cbf0eea26731d3c4\",\n \"input\": \"0x095ea7b30000000000000000000000008d12a197cb00d4747a1fe03395095ce2a5cc68190000000000000000000000000000000000000000000000002f2f39fc6c540000\",\n \"nonce\": \"0x480\",\n \"r\": \"0x96d7b3280ed5a3ed4618ae6f169ed9eaf835bb4836e14a51e04546fcc3a8de6d\",\n \"s\": \"0x7d51d4c8d53d6666c34267c7e1851ca2e242f1c6b581bfd3e1704c66e0cdb378\",\n \"to\": \"0x8f3470a7388c05ee4e7af3d01d8c722b0ff52374\",\n \"transactionIndex\": \"0x89\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf84605b57182a04c211ad0cd000465255cf68d76\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0x167618600\",\n \"hash\": \"0xb4055a577ead85e681ea0a29a126f5f632ba6f006b75a830dd75081be78b6c6b\",\n \"input\": \"0xa9059cbb000000000000000000000000348df2aa18bed0dbce0e2c4c669bfe417611b1470000000000000000000000000000000000000000000000000000008bb2c97000\",\n \"nonce\": \"0x16\",\n \"r\": \"0x37ca208a35de30369307bfc116d225b45ed27250b20b6898d46b4a72e1ec4d1b\",\n \"s\": \"0x2c652374da3b741dca06b4abdd7ca56905a4f6eea53dc836c27bdabbf654887a\",\n \"to\": \"0x4ec15d7c977151df68388556124f43a462a8ccf2\",\n \"transactionIndex\": \"0x8a\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": 
\"0x517024\",\n \"from\": \"0xe5226e5c9588a8feea93582492bc54235cf1b20b\",\n \"gas\": \"0x15d0a\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x17f4f9b4afc2d83e2655b92ad9e43a2a98a2b9a981397e69023f8bb6ce04b1c0\",\n \"input\": \"0x\",\n \"nonce\": \"0x1\",\n \"r\": \"0x920c18871d488d98b2159c67c5eed02f48a3b4b2bf30a85cdafb2021a9c7c71f\",\n \"s\": \"0x1ad7d8e141c9543e9321e3f26de332d1c151b879f9708a95d692fe2c24ffba85\",\n \"to\": \"0x2aaaf1f03c47acd2ae6c0175004b27d1abcbfb77\",\n \"transactionIndex\": \"0x8b\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x12649663dd262283115d7f60477810f39c3bb409\",\n \"gas\": \"0x18d13\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x8a85658e3761938a5ce0c3dde8a58c8fc2dcc3f9054f999dc5d61962ed2e282e\",\n \"input\": \"0xf2c298be00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000035454f5337576456436439534c32647965687539374a545468474b7774774b7a5973366f366352563132376543656b5a50706939454a0000000000000000000000\",\n \"nonce\": \"0x1\",\n \"r\": \"0x8c728d4c7f082d741a93445f468d58acb91f5e671fb050c5d05b78576b834a9d\",\n \"s\": \"0x2ac84115b02fa34039a6d2f1c3b86df33188e509c8e08a4ff4c181ace74d4330\",\n \"to\": \"0xd0a6e6c54dbc68db5db3a091b171a77407ff7ccf\",\n \"transactionIndex\": \"0x8c\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xa2e36f3a83d09e3d5bdd9e05cf28699ead329fe8\",\n \"gas\": \"0x398b3\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x3fd372ec5a0b780bffe1449816a2a2fb7a0cd79e1e80aab1bd54571acf8d9e14\",\n \"input\": \"0x3d7d3f5a000000000000000000000000000000000000000000000000000000000009e6cd00000000000000000000000000000000000000000000000005898862d16180000000000000000000000000000000000000000000000000000389f12621b98000000000000000000000000000000000000000000000000000000000000002a300\",\n \"nonce\": \"0x121\",\n \"r\": \"0x7cec50dba6ac0f4f4d94a40b14b77e331c023c2caf8cb5fc71f5e54fa09a160f\",\n \"s\": \"0x7fbfe1188f6c938dcdd480817e973faa74ed3fd026f566769132dbec670c2848\",\n \"to\": \"0x06012c8cf97bead5deae237070f9587f8e7a266d\",\n \"transactionIndex\": \"0x8d\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x8ed3f3b484650f5edb057f10774368284cb3c752\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x37f527ee60e2b17c96ca05c062dfcc78fd6cb973a66a71fe201bd76d9bd596c8\",\n \"input\": \"0x\",\n \"nonce\": \"0xd4\",\n \"r\": \"0xebc00e925174c832f996b7577d366049da6d9a395b86d88115e26327332e1fce\",\n \"s\": \"0x216eb3aa52738c6eb14569d2499f2c7938682b98b5462132ac153d963f448648\",\n \"to\": \"0x13d7e0f2a091dcd1caf7f766e83d13d9e6be89ef\",\n \"transactionIndex\": \"0x8e\",\n \"v\": \"0x26\",\n \"value\": \"0x429d069189e0000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xe79eef9b9388a4ff70ed7ec5bccd5b928ebb8bd1\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x337d6d44a3f34b585b015bdde4b21aef8426626ed68d90f8ce2fde9f013131da\",\n \"input\": 
\"0xa9059cbb00000000000000000000000088cdeb13fff8884e82882d88911f45ee17ed3c4400000000000000000000000000000000000000000000000340aad21b3b700000\",\n \"nonce\": \"0x3d59\",\n \"r\": \"0x9e15aa5f30f2044770ab5faefb15249aade32d7fe41eb11606259d2cf6f96b21\",\n \"s\": \"0x7f05983fe91a4b67043fdbc708ff202976b0a6ffe7444faa8a5e01bade729e41\",\n \"to\": \"0x986ee2b944c42d017f52af21c4c69b84dbea35d8\",\n \"transactionIndex\": \"0x8f\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x10c0af29312319e3194402bb3657ddaa69a3014f\",\n \"gas\": \"0x15f90\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x6f0359544e4b4b440a2529c84dcf8734ad490d38f9c1dd28ad781b7e45917c44\",\n \"input\": \"0xa9059cbb0000000000000000000000007487a6eb59664cbb2f935659392a13adc0d633c0000000000000000000000000000000000000000000000000000000000000c350\",\n \"nonce\": \"0x21\",\n \"r\": \"0x587b04ab5d44291e619adec053c7567377f918943e60550e36bf72ee4b0235bd\",\n \"s\": \"0x748a2420f72bca6534ce259f8fe44f7959068ebf7de4a4dd48f55c9b93226be6\",\n \"to\": \"0xc9859fccc876e6b4b3c749c5d29ea04f48acb74f\",\n \"transactionIndex\": \"0x90\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x05ee546c1a62f90d7acbffd6d846c9c54c7cf94c\",\n \"gas\": \"0xc350\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x3e30def7cf377296158c2dbdcde354f5e2e69f71bd6aaae72a31a8598fe71ce3\",\n \"input\": \"0x\",\n \"nonce\": \"0x96de3\",\n \"r\": \"0x405fda6d9968cc5eb0a8038d398213451d74f37bcb4be56b672628837db6bda5\",\n \"s\": \"0xce6b231e2076c3a421e155e4626e297a7a61a0b1386d77db617936be260f975\",\n \"to\": \"0x112506f1ac4d50330e223ee76479bc1d5cb6954c\",\n \"transactionIndex\": \"0x91\",\n \"v\": \"0x25\",\n \"value\": \"0x17a4c21f97d54000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x007fb5f5a7a3a58db08c64aa43a85e47ead53ac9\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x61deca6d2dd0ca7582f0bbcf483a644ffe3caba9372bc7cb74a0fd52ff8e7a21\",\n \"input\": \"0xa9059cbb0000000000000000000000004999913648943db3ed815fa1a6353e063ab015e1000000000000000000000000000000000000000000000001158e460913d00000\",\n \"nonce\": \"0xd02\",\n \"r\": \"0xa675cdcfef4af2322e7551f29e3f636402a588deab3ef5b785326fcf4123ac02\",\n \"s\": \"0x19995d4ef2de3015ef51dbc18c371e697833713f5162eadce95eb638f560df44\",\n \"to\": \"0xfdfe8b7ab6cf1bd1e3d14538ef40686296c42052\",\n \"transactionIndex\": \"0x92\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2de4652a1358f193f9c0d68ff78cabbbc558dc99\",\n \"gas\": \"0xcc5c\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0xf8f004a03bbfa29ab10d83e78f3edcaa87daadbd0dc6bcabdffa4bf35f20cb80\",\n \"input\": \"0xa9059cbb0000000000000000000000001fabdff59921ff36487a13841336b66481ce524d0000000000000000000000000000000000000000000000000000004d601fdc42\",\n \"nonce\": \"0x9d\",\n \"r\": \"0xa9bdcd787682ae9610283da73276a9e8a57eafe15f9b81eca4085d73e52364fe\",\n \"s\": \"0x5a6824c2ea92230dcd734580931e2e5b05e7810bac40ae61ef4f1461a01669f\",\n \"to\": \"0x607f4c5bb672230e8672085532f7e901544a7375\",\n \"transactionIndex\": \"0x93\",\n \"v\": \"0x26\",\n \"value\": 
\"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xfd5c417f350b900f0f03125defb3fc54d59cf702\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xb2c8234c7822cf3349be2e71651ada8c019712db1a4ae45b89b2b12ad351d458\",\n \"input\": \"0xa9059cbb000000000000000000000000203ed88b1f786c18c06bbe68bea50c022ad5ca2f0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x787\",\n \"r\": \"0xe82135f4920c79ffaedd797b87fc1141fb210205f4531dffc87d727319381417\",\n \"s\": \"0x249e5774c4dfc9456e3bb067c8f33e5c43e93a4d4d72e35319ebc0b001e2aca2\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x94\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x61fb217f6a9fde302751fd3d16f08f89c6049ca9\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x430835182ed4a8f489fbe447e242ce25368d56a8d19628cd296628fd2f075659\",\n \"input\": \"0xa9059cbb00000000000000000000000027ed4612f8e9834eb972fc3e8d73b665028e7d480000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x718\",\n \"r\": \"0x752f16bf90b5e00aa4803786629180dce61458c6aadcb36c1d8d8363ebd0e58e\",\n \"s\": \"0x2e80264d5bd2937a00141ec5398c42fe200d688b8dfc443ed3ea3dd08255c389\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x95\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6a0c60ded4d315b5508b2c88afc700ed53a6b385\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x579d3d399ac3923d85e2b3d955e03929a5a3ea6818852b46c05b0eae9bdb777f\",\n \"input\": \"0xa9059cbb000000000000000000000000746aa512c0fd1ed08a69f141f5049fbebc0d40600000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x72b\",\n \"r\": \"0xe7b08c8476b237bba08cc61f49e575dbebecc3ec595512b088ee2900054a1ea5\",\n \"s\": \"0x7bd559705abec035c1e20b0943995636b4f748e27320d4e0f63719405d2a19d7\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x96\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf73ff30b125f0bcdfe53d52e1a670ccc5a40d264\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x9f304c6078674eaa38bffcd729e1d6dcf16adb0902154307bfe92fefebb193db\",\n \"input\": \"0xa9059cbb000000000000000000000000d386771919e48cc5ad9419ff99b00dc3ffa8600c0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6e2\",\n \"r\": \"0x1ba6b72adc786984b47abd7a47f05ac23911b448d9085da7144f1689e661862e\",\n \"s\": \"0x218ed704b71a3d64d84a5cf13d823246b5bbfc47a3a9594113a2c08cc574ed1b\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x97\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7060b3ec1bec74330d86a01719c4ceb84a5d7d01\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": 
\"0x64f6ae58b06e182a0c8a54676f31588ec67ad93b1d6d6118e5a977ab94795b04\",\n \"input\": \"0xa9059cbb000000000000000000000000b35507401788c8b88937efd2defdb34b29743cfd0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7a9\",\n \"r\": \"0xd285de9b9f12fbdd0cd7b26522b7a97ed77fd530547d16fbc681e94169fc06dd\",\n \"s\": \"0x3367077abb5374f1f4882ddd3b025b7d444335b64925a871408a34e159901ddb\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x98\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x000836d933f63f6999b9236826a808975e05412f\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8ee3a51093bccf30d496849fa7a458dc498a25e74b67a29324d17dcbd2ffb1d4\",\n \"input\": \"0xa9059cbb000000000000000000000000c4aa8cd8797e790e735a703af1fc6f4376d37d430000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6d8\",\n \"r\": \"0xfd0bca3eca7fcca1ae52557aa6c7c6c0d469c9427d90d188427096977e429540\",\n \"s\": \"0x840ec304f51bf8c79605705be80526be2116ad7c35ed15aeada1d64b3330682\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x99\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7d0a7f481d67bcbf3826d28e779769402fcc2d88\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x7e6b28642e6e8d38760e7371296e956d76bc63e0dc38c9d853ed80439f0c40b4\",\n \"input\": \"0xa9059cbb0000000000000000000000002fb3fa27191a125474929d5fe65ef4aa242490120000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x70c\",\n \"r\": \"0x1798de6d9097d643c47febfaeaa9924d91a177a245c1131c5ab27856ec88711e\",\n \"s\": \"0x39c8b3569ac4ff61527f5537fd0d4e91408e8d86044474585c4dcb757eb1458d\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x9a\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x59dd607181fc174433017a79c15f50573445a09e\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x29006b86f94d685fff0f083794272a93d0ec33f1be28ba5876b6ac8c7a9b5140\",\n \"input\": \"0xa9059cbb0000000000000000000000006a4ca942ba15e8bacbad1dadc32a1ba450a428920000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6bc\",\n \"r\": \"0x2674617e0475510aa60378bcaeff26ac7a2e5ba00ac2d98ee2ddf384d9a63170\",\n \"s\": \"0x7d46dc3542202b166681f69f1138ebe6eb1f681e90d9b8430bcd2135964bf233\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x9b\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x99f1fabe5f4a25ddcc1e64870e3237f04680bcc5\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xbc43789ec9bc665415c79cce6ed899b92a64ac552c98d0a5ad378e940bc5913f\",\n \"input\": \"0xa9059cbb0000000000000000000000007e55d1cad74f7242814065bdac8b545a395486180000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x779\",\n \"r\": \"0xaf58525eac5afe8803c5a7f9fce871d6d03e39f4b4714bcbaf1314e57950ffc8\",\n 
\"s\": \"0x69fa41145531d535ee2cf61a5b9dd32d079f0836f9c32c12ddc66a5d189a6a03\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x9c\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xab0ef6a96fc1857a08bab6000f339460d80da12b\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8ed98c26f466424d05452c4f139b3b42a6c4bc45ded0845cab969fefe8904233\",\n \"input\": \"0xa9059cbb000000000000000000000000cc22d21bbb2c6454a9628c10c68791bad471d8610000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x791\",\n \"r\": \"0xc4850878deb731a146339fdb66e5c9920254a73304d628f4bb0045aa628f3e7a\",\n \"s\": \"0x3ce0f49fb47143d0b613558ce551d2d39ca1622e5eec14e02a1a18436ebb3f19\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x9d\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc6d270daf9336097325320a3f5383400de2a1d5c\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x53458e262903cef45d55ceb8261606c1d65f6b9350eb0a95079cab78c488357b\",\n \"input\": \"0xa9059cbb00000000000000000000000068c4198f00481609877672d93ca9ce86731b24100000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x764\",\n \"r\": \"0x480af56309536131ea8ebe6c364ea4d24545cd66e83d271c1b6aa281694b7c25\",\n \"s\": \"0x354202c642927fb8b3199c9bc97b59b1b7cc86859aca7ec294fa274b8a03c436\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x9e\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x4a2d6a500c825fb721e8aea9acabb222d18b5896\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x585c0ae1ff923cd5148e381d63697f9904f2f11d127e4fae8ff5a3a13af2ca3c\",\n \"input\": \"0xa9059cbb000000000000000000000000f2528b36354bfffe2c9f8cee8e285d91669b63050000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7b8\",\n \"r\": \"0x811f28750c552279ac6e7b3254c7213a6e254bbdcca38cc0517273cb2d37584b\",\n \"s\": \"0x17e5a18c0f76510fd4d5f9a350272a7adddbb526538c2488c365b6343c390511\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0x9f\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xb4c32d9986b5631fb0eb0aea7adf2c58397969cc\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xf780a978e4dc93164e683ce87dc1f2d1efe443d9d360a674338110c99f1a8149\",\n \"input\": \"0xa9059cbb000000000000000000000000540712f82c07d2e35d161ee336dc158ff886f9a40000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x80a\",\n \"r\": \"0xea458f2c62bc9398adc4ba03a456f8df4b5b1f50e38c888f13d8681e06a16a11\",\n \"s\": \"0x42a1cffb09a4192053ec3e52495d2db00c279ef2fb14fb6d0ede4663cb4ea828\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa0\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": 
\"0x517024\",\n \"from\": \"0xbf15c60b6cabad081271e89084a12cd459555c85\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xe251695741e6917edadfd1adea365f468fb0ba8fd130ceead28fdbecb83faf6c\",\n \"input\": \"0xa9059cbb000000000000000000000000467dbe4e09f105d7126b48fca3b56cee52e923b20000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7a9\",\n \"r\": \"0xa37ca6d1419fb1f14f8faac6856b8cac17e6dc5611dcd43af73e46feb38a5866\",\n \"s\": \"0x65a8300c1b22f3fea5c514ec2ade9acac3b9363276f2212d847bcb695cab29bb\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa1\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2a7a9c8014f35cc968f6f38f3b1b5703ed409e89\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x98a7d67f34c51cd711a93704670fec9fe34d54cd595daa0c91f6a5534cb3a971\",\n \"input\": \"0xa9059cbb0000000000000000000000002e14d76b92a23c1f15a2cdefac1ec585d7ea33a90000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6cd\",\n \"r\": \"0xf33d10936dacc53340294253b97df61389e7b2a14a6f4d8db0f5a9c83fa36596\",\n \"s\": \"0x6b93ca0b9d9d365f75d9ce566d78786d6803d1216c53998dd7cfc4c28f4513ba\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa2\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xba29c81a59dd50a065638902e0e9acd4ab97e1af\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xf40b4c89e461860064255cd86919d56df33bd33298068d5b5793edced7dde0c5\",\n \"input\": \"0xa9059cbb00000000000000000000000014a38604621c3889932c64b2dc7249d0f1d86b630000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6b7\",\n \"r\": \"0x1b0ddd8623699e7a5633f6b24a1c992cc84b25399c40ce5a8f9e3521a371efc0\",\n \"s\": \"0x23bf5504d6fb4eb94c8b0f7a1bfea91d62b504bca340956eae52eb4f0e1b368c\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa3\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x927e07feebbaae3047620794720fd2bc1f1af6fa\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xaf59141eb4f31efdfc92e99d585b9b47a74225db1a1be815620baec369752ac4\",\n \"input\": \"0xa9059cbb0000000000000000000000003a6f79b738b44355973a48d0373b3af3b9eacf5f0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6f7\",\n \"r\": \"0x76fffeb4ab1c5bd2ca6e09b027e4279248dc45ca89a210e0358a17e191401c75\",\n \"s\": \"0x7a2d057beb73c67ec74b6a203a2beae22a76a7c1b9e8be6bfe7aa1a10f8f49ba\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa4\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc033b10ead77cb891c44c758c490eaceea2fc95d\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x6fc2bd2d0beb3e1fecf9e1b1ba167a5e808ca3be1eb632cf6012c1e4c082f078\",\n \"input\": 
\"0xa9059cbb00000000000000000000000088091e019598fcc5bbd339164826d85960cd3f0a0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6a3\",\n \"r\": \"0x8e502343014f2808e65700ed9ab6f1c6576249b1d32ef69b4a584dab83e8f39d\",\n \"s\": \"0xaebed52f2d917754227c5fe736139ee3a16f5c285119e51058eb1a7485e5c8a\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa5\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xaae2f7f87b0ed9f95be375ad6139b62ba4ec46ba\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x6c1ebe74f2d747816903085f74efe8521ca1b52f0a2b66133fd8a901b765347c\",\n \"input\": \"0xa9059cbb00000000000000000000000039cc303f9e8862efbd34570df2aa051e0c8677f90000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x71f\",\n \"r\": \"0x369453c24c6a80738350660e2926729a5a83d627d7f2606857f601d43137b866\",\n \"s\": \"0x1c6a72554c9bab6f5341d42889aeac711614b34a3db9f12d861371904ea216ad\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa6\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x8ae78f0c02222fca4dd00b31a3702c1e2a2b9106\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xc0bd61551a3ab823d65d65aa028659325a22ab19887df8345b3b6779948df8b7\",\n \"input\": \"0xa9059cbb0000000000000000000000009fee76446ddae322e8b79429d418e8591845908a0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6ab\",\n \"r\": \"0x59945d2f01d81156f1c24d618626182126e27d2fb71fd561c001a5a7089aed0d\",\n \"s\": \"0x4498b4c73dfa4dedaa009b11d60eb6b79f8c9f1d51fc86a167fec7cf8cf376f0\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xa7\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xa453b444a5353c0ae1ae09addc2aecc2427a9d06\",\n \"gas\": \"0x15f90\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xe7b08e8b312cb58a9fb0d0d9e9c476d80a81329e51f7f539c1c161eb7bf62310\",\n \"input\": \"0x01c6adc30000000000000000000000007b4f0ee5837de9a86edb17aae8ca45e90890c60800000000000000000000000000000000000000000000000000000000000008fc\",\n \"nonce\": \"0x21bf6\",\n \"r\": \"0x967b06a3e11fca8291c9f6fcac2c30f387fa2d29dd1b7403fce01db82c20eb24\",\n \"s\": \"0x50af0e191d9efbbd13a01855428c79e782a7525d1f5041acd7e295506c977dc2\",\n \"to\": \"0xbcc394d45c3613530a83cae62c716dc23b7f2152\",\n \"transactionIndex\": \"0xa8\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x9d221b2100cbe5f05a0d2048e2556a6df6f9a6c3\",\n \"gas\": \"0x1b042\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x61c8652d12bd99f21bef32c38b505e67e0e23cc5bce2afc627b73b3d76e38e3f\",\n \"input\": \"0x1998aeef\",\n \"nonce\": \"0xc3\",\n \"r\": \"0x4b421d910e18092777a405c593a7ea360286e9a301b2fa5537ff85daa3eef643\",\n \"s\": \"0x23e4cbd22d79141e36e3d1de82b1ec505d507c561c2fc59a7e877fa0d7d59979\",\n \"to\": \"0xcc3935479af6703a287d84daaebc18c6b2322a55\",\n \"transactionIndex\": \"0xa9\",\n \"v\": \"0x25\",\n \"value\": 
\"0xf6f2f21484db01\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x844fe7bdabf7dc063d2ae3eea702bae396f874fc\",\n \"gas\": \"0xe28f\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xc1bdd62a2cf730c5dca61ac5f11105063f3139d2dc76a5e1fd86e277f07d9456\",\n \"input\": \"0xa9059cbb0000000000000000000000009b7e49c213b055b5bade83bccb4ec2f23d2f0428000000000000000000000000000000000000000000000000000000000ee6b280\",\n \"nonce\": \"0x12\",\n \"r\": \"0x421eaca730da9211e13e489014e1498c57adc6b50466de6291d88c9338393c1e\",\n \"s\": \"0x42af6537433819a244ae72377917b099a80e8690b43550f289f96e6d34bd8e4f\",\n \"to\": \"0x46b9ad944d1059450da1163511069c718f699d31\",\n \"transactionIndex\": \"0xaa\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xe55779cad04ff4fff2bdbfb506e671bc585f9cd8\",\n \"gas\": \"0x12c85\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x46bfc64a1e403ded79fb1c27acba563f0e423faf0d7a502934813bf5ec06e9b3\",\n \"input\": \"0xa9059cbb00000000000000000000000066d5238c902cc6da31d7cbde74b9995ae9ef36b000000000000000000000000000000000000000000000000000000000000186a0\",\n \"nonce\": \"0x2\",\n \"r\": \"0xbe11e85f10d87d39fe5f86555ce141eac118dc76813236409ebc086b69f90832\",\n \"s\": \"0x14d2ecae44f2494166f63e60f7af86a98f53b4c70dee765f708f3524b180c05f\",\n \"to\": \"0xa9042ef3fea39245cf5dac3ffad6970804a5fd11\",\n \"transactionIndex\": \"0xab\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x0975ca9f986eee35f5cbba2d672ad9bc8d2a0844\",\n \"gas\": \"0x5208\",\n \"gasPrice\": \"0xba43b7400\",\n \"hash\": \"0x0bb337ae89b690899973d80a3199388bd8d4ac4e4bf60acf6353204ac003df22\",\n \"input\": \"0x\",\n \"nonce\": \"0xc2e9\",\n \"r\": \"0xdb9df64a7721f83b1dfca66e551d4a9a2e3e60fbb577548404857b293e7589fa\",\n \"s\": \"0x596a96e3915bf1bf1cee1ad8824ceb8ad718ed3f997a062f29fbccad431ada6d\",\n \"to\": \"0x541c20d4e55afcfcc2d1cd59160499ab2753a9a2\",\n \"transactionIndex\": \"0xac\",\n \"v\": \"0x26\",\n \"value\": \"0x138dc608e8868000\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x653cd961ed568b49dfd86048cf54cce163edb780\",\n \"gas\": \"0x186a0\",\n \"gasPrice\": \"0x4a817c800\",\n \"hash\": \"0xea365acaf93ca8903ca01c12c8a5ac78f9a40e49bbe668a278cf802308f6ccee\",\n \"input\": \"0xa9059cbb00000000000000000000000079501c26c90cd09949b50761bac2e321c14dd081000000000000000000000000000000000000000000000001158e460913d00000\",\n \"nonce\": \"0x2d6b\",\n \"r\": \"0x7c29fff116df1b0994240f70cede2538a6934225bf84807412a57d213cc98e84\",\n \"s\": \"0x24da1402996275bf7d229f9413c13925157d05d0796c65a4cc100ec26cae903c\",\n \"to\": \"0xc6689eb9a6d724b8d7b1d923ffd65b7005da1b62\",\n \"transactionIndex\": \"0xad\",\n \"v\": \"0x25\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x20f3027ff5affdfada79581122dcbd9309e4bf46\",\n \"gas\": \"0x3d090\",\n \"gasPrice\": \"0x1dcd65000\",\n \"hash\": \"0x120f6f5467e4d3b1a87e15c77cb0487a581d0b4976b56dec9fc360e698a98cd5\",\n \"input\": 
\"0x338b5dea0000000000000000000000008f3470a7388c05ee4e7af3d01d8c722b0ff523740000000000000000000000000000000000000000000000002f2f39fc6c540000\",\n \"nonce\": \"0x481\",\n \"r\": \"0xa1f87e1caa913b5b92089c8b752ae505838ee1faeea16915344c556b16327670\",\n \"s\": \"0x41baa77482f65ab12d154f5ee308668cd4d0975f420479d11fb4201db4743df3\",\n \"to\": \"0x8d12a197cb00d4747a1fe03395095ce2a5cc6819\",\n \"transactionIndex\": \"0xae\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xe79eef9b9388a4ff70ed7ec5bccd5b928ebb8bd1\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x4442b4e770deb49c8f43c937e6f3f495e4484539b98d82896c9a35502fd5fc2b\",\n \"input\": \"0xa9059cbb00000000000000000000000088cdeb13fff8884e82882d88911f45ee17ed3c44000000000000000000000000000000000000000000000002b5e3af16b1880000\",\n \"nonce\": \"0x3d5a\",\n \"r\": \"0xb3c622fd1c7b71cb6817bc01012b779f3547b6b1fd2446b0356a46103ca7bc39\",\n \"s\": \"0x7097951d1714adeff5f236baefea065a213ef7d72fe5bdd591e756281caf2d5a\",\n \"to\": \"0x986ee2b944c42d017f52af21c4c69b84dbea35d8\",\n \"transactionIndex\": \"0xaf\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xfd5c417f350b900f0f03125defb3fc54d59cf702\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8873f54fb7a452c7d7503d9adbfa77df28f1997388deaa154e1bd741156b419b\",\n \"input\": \"0xa9059cbb000000000000000000000000a81db9f612bc1dd716cbffb5492f73a8e844cbcd0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x788\",\n \"r\": \"0xfd238eb5daf4ef85a44d652f1b11179a9793520b4422a311b610b369a259d90\",\n \"s\": \"0x1c3439118c10271bbe43ecaa30b5266df3df26f08a1a2ef8d5d0112a879e04d3\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb0\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x927e07feebbaae3047620794720fd2bc1f1af6fa\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x18eacd0408d4fc2fd40d1178df2c2a2cbf3165b8688482d9b657e985eb3c6a64\",\n \"input\": \"0xa9059cbb000000000000000000000000eb6ebee1c19396aa984cdf88392dd9ba26ebce160000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6f8\",\n \"r\": \"0x43da255621e05c8237599effa4f35e2dee1ce8d605ca025655223c3b360edb6c\",\n \"s\": \"0x49e999d6fc0323d01b7137af248476f79a58b704b66cfed9dc0f404e75d8bc21\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb1\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x61fb217f6a9fde302751fd3d16f08f89c6049ca9\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x25bc499a74a2ce938009b28e05ca1e1f97e0fd451ee2e93cea893b38f171d425\",\n \"input\": \"0xa9059cbb000000000000000000000000daa2e07dd229b9d99c987dcbc4b2739169b347350000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x719\",\n \"r\": \"0x7cce05a3d5df31354982367cc2d0c0af11a9ab885d5df197f945a791f75a4ad8\",\n \"s\": \"0x3224f4bf1046a8569190d1513fb4cfbfded3fd309274f966b32641728f54e54a\",\n 
\"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb2\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6a0c60ded4d315b5508b2c88afc700ed53a6b385\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xc3c818004dda9657dc13f122bb4afc06011f2a084595bc24856ef4eaf7fcc6b4\",\n \"input\": \"0xa9059cbb0000000000000000000000000288d4efab99885cbbd7ca692dc2a29f464aa9ad0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x72c\",\n \"r\": \"0x174bd76a1646db5fc0ae4552fa19b87ce6139c71f70959f832afabf0f5d978e6\",\n \"s\": \"0x58a32c9a890cde09985895dd16c475068737002d5c71d7a15e4baead553edc35\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb3\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf73ff30b125f0bcdfe53d52e1a670ccc5a40d264\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5cc6b1b7c8103dfb0a102d4b5258119ef67bb4f47224dce69be2d98976345bef\",\n \"input\": \"0xa9059cbb0000000000000000000000002752011d5fd09a571837c534a85efc2b89a4733c0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6e3\",\n \"r\": \"0x6bf32439c88b135cf04ea7c377b68d48a375d64120ee1df8b9cce0fd8cd8b31a\",\n \"s\": \"0x6c814871b153b668ffd6e18036704d7230e9c6a33db3ca6a7a050f8bf7d8ac40\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb4\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7060b3ec1bec74330d86a01719c4ceb84a5d7d01\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xd68b12188c595387907c69d6ad1b06963c4d89c6f0050b006717b3e756815bb9\",\n \"input\": \"0xa9059cbb0000000000000000000000008c5cce55dd0e414072948562901da4187ec8f38a0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7aa\",\n \"r\": \"0x122fff655e6243f985b99cd2a11d056016b1087648b0fa5945300fc7f87cde7d\",\n \"s\": \"0x5b3d608a0d26514fe819b6ad234fd4daf104a6c03fc0807f14fbee0ce888619e\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb5\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x000836d933f63f6999b9236826a808975e05412f\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xa5b4b75038de13ecb9135d186b1f58f16f2a2544ea7df27874b7f2858925eb35\",\n \"input\": \"0xa9059cbb00000000000000000000000054ccf56004c98b6258b503544c44fa73ce4affeb0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6d9\",\n \"r\": \"0x369a21a2d77704530954b614da1b514b49caeab6f55b9e7af6e307ff4a04798e\",\n \"s\": \"0x3d21ad84b7e8ddfb35ecad2126d2f325c82f8d619e1f6abbbcb52aa4c02e0491\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb6\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7d0a7f481d67bcbf3826d28e779769402fcc2d88\",\n \"gas\": 
\"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x585ece3882f48949f1bcf0b3f464cd7a0e38fedda6f4db66fb4e81eccdfd5804\",\n \"input\": \"0xa9059cbb000000000000000000000000c0bea1d161ad107b7cd2d70f54b4a53b6e4af5710000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x70d\",\n \"r\": \"0xa45ba841ac847799e73e2c8611e1ad734d5b91f3699219712cf1a2cc0646de73\",\n \"s\": \"0xed7556cb1accef39dcd3b327d95e28b97cac9e17b02268e9eb24afd1913d2d9\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb7\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x59dd607181fc174433017a79c15f50573445a09e\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x31940d18f786a3ee6ee19ba3d7d6dde9738c2ceaa44c5ee5de094ef1645c3cc0\",\n \"input\": \"0xa9059cbb0000000000000000000000000ce08d3d0b9d0e31a1dbd92ea3044a33d3a201500000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6bd\",\n \"r\": \"0xc1fb3db0f5acb600d3210d8ae8854c7d31b2d4426c9f67d83045e3e15f928dd2\",\n \"s\": \"0x253ef5ddea7425b027296b7e6a0e07984dfeb53d0044aca82ba3c695448ea41c\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb8\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x99f1fabe5f4a25ddcc1e64870e3237f04680bcc5\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x0cb6af00699835574f006ad0e7f31f9575222b4dd32ac04d3f289251d22f1fdc\",\n \"input\": \"0xa9059cbb0000000000000000000000003b66b43eeeb8c93bc4008734d6dafd2cc95225680000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x77a\",\n \"r\": \"0xcbcd96cdea46662d003a838be2765078710b99e8ea601ea9501c5488c37f9d6b\",\n \"s\": \"0x5f6173a962ec22dcd2e0f394816a61d407230c0f849b236933d48de7c7d26afb\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xb9\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc6d270daf9336097325320a3f5383400de2a1d5c\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5e979fc8f7cd1b4eb0b618d75c9dc7450bc6e6691a71539c2a6d8d71ac9de4da\",\n \"input\": \"0xa9059cbb000000000000000000000000482baa6897e97ea49d508af9bf73a6fa8b87db380000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x765\",\n \"r\": \"0x9709d375fc9ef606a64bc36c15dcdc7b3a3c1a13a700d618e57989a877f1a0cc\",\n \"s\": \"0x2d140226a8bcafad69f262e778d4409efa3541c153ba8a51c304ee29bc85d582\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xba\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x4a2d6a500c825fb721e8aea9acabb222d18b5896\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xc8e762ef7a70aa8ef9c07c65f9c408b79424d0e47f1c292ca65dd9f45517a403\",\n \"input\": \"0xa9059cbb000000000000000000000000d886281b8ae7cf5ebb2d483997b439b1d3f847f60000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7b9\",\n \"r\": 
\"0x21daedbcab6e726af1352fccfce91fd035fb3ce522a748de2a3361c69b0baebe\",\n \"s\": \"0x43e057a95fd8bbdc435202cc44d2db10da3cbe91894f1d48850499083062042f\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xbb\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xb4c32d9986b5631fb0eb0aea7adf2c58397969cc\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x01c6abff01353c3734dcac7126311438c278ce42119fb0e93c93b4649c5f4d47\",\n \"input\": \"0xa9059cbb0000000000000000000000006bb968983a556ad2e290303b0aba0f06947c44b50000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x80b\",\n \"r\": \"0xe3c6f7f29e666673811866b682a2a1008a25ba2bae41151874aeaf438d4fefe1\",\n \"s\": \"0x6a3c8573773c9212ec0137e12e43eb14316be2dc6f5df784d83a881f98093868\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xbc\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x2a7a9c8014f35cc968f6f38f3b1b5703ed409e89\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x0dea122537675b84a6f8fcfd013a7707646d09867360f17693bbfe87c48c1d81\",\n \"input\": \"0xa9059cbb0000000000000000000000004938ea47fe84e829b715f3dd4ee20553a7eba3b10000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6ce\",\n \"r\": \"0xf751838c1906c3760bdd86fc9bee614122857b5a23427ccfc008c7fb9fa85f6e\",\n \"s\": \"0x4c430e87b0a82b85ac497f57d3f33b83f2dd078c37bc35fb1d6c349df0dd125d\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xbd\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xaae2f7f87b0ed9f95be375ad6139b62ba4ec46ba\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xdca172d50c1c1ea6290f5098af22abf241a5e8b8b778ce6d3d429818420dead5\",\n \"input\": \"0xa9059cbb0000000000000000000000006695a29120643cace887bf66cd47d72688c2c4e70000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x720\",\n \"r\": \"0x3f4ddf073020a1bd06a9450a1f3b1ede4d6d0ba52cb9df78cdd24df5850caf53\",\n \"s\": \"0x207433cb49749a3cc1fc43f689dd18d3571899d9cdfa28c6ce2702bae89049ba\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xbe\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xba29c81a59dd50a065638902e0e9acd4ab97e1af\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5afc1e36228fcc2fc95c8c7722d2daff632495bd96d8ce3788b219953d91cb8a\",\n \"input\": \"0xa9059cbb000000000000000000000000f9f62aa096e721ebe220573e7c95e4453fba4a770000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6b8\",\n \"r\": \"0x5bcb41a33af080beb85553c36f285741ecc9ba97ea5d2464b65192726184a635\",\n \"s\": \"0x6368000a5c2c51b16e9f94edb276f44ba176125d0cee350e5930873f1816c8c0\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xbf\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": 
\"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xab0ef6a96fc1857a08bab6000f339460d80da12b\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x70ec6edafbf1da7dfa951d06a3ee957dc66b0f2c470c72406d9854e6f713e7a6\",\n \"input\": \"0xa9059cbb0000000000000000000000007699db2595ab4fc8277b4b8239dee40b0918b31c0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x792\",\n \"r\": \"0x443af91162f7f13e82bf3fbe67cb7476ea8ce3e1ab13b29e83f7455cb9c3903f\",\n \"s\": \"0x5b15f1540d523b88e2d594f6699080b2919953e06e64ec55f915a93ab7e105a3\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc0\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc033b10ead77cb891c44c758c490eaceea2fc95d\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x7ceb64c64fdda3af4fed3c2274678767634523149977634add3d347f58c886f8\",\n \"input\": \"0xa9059cbb0000000000000000000000004eaa5221f7eb6381ce837d0bdc68d317aee1cf6e0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6a4\",\n \"r\": \"0x2cd01e6b41cc575657612790d24ac1af29e1efe72b021e791ac674a6beba620d\",\n \"s\": \"0x5fff90f48850f56ccac804c13cc340f2be198d972e8a3352ad46fce9fe7d4f9c\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc1\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xbf15c60b6cabad081271e89084a12cd459555c85\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xc5bc40e6beeaa6e698bcd06785b2b3d88330cb552dcce612e13074938a0ac53c\",\n \"input\": \"0xa9059cbb0000000000000000000000009d4346fbf00555b5a1c51ce96fec74128ad423000000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7aa\",\n \"r\": \"0x843fdde2bd695e79ea43901fef325a6ddb1d15f48f76c4c361f24f4cb0ef1b8d\",\n \"s\": \"0x5d0b6c0c709b71b133dc2f24b4160805fcdee755514e193d367137f717f4c788\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc2\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x8ae78f0c02222fca4dd00b31a3702c1e2a2b9106\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5b31c673e1d5345389ee90f87c0a2aa58181e775d22d3165651ed564b1c8c259\",\n \"input\": \"0xa9059cbb00000000000000000000000046fd61450b24325d6ca89e276a4f42c6074312220000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6ac\",\n \"r\": \"0x952d8897350ba6f5ae787b0f11682caf2635b7fae73248f561518421fa2a2bdf\",\n \"s\": \"0x336c01de388b40e370ccfa0475afe6c1e74860932eba063661748c706bc65feb\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc3\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x33b1335d677d73cd9691648ef6837f5ca95df4fa\",\n \"gas\": \"0x20129\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xfee53a2f0db9a6201b11c9da0ea78f7d5ee5c8ee3cf70e9a4b96e9a865e6a905\",\n \"input\": 
\"0xf2fde38b00000000000000000000000033b1335d677d73cd9691648ef6837f5ca95df4fa\",\n \"nonce\": \"0x1c\",\n \"r\": \"0x817f04f182c2ecc9d84c3d445c639668be02ca03fefc326c913e5091f3077079\",\n \"s\": \"0x742c6d16c1858c71cc17a7341f0afb0250006d495591b5a2b2bfaca575c860e9\",\n \"to\": \"0x5bf2961a6bb8b04afd0b27518a96150c35595b23\",\n \"transactionIndex\": \"0xc4\",\n \"v\": \"0x26\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xe79eef9b9388a4ff70ed7ec5bccd5b928ebb8bd1\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xee6b2800\",\n \"hash\": \"0x1babd2265ba8f8ef230a7def14b027323b5b5943b50437e703f7cff88af74003\",\n \"input\": \"0xa9059cbb00000000000000000000000018db6865c9e6720ad9fab988c86b50cff33ae86a000000000000000000000000000000000000000000000000d02ab486cedc0000\",\n \"nonce\": \"0x3d5b\",\n \"r\": \"0x5b7174ce9448d0005c0bd14d02d38b5ea6d8f349508fcefebe55df2f3e640c10\",\n \"s\": \"0x1e27bceefd2791045d954b5f0e599b5b5b165b9061d79f0f94ac481a7d9017f2\",\n \"to\": \"0x986ee2b944c42d017f52af21c4c69b84dbea35d8\",\n \"transactionIndex\": \"0xc5\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x61fb217f6a9fde302751fd3d16f08f89c6049ca9\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x8143e7675563fc711205c438063ed189895b3ff56b728a86512da3f6ea971013\",\n \"input\": \"0xa9059cbb000000000000000000000000bfd86d97534fcb0c06d308eefdba769d6ff157a30000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x71a\",\n \"r\": \"0x391124d6ef3369e79c180ed1ac1ae8c4f6d7e0357fbf7f9b59393534d44ea5ff\",\n \"s\": \"0x56d25868395458ff4d4b119f59e5132dfd99b518ed962a884f0bee9132e9cece\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc6\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x6a0c60ded4d315b5508b2c88afc700ed53a6b385\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x5bd671c744bb4389884e4ad6c6039dba775a17b61cda74d93b04cf45ae54e7ea\",\n \"input\": \"0xa9059cbb000000000000000000000000b2cb6adb6867240f371100bf421ab2292e68a0f30000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x72d\",\n \"r\": \"0x6e7d9b7f6de96f1129ddc27a47b0f45245162b2151d8b0b1f4d9870533ac8897\",\n \"s\": \"0x6630bfdcc5c053f256af506b14e53938cf9e67d124a10875e1306e45b214f553\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc7\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xf73ff30b125f0bcdfe53d52e1a670ccc5a40d264\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x4b1edf99cb9e5742ca02604db130977517746ecd55abbfae636ba4090d653b56\",\n \"input\": \"0xa9059cbb0000000000000000000000000389e458e1505c030de16f33660bed381774b9770000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6e4\",\n \"r\": \"0x143adf98912423ea53f5863e34d1e7b4a55e6308a70c0b1b473f31e69c30c2f7\",\n \"s\": \"0x54b2f19e01c0111fa316ac40fd83c7f422dad58302aa07eaaccc8ed88999655a\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n 
\"transactionIndex\": \"0xc8\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7060b3ec1bec74330d86a01719c4ceb84a5d7d01\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x2120cf3874b004e66dfb19484783230d8111a5d72eb0c0b90483c0e8cb3389c4\",\n \"input\": \"0xa9059cbb00000000000000000000000087af61239a9996d88178194da51d68d7864927db0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7ab\",\n \"r\": \"0x29ab099bad3fa20e36afeb5e3382f287cd809325ff1e4c5a0ee82d73ce7a65cb\",\n \"s\": \"0x23ac304909e3a7cf231d33f8000db64375c52d82af91bab71c3ca2dbb4d83314\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xc9\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x7d0a7f481d67bcbf3826d28e779769402fcc2d88\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xb224b648e83aa7fac590a59e6195bc042cd52d0aeb4c7006ba7bc02afa14dd93\",\n \"input\": \"0xa9059cbb0000000000000000000000009bd7aa5f896c8ca11295db81babdad6eacca3a590000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x70e\",\n \"r\": \"0x30c198557dad69872279e91e263b7ef5e4883216c951889d19260400e247cbda\",\n \"s\": \"0x4c9bea39f0fd868f32d4bf6178584dc4291a2bed37290b86acb6b3c2191cf059\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xca\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xfd5c417f350b900f0f03125defb3fc54d59cf702\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xb3c804db14c118f67f336e556385ebbee29a13576e44a44f95ec8256ab2bbfdf\",\n \"input\": \"0xa9059cbb0000000000000000000000004205f7ffa0a7a812db14bf28627877040c3e0a520000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x789\",\n \"r\": \"0x68607ccfd4d2e81d063f7c25f4d5e9b99698c7be12bee30b1f5fd2e558c9f149\",\n \"s\": \"0x46a3cf5fdbf6a247f77c95894d36e61808a3673c72ddd631b23c1e5a0e2c1172\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xcb\",\n \"v\": \"0x1c\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x59dd607181fc174433017a79c15f50573445a09e\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x2a781f0f19e9e6f14bdbe2a82700a4516b03df3a2ef5f3d38db228dc083c1575\",\n \"input\": \"0xa9059cbb000000000000000000000000eafd4781a9ffc752ccd4330c82882440cd545ca50000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x6be\",\n \"r\": \"0x49d8dfd33cfb1de9b351a62612d7a76b36b60c59b1d9bf6da66657912f43690d\",\n \"s\": \"0x409e45ea35e66c9f936a1eee958bcbb4692d827d42dafe7131fdacf84cb23780\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xcc\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x99f1fabe5f4a25ddcc1e64870e3237f04680bcc5\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": 
\"0x3ea4a51c48ca9a0c070f5fc49a16ea8745b9e79ae694ed4c9640388e1281eaf4\",\n \"input\": \"0xa9059cbb000000000000000000000000d939b3c6ed3c57dca15c49966c0aceda1456069b0000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x77b\",\n \"r\": \"0xf4d433cbb9b19f909e0e9348c1810560edaed303971a5c8e5b307fb8054d9595\",\n \"s\": \"0x7ef3a1af69cb49ddc9358da5846a6f5b2a0fbce32a824af037e1f08be1009671\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xcd\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0xc6d270daf9336097325320a3f5383400de2a1d5c\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0x927c44cd7ba62ffb93d015515a5b6183c710e07990f272d84cc8ced4ee4c28b8\",\n \"input\": \"0xa9059cbb0000000000000000000000000bc41fac940bf413faa12f2b00630299556fed150000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x766\",\n \"r\": \"0x502033a7e02a411e9075cd3d39f7bcb432e3a13ffb797fc595cc72aa42ee6c42\",\n \"s\": \"0x7c512843f137f8dbe60ba17b5abbb1e9e0f92f991d21caf0ae4422d85ee38ea6\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xce\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n },\n {\n \"blockHash\": \"0x1ca3e9ffb390a11eb2cd8611ba2d1222a8659503a7208866add2fe3fd6c9e5bc\",\n \"blockNumber\": \"0x517024\",\n \"from\": \"0x4a2d6a500c825fb721e8aea9acabb222d18b5896\",\n \"gas\": \"0xea60\",\n \"gasPrice\": \"0xb2d05e00\",\n \"hash\": \"0xcf328834c6f442ca03276c91842394b34d0e6ec76ca57bb30787ea51798346bd\",\n \"input\": \"0xa9059cbb0000000000000000000000004eebebf6dbc3a015d7930bf7d8f89bf559d105c50000000000000000000000000000000000000000000000004563918244f40000\",\n \"nonce\": \"0x7ba\",\n \"r\": \"0xf760dfcf92104045ba2b5c55ea71e6fee77572f9b1d6fabe29dc185180f5de5f\",\n \"s\": \"0x7c593c8d6aa10bb57e8ec97821c4bb94feee177752c9b8f885589bc45d215496\",\n \"to\": \"0x355a458d555151d3b27f94227960ade1504e526a\",\n \"transactionIndex\": \"0xcf\",\n \"v\": \"0x1b\",\n \"value\": \"0x0\"\n }\n ],\n \"transactionsRoot\": \"0xa49768f9a65c629abe7b3a5da07e8b3bfeca70fe917caf4d19e3158ac75bc539\",\n \"uncles\": [\n \"0x0ed987ed75d5ef38e7ec58e4efb6b2169c899c965bc54b90cb1c92faa556a209\"\n ]\n }\n}\n"
]
],
[
[
"#### eth_getBalance",
"_____no_output_____"
]
],
[
[
"def make_request_eth_getBalance(address, block_nb = 'latest', use_hex=False):\n return json.dumps({\n \"jsonrpc\": \"2.0\",\n \"method\": \"eth_getBalance\",\n \"params\": [address, hex(block_nb) if use_hex else block_nb],\n \"id\": 1\n })\n\n# eth_getBalance",
"_____no_output_____"
],
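[
"# Illustrative check of the use_hex flag: the JSON-RPC API expects block\n# numbers as 0x-prefixed hex strings, so an integer such as 5337125 must be\n# converted before it is placed in the request params.\nblock_nb = 5337125\nprint(hex(block_nb)) # '0x517025', as seen in the request further below\nprint(int(hex(block_nb), 0)) # round-trips back to 5337125",
"_____no_output_____"
],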
[
"address = '0xc8f88d1c1259060a799af77120db270cdce07e37'\nprint_json(http_post_request(eth_url,\n make_request_eth_getBalance(address)))",
"url: https://mainnet.infura.io/TzMi1NSXsXK2SzUuEY9Q\nrequest: {\"params\": [\"0xc8f88d1c1259060a799af77120db270cdce07e37\", \"latest\"], \"method\": \"eth_getBalance\", \"id\": 1, \"jsonrpc\": \"2.0\"} \n\n{\n \"id\": 1,\n \"jsonrpc\": \"2.0\",\n \"result\": \"0xcd4b0af59c8490\"\n}\n"
],
[
"WEI_ETH_FACTOR = 1000000000000000000.0\nprint('Balance of address \\'{}\\':\\n {}'.format(address, \n int(http_post_request(eth_url,\n make_request_eth_getBalance(address))['result'], \n 0)/WEI_ETH_FACTOR))",
"url: https://mainnet.infura.io/TzMi1NSXsXK2SzUuEY9Q\nrequest: {\"params\": [\"0xc8f88d1c1259060a799af77120db270cdce07e37\", \"latest\"], \"method\": \"eth_getBalance\", \"id\": 1, \"jsonrpc\": \"2.0\"} \n\nBalance of address '0xc8f88d1c1259060a799af77120db270cdce07e37':\n 0.057784880668116115\n"
],
[
"address = '0xc8f88d1c1259060a799af77120db270cdce07e37'\nprint_json(http_post_request(eth_url,\n make_request_eth_getBalance(address, \n block_nb = 5337125, \n use_hex = True)))",
"url: https://mainnet.infura.io/TzMi1NSXsXK2SzUuEY9Q\nrequest: {\"params\": [\"0xc8f88d1c1259060a799af77120db270cdce07e37\", \"0x517025\"], \"method\": \"eth_getBalance\", \"id\": 1, \"jsonrpc\": \"2.0\"} \n\n{\n \"id\": 1,\n \"jsonrpc\": \"2.0\",\n \"result\": \"0x1b9c02a0a23a70\"\n}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e72a724ef675e894d273cb64892950897dae030d | 802,597 | ipynb | Jupyter Notebook | Optimization in ML/Gradient_descent/Practical_4.ipynb | Omar-Safwat/Numerical_Methods_Projects | 280b0fe04baed8b484f1fea6ef03bbb38be2d946 | [
"MIT"
] | null | null | null | Optimization in ML/Gradient_descent/Practical_4.ipynb | Omar-Safwat/Numerical_Methods_Projects | 280b0fe04baed8b484f1fea6ef03bbb38be2d946 | [
"MIT"
] | null | null | null | Optimization in ML/Gradient_descent/Practical_4.ipynb | Omar-Safwat/Numerical_Methods_Projects | 280b0fe04baed8b484f1fea6ef03bbb38be2d946 | [
"MIT"
] | null | null | null | 1,084.590541 | 42,052 | 0.957433 | [
[
[
"**Name:** Omar Khaled Mahmoud Safwat Mohamed Safwat<br>\n\n**Group:** Alex group 3",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import r2_score\nfrom sklearn.preprocessing import StandardScaler # Standardize data for faster convergence\nimport seaborn as sns\nimport Linear_Regression as lr\nsns.set()",
"_____no_output_____"
]
],
[
[
"# Data generation",
"_____no_output_____"
]
],
[
[
"X = np.array([list(range(1, 20))]).T\ny = -1 * X + 2",
"_____no_output_____"
],
[
"# Plot data\nplt.scatter(X, y)\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")",
"_____no_output_____"
]
],
[
[
"## Adagrad\n### 1st trial",
"_____no_output_____"
]
],
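[
[
"Since the custom `Linear_Regression` module's internals are not shown, the next cell gives a minimal NumPy sketch of the Adagrad update this solver presumably implements: squared gradients are accumulated over all iterations, so each parameter's effective step size keeps shrinking as its accumulated gradient grows. The `alpha` and `eps` names mirror the `fit()` arguments used below; everything else is illustrative.",
"_____no_output_____"
]
],
[
[
"# Minimal Adagrad sketch (illustrative; not the Linear_Regression internals)\nimport numpy as np\n\ndef adagrad_step(theta, grad, G, alpha=0.01, eps=1e-8):\n # G accumulates element-wise squared gradients over all iterations\n G = G + grad ** 2\n # the effective learning rate decays per parameter as G grows\n theta = theta - alpha * grad / (np.sqrt(G) + eps)\n return theta, G\n\ntheta, G = np.zeros(2), np.zeros(2)\ntheta, G = adagrad_step(theta, np.array([0.5, -1.0]), G)\nprint(theta)",
"_____no_output_____"
]
],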
[
[
"lr_adagrad = lr.Linear_Regression(X, y)\ntheta = lr_adagrad.fit(solver=\"Adagrad\", alpha=0.01, max_epochs=1e3, standardize=False, stop_criteria=1e-3, eps=1e-8)\nlr_adagrad.show_summary()\nlr_adagrad.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_adagrad.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 1000\nMSE: 6.797968808367863\nStop criteria was reached first: False\nModel Training accuracy: 0.5468020794421424\n"
]
],
[
[
"### 2nd Trial",
"_____no_output_____"
]
],
[
[
"lr_adagrad = lr.Linear_Regression(X, y)\ntheta = lr_adagrad.fit(solver=\"Adagrad\", alpha=0.01, max_epochs=1e4, standardize=False, stop_criteria=1e-5, eps=1e-8)\nlr_adagrad.show_summary()\nlr_adagrad.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_adagrad.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 10000\nMSE: 0.7111622267407923\nStop criteria was reached first: False\nModel Training accuracy: 0.9525891848839472\n"
]
],
[
[
"## RMSprop\n### 1st trial",
"_____no_output_____"
]
],
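[
[
"For comparison, a minimal sketch of the RMSprop rule: instead of summing squared gradients forever as Adagrad does, it keeps an exponential moving average controlled by `beta_grad`, so old gradients decay and the step size does not vanish. The names mirror the `fit()` arguments used below; the actual `Linear_Regression` implementation may differ.",
"_____no_output_____"
]
],
[
[
"# Minimal RMSprop sketch (illustrative; names mirror the fit() arguments)\nimport numpy as np\n\ndef rmsprop_step(theta, grad, v, alpha=0.01, beta_grad=0.9, eps=1e-8):\n # exponential moving average of squared gradients (not an unbounded sum)\n v = beta_grad * v + (1 - beta_grad) * grad ** 2\n theta = theta - alpha * grad / (np.sqrt(v) + eps)\n return theta, v\n\ntheta, v = np.zeros(2), np.zeros(2)\ntheta, v = rmsprop_step(theta, np.array([0.5, -1.0]), v)\nprint(theta)",
"_____no_output_____"
]
],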
[
[
"lr_rms = lr.Linear_Regression(X, y)\ntheta = lr_rms.fit(solver=\"RMSprop\", alpha=0.01, max_epochs=1e3, standardize=False, stop_criteria=1e-3, eps=1e-8, beta_grad=0.9)\nlr_rms.show_summary()\nlr_rms.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_rms.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 300\nMSE: 0.32095489048071413\nStop criteria was reached first: True\nModel Training accuracy: 0.9786030073012857\n"
]
],
[
[
"### 2nd Trial",
"_____no_output_____"
]
],
[
[
"lr_rms = lr.Linear_Regression(X, y)\ntheta = lr_rms.fit(solver=\"RMSprop\", alpha=0.001, max_epochs=1e4, standardize=False, stop_criteria=1e-4, eps=1e-8, beta_grad=0.9)\nlr_rms.show_summary()\nlr_rms.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_rms.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 3117\nMSE: 0.022779696979792863\nStop criteria was reached first: True\nModel Training accuracy: 0.9984813535346805\n"
]
],
[
[
"## Adam\n### 1st trial",
"_____no_output_____"
]
],
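[
[
"And a minimal Adam sketch: it combines a momentum-style moving average of the gradient with an RMSprop-style moving average of its square, plus bias correction for the zero initialization. Mapping this notebook's `beta_grad`/`beta_nu` arguments onto Adam's usual first/second-moment decay rates is an assumption, since the `Linear_Regression` source is not shown.",
"_____no_output_____"
]
],
[
[
"# Minimal Adam sketch (mapping beta_grad/beta_nu to the two moment decay\n# rates is an assumption about the Linear_Regression internals)\nimport numpy as np\n\ndef adam_step(theta, grad, m, v, t, alpha=0.001,\n beta_grad=0.9, beta_nu=0.8, eps=1e-7):\n m = beta_grad * m + (1 - beta_grad) * grad # 1st moment (mean)\n v = beta_nu * v + (1 - beta_nu) * grad ** 2 # 2nd moment (uncentered var.)\n m_hat = m / (1 - beta_grad ** t) # bias correction for the\n v_hat = v / (1 - beta_nu ** t) # zero initialization\n theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)\n return theta, m, v\n\ntheta, m, v = np.zeros(2), np.zeros(2), np.zeros(2)\ntheta, m, v = adam_step(theta, np.array([0.5, -1.0]), m, v, t=1)\nprint(theta)",
"_____no_output_____"
]
],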
[
[
"lr_adam = lr.Linear_Regression(X, y)\ntheta = lr_adam.fit(\n solver=\"Adam\", \n alpha=0.001, \n max_epochs=1e3, \n standardize=False, \n stop_criteria=1e-3, \n eps=1e-7, \n beta_grad=0.9, \n beta_nu=0.8)\n\nlr_adam.show_summary()\nlr_adam.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_adam.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 909\nMSE: 0.8091644884352912\nStop criteria was reached first: True\nModel Training accuracy: 0.9460557007709806\n"
]
],
[
[
"### 2nd trial",
"_____no_output_____"
]
],
[
[
"lr_adam = lr.Linear_Regression(X, y)\ntheta = lr_adam.fit(\n solver=\"Adam\", \n alpha=0.01, \n max_epochs=1e4, \n standardize=False, \n stop_criteria=1e-3, \n eps=1e-7, \n beta_grad=0.9, \n beta_nu=0.8)\n\nlr_adam.show_summary()\nlr_adam.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_adam.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 424\nMSE: 0.023983926232788545\nStop criteria was reached first: True\nModel Training accuracy: 0.9984010715844808\n"
]
],
[
[
"# Compare between three algorithms\n\nAt \n* alpha = 0.01<br>\n* max_epochs=1e4, \n* standardize=False, \n* stop_criteria=1e-4, \n* eps=1e-7, \n* beta_grad=0.9, \n* beta_nu=0.8",
"_____no_output_____"
]
],
[
[
"# Adagrad\nlr_adagrad = lr.Linear_Regression(X, y)\ntheta = lr_adagrad.fit(solver=\"Adagrad\", alpha=0.01, max_epochs=1e4, standardize=False, stop_criteria=1e-4, eps=1e-7)\nlr_adagrad.show_summary()\nlr_adagrad.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_adagrad.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 4586\nMSE: 0.8887403018785204\nStop criteria was reached first: True\nModel Training accuracy: 0.9407506465414319\n"
],
[
"# RMS\nlr_rms = lr.Linear_Regression(X, y)\ntheta = lr_rms.fit(\n solver=\"RMSprop\", \n alpha=0.01, \n max_epochs=1e4, \n standardize=False, \n stop_criteria=1e-4, \n eps=1e-7, \n beta_grad=0.9)\nlr_rms.show_summary()\nlr_rms.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_rms.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 281\nMSE: 0.39088861507395783\nStop criteria was reached first: True\nModel Training accuracy: 0.9739407589950695\n"
],
[
"# Adam\nlr_adam = lr.Linear_Regression(X, y)\ntheta = lr_adam.fit(\n solver=\"Adam\", \n alpha=0.01, \n max_epochs=1e4, \n standardize=False, \n stop_criteria=1e-4, \n eps=1e-7, \n beta_grad=0.9, \n beta_nu=0.8)\n\nlr_adam.show_summary()\nlr_adam.plot_LR_2D(show_trials=True, N_trials_to_show= 10)\nlr_adam.plot_MSE()",
"Solver summary:\n===============\nNumber of iterations: 473\nMSE: 0.0004158716712860208\nStop criteria was reached first: True\nModel Training accuracy: 0.9999722752219142\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e72a73ae74b0ee5a51f9c5e325fabcecbaa0ba85 | 67,686 | ipynb | Jupyter Notebook | raspi/lab/.ipynb_checkpoints/imgdiff-Copy1-checkpoint.ipynb | ShogoNagano/KB_1801 | b0bea642f79e0b77fed897c5a936ba01181ea74f | [
"MIT"
] | 2 | 2018-10-23T09:00:15.000Z | 2018-10-24T03:56:20.000Z | raspi/lab/imgdiff.ipynb | ShogoNagano/KB_1801 | b0bea642f79e0b77fed897c5a936ba01181ea74f | [
"MIT"
] | 1 | 2018-10-20T14:33:21.000Z | 2018-10-21T04:48:52.000Z | raspi/lab/imgdiff.ipynb | ShogoNagano/KB_1801 | b0bea642f79e0b77fed897c5a936ba01181ea74f | [
"MIT"
] | 2 | 2018-10-21T04:34:06.000Z | 2018-10-22T03:10:12.000Z | 154.182232 | 28,140 | 0.884393 | [
[
[
"%matplotlib inline\n#画像をくっつける\nimport pandas as pd#行列計算,データフレームの処理\nimport cv2#画像処理\nimport numpy as np#高速計算\nimport seaborn as sns#可視化\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"img_src1 = cv2.imread(\"image10.jpg\", 0)\nimg_src2 = cv2.imread(\"image2.jpg\", 0)\n#img_src1=cv2.cvtColor(img_src1, cv2.COLOR_BGR2HSV)\n#img_src2=cv2.cvtColor(img_src2, cv2.COLOR_BGR2HSV)\n#plt.imshow(img_src1)",
"_____no_output_____"
],
[
"hifuku = []\nfor i in range (2,47):\n img_src2 = cv2.imread(\"image%d.jpg\"%(i), 0)\n fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()\n fgmask = fgbg.apply(img_src1)\n fgmask = fgbg.apply(img_src2)\n# kernel = np.ones((3,3),np.uint8)\n# opening = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)\n# closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)\n# fgmask=closing\n cv2.imwrite(\"imgdiff%d_opcl.jpg\"%(i),fgmask)\n hifuku.append(fgmask.flatten().sum())\n#plt.imshow(fgmask)",
"_____no_output_____"
],
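[
"# Illustrative follow-up: normalize the raw mask-pixel sums stored in\n# hifuku (presumably 'coverage') into a foreground ratio in [0, 1]; assumes\n# the masks are the 0/255 single-channel images produced by the MOG subtractor\ncoverage = [s / (255.0 * fgmask.size) for s in hifuku]\nprint(coverage[:5])",
"_____no_output_____"
],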
[
"plt.style.use('ggplot') \nfont = {'family' : 'meiryo'}\n#matplotlib.rc('font', **font)\npd.DataFrame(hifuku).plot(alpha=0.6, figsize=(12,3))",
"_____no_output_____"
],
[
"#img_diff = cv2.absdiff(img_src2, img_src1)\n\n# 差分を二値化\n#img_diffm = cv2.threshold(img_src1, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)",
"_____no_output_____"
],
[
"for i in range (2,47):\n# 背景画像との差分を算出\n img_src2 = cv2.imread(\"image%d.jpg\"%(i), 0)\n img_diff = cv2.absdiff(img_src2, img_src1)\n\n # 差分を二値化\n img_diffm = cv2.threshold(img_diff, 60, 255, cv2.THRESH_BINARY)[1]\n\n # 膨張処理、収縮処理を施してマスク画像を生成\n operator = np.ones((3, 3), np.uint8)\n img_dilate = cv2.dilate(img_diffm, operator, iterations=4)\n img_mask = cv2.erode(img_dilate, operator, iterations=4)\n cv2.imwrite(\"imgdiff_binary_%d.jpg\"%(i),img_mask)",
"_____no_output_____"
],
[
"plt.imshow(cv2.cvtColor(img_mask, cv2.COLOR_RGB2GRAY))",
"_____no_output_____"
],
[
"cv2.imwrite(\"test.jpg\",img_mask)",
"_____no_output_____"
],
[
"hifuku = []\nfor i in range (2,47):\n img_src2 = cv2.imread(\"image%d.jpg\"%(i), 1)\n fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()\n fgmask = fgbg.apply(img_src1)\n fgmask = fgbg.apply(img_src2)\n# kernel = np.ones((3,3),np.uint8)\n# opening = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)\n# closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)\n# fgmask=closing\n cv2.imwrite(\"imgdiff%d_opcl.jpg\"%(i),fgmask)\n hifuku.append(fgmask.flatten().sum())\n#plt.imshow(fgmask)",
"_____no_output_____"
],
[
"plt.style.use('ggplot') \nfont = {'family' : 'meiryo'}\n#matplotlib.rc('font', **font)\npd.DataFrame(hifuku).plot(alpha=0.6, figsize=(12,3))",
"_____no_output_____"
],
[
"fgmask.flatten().sum()",
"_____no_output_____"
],
[
"fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()\nfgmask = fgbg.apply(img_src1)\nfgmask = fgbg.apply(img_src2)",
"_____no_output_____"
],
[
"plt.imshow(fgmask)",
"_____no_output_____"
],
[
"# 背景画像との差分を算出\nimg_diff = cv2.absdiff(img_src2, img_src1)\n\n# 差分を二値化\nimg_diffm = cv2.threshold(img_diff, 20, 255, cv2.THRESH_BINARY)[1]\n\n\n# 膨張処理、収縮処理を施してマスク画像を生成\noperator = np.ones((3, 3), np.uint8)\nimg_dilate = cv2.dilate(img_diffm, operator, iterations=4)\nimg_mask = cv2.erode(img_dilate, operator, iterations=4)\n\n# マスク画像を使って対象を切り出す\nimg_dst = cv2.bitwise_and(img_src2, img_mask)",
"_____no_output_____"
],
[
"# 画像の読み込み\nimg_src1 = cv2.imread(\"image1.jpg\", 1)\nimg_src2 = cv2.imread(\"image2.jpg\", 1)\n\nfgbg = cv2.bgsegm.createBackgroundSubtractorMOG()\n\nfgmask = fgbg.apply(img_src1)\nfgmask = fgbg.apply(img_src2)\n\n# 表示\nplt.imshow(fgmask)\n\n# 検出画像\n#bg_diff_path = './diff.jpg'\n#cv2.imwrite(bg_diff_path,fgmask)\n\n#cv2.waitKey(0)\n#cv2.destroyAllWindows()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72a7f65af18483aba2e7197cc072a43c29f9fc4 | 49,402 | ipynb | Jupyter Notebook | Recurrent Neural Networks/Character_Level_RNN_Exercise.ipynb | sayakpaul/Favorite-Execises-from-Udacity-s-Deep-Learning-Course | f30716bcfe0b64e35c0b2d5a2d5e1e6daa3ac7ff | [
"Apache-2.0"
] | 1 | 2020-08-09T06:03:18.000Z | 2020-08-09T06:03:18.000Z | Recurrent Neural Networks/Character_Level_RNN_Exercise.ipynb | sayakpaul/Favorite-Execises-from-Udacity-s-Deep-Learning-Course | f30716bcfe0b64e35c0b2d5a2d5e1e6daa3ac7ff | [
"Apache-2.0"
] | null | null | null | Recurrent Neural Networks/Character_Level_RNN_Exercise.ipynb | sayakpaul/Favorite-Execises-from-Udacity-s-Deep-Learning-Course | f30716bcfe0b64e35c0b2d5a2d5e1e6daa3ac7ff | [
"Apache-2.0"
] | 4 | 2019-07-07T18:48:45.000Z | 2021-03-05T00:03:58.000Z | 40.794385 | 564 | 0.553581 | [
[
[
"# Character-Level LSTM in PyTorch\n\nIn this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!**\n\nThis network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN.\n\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"_____no_output_____"
],
[
"First let's load in our required resources for data loading and model creation.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F",
"_____no_output_____"
]
],
[
[
"## Load in Data\n\nThen, we'll load the Anna Karenina text file and convert it into integers for our network to use. ",
"_____no_output_____"
]
],
[
[
"# open text file and read in data as `text`\nwith open('data/anna.txt', 'r') as f:\n text = f.read()",
"_____no_output_____"
]
],
[
[
"Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.",
"_____no_output_____"
]
],
[
[
"text[:100]",
"_____no_output_____"
]
],
[
[
"### Tokenization\n\nIn the cells, below, I'm creating a couple **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.",
"_____no_output_____"
]
],
[
[
"# encode the text and map each character to an integer and vice versa\n\n# we create two dictionaries:\n# 1. int2char, which maps integers to characters\n# 2. char2int, which maps characters to unique integers\nchars = tuple(set(text))\nint2char = dict(enumerate(chars))\nchar2int = {ch: ii for ii, ch in int2char.items()}\n\n# encode the text\nencoded = np.array([char2int[ch] for ch in text])",
"_____no_output_____"
]
],
[
[
"And we can see those same characters from above, encoded as integers.",
"_____no_output_____"
]
],
[
[
"encoded[:100]",
"_____no_output_____"
]
],
[
[
"## Pre-processing the data\n\nAs you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded** meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only it's corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that!\n",
"_____no_output_____"
]
],
[
[
"def one_hot_encode(arr, n_labels):\n \n # Initialize the the encoded array\n one_hot = np.zeros((arr.size, n_labels), dtype=np.float32)\n \n # Fill the appropriate elements with ones\n one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.\n \n # Finally reshape it to get back to the original array\n one_hot = one_hot.reshape((*arr.shape, n_labels))\n \n return one_hot",
"_____no_output_____"
],
[
"# check that the function works as expected\ntest_seq = np.array([[3, 5, 1]])\none_hot = one_hot_encode(test_seq, 8)\n\nprint(one_hot)",
"[[[0. 0. 0. 1. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0. 1. 0. 0.]\n [0. 1. 0. 0. 0. 0. 0. 0.]]]\n"
]
],
[
[
"## Making training mini-batches\n\n\nTo train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:\n\n<img src=\"assets/[email protected]\" width=500px>\n\n\n<br>\n\nIn this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `batch_size`. Each of our sequences will be `seq_length` long.\n\n### Creating Batches\n\n**1. The first thing we need to do is discard some of the text so we only have completely full mini-batches. **\n\nEach batch contains $N \\times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the seq_length or number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$.\n\n**2. After that, we need to split `arr` into $N$ batches. ** \n\nYou can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \\times (M * K)$.\n\n**3. Now that we have this array, we can iterate through it to get our mini-batches. **\n\nThe idea is each batch is a $N \\times M$ window on the $N \\times (M * K)$ array. For each subsequent batch, the window moves over by `seq_length`. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of tokens in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `seq_length` wide.\n\n> **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.**",
"_____no_output_____"
]
],
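[
[
"Before writing the generator, it is worth sanity-checking the arithmetic from step 1 with the concrete sizes used in the test further below (a batch size of 8 and 50 sequence steps); this sketch reuses the `encoded` array defined earlier.",
"_____no_output_____"
]
],
[
[
"# sanity check of the batch arithmetic, using the encoded text from above\nbatch_size, seq_length = 8, 50\nn_batches = len(encoded) // (batch_size * seq_length)\nto_keep = n_batches * batch_size * seq_length\nprint(n_batches, to_keep) # for this text: 4963 full batches, 1985200 chars kept",
"_____no_output_____"
]
],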
[
[
"def get_batches(arr, batch_size, seq_length):\n '''Create a generator that returns batches of size\n batch_size x seq_length from arr.\n \n Arguments\n ---------\n arr: Array you want to make batches from\n batch_size: Batch size, the number of sequences per batch\n seq_length: Number of encoded chars in a sequence\n '''\n \n ## TODO: Get the number of batches we can make\n n_batches = len(arr)//(batch_size*seq_length)\n to_keep = n_batches*seq_length*batch_size\n # print(to_keep)\n \n ## TODO: Keep only enough characters to make full batches\n arr = arr[:to_keep]\n \n ## TODO: Reshape into batch_size rows\n arr = arr.reshape((batch_size, -1))\n \n ## TODO: Iterate over the batches using a window of size seq_length\n for n in range(0, arr.shape[1], seq_length):\n # The features\n x = arr[:, n:n+seq_length]\n # The targets, shifted by one\n y = np.zeros_like(x)\n try:\n y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+seq_length]\n except IndexError:\n y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]\n yield x, y",
"_____no_output_____"
]
],
[
[
"### Test Your Implementation\n\nNow I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps.",
"_____no_output_____"
]
],
[
[
"batches = get_batches(encoded, 8, 50)\nx, y = next(batches)",
"1985200\n"
],
[
"# printing out the first 10 items in a sequence\nprint('x\\n', x[:10, :10])\nprint('\\ny\\n', y[:10, :10])",
"x\n [[52 66 23 3 30 72 76 2 26 36]\n [71 50 82 2 30 66 23 30 2 23]\n [72 82 57 2 50 76 2 23 2 20]\n [71 2 30 66 72 2 48 66 25 72]\n [ 2 71 23 29 2 66 72 76 2 30]\n [48 33 71 71 25 50 82 2 23 82]\n [ 2 74 82 82 23 2 66 23 57 2]\n [34 14 56 50 82 71 0 44 65 2]]\n\ny\n [[66 23 3 30 72 76 2 26 36 36]\n [50 82 2 30 66 23 30 2 23 30]\n [82 57 2 50 76 2 23 2 20 50]\n [ 2 30 66 72 2 48 66 25 72 20]\n [71 23 29 2 66 72 76 2 30 72]\n [33 71 71 25 50 82 2 23 82 57]\n [74 82 82 23 2 66 23 57 2 71]\n [14 56 50 82 71 0 44 65 2 47]]\n"
]
],
[
[
"If you implemented `get_batches` correctly, the above output should look something like \n```\nx\n [[25 8 60 11 45 27 28 73 1 2]\n [17 7 20 73 45 8 60 45 73 60]\n [27 20 80 73 7 28 73 60 73 65]\n [17 73 45 8 27 73 66 8 46 27]\n [73 17 60 12 73 8 27 28 73 45]\n [66 64 17 17 46 7 20 73 60 20]\n [73 76 20 20 60 73 8 60 80 73]\n [47 35 43 7 20 17 24 50 37 73]]\n\ny\n [[ 8 60 11 45 27 28 73 1 2 2]\n [ 7 20 73 45 8 60 45 73 60 45]\n [20 80 73 7 28 73 60 73 65 7]\n [73 45 8 27 73 66 8 46 27 65]\n [17 60 12 73 8 27 28 73 45 27]\n [64 17 17 46 7 20 73 60 20 80]\n [76 20 20 60 73 8 60 80 73 17]\n [35 43 7 20 17 24 50 37 73 36]]\n ```\n although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`.",
"_____no_output_____"
],
[
"---\n## Defining the network with PyTorch\n\nBelow is where you'll define the network.\n\n<img src=\"assets/charRNN.png\" width=500px>\n\nNext, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters.",
"_____no_output_____"
],
[
"### Model Structure\n\nIn `__init__` the suggested structure is as follows:\n* Create and store the necessary dictionaries (this has been done for you)\n* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)\n* Define a dropout layer with `drop_prob`\n* Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)\n* Finally, initialize the weights (again, this has been given)\n\nNote that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`.",
"_____no_output_____"
],
[
"---\n### LSTM Inputs/Outputs\n\nYou can create a basic [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) as follows\n\n```python\nself.lstm = nn.LSTM(input_size, n_hidden, n_layers, \n dropout=drop_prob, batch_first=True)\n```\n\nwhere `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we can stack up the LSTM cells into layers using `.view`. With this, you pass in a list of cells and it will send the output of one cell into the next cell.\n\nWe also need to create an initial hidden state of all zeros. This is done like so\n\n```python\nself.init_hidden()\n```",
"_____no_output_____"
]
],
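[
[
"To make the shapes concrete, the stand-alone check below shows what a batched LSTM consumes and produces. The sizes (an 83-character vocabulary, 256 hidden units, 2 layers, batches of 8 sequences of 50 steps) are illustrative stand-ins for this model's parameters.",
"_____no_output_____"
]
],
[
[
"# stand-alone shape check with illustrative sizes\nimport torch\nfrom torch import nn\n\nlstm = nn.LSTM(83, 256, 2, dropout=0.5, batch_first=True)\nx = torch.zeros(8, 50, 83) # (batch, seq_length, n_chars)\nh = (torch.zeros(2, 8, 256), # hidden state: (n_layers, batch, n_hidden)\n torch.zeros(2, 8, 256)) # cell state: same shape\nout, h = lstm(x, h)\nprint(out.shape) # torch.Size([8, 50, 256])",
"_____no_output_____"
]
],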
[
[
"# check if GPU is available\ntrain_on_gpu = torch.cuda.is_available()\nif(train_on_gpu):\n print('Training on GPU!')\nelse: \n print('No GPU available, training on CPU; consider making n_epochs very small.')",
"No GPU available, training on CPU; consider making n_epochs very small.\n"
],
[
"class CharRNN(nn.Module):\n \n def __init__(self, tokens, n_hidden=256, n_layers=2,\n drop_prob=0.5, lr=0.001):\n super().__init__()\n self.drop_prob = drop_prob\n self.n_layers = n_layers\n self.n_hidden = n_hidden\n self.lr = lr\n \n # creating character dictionaries\n self.chars = tokens\n self.int2char = dict(enumerate(self.chars))\n self.char2int = {ch: ii for ii, ch in self.int2char.items()}\n \n ## TODO: define the layers of the model\n self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers, \n dropout=drop_prob, batch_first=True)\n self.dropout = nn.Dropout(p=drop_prob)\n self.fc = nn.Linear(n_hidden, len(self.chars))\n \n def forward(self, x, hidden):\n ''' Forward pass through the network. \n These inputs are x, and the hidden/cell state `hidden`. '''\n \n ## TODO: Get the outputs and the new hidden state from the lstm\n r_output, hidden = self.lstm(x, hidden)\n out = self.dropout(r_output)\n out = out.contiguous().view(-1, self.n_hidden)\n out = self.fc(out)\n \n # return the final output and the hidden state\n return out, hidden\n \n \n def init_hidden(self, batch_size):\n ''' Initializes hidden state '''\n # Create two new tensors with sizes n_layers x batch_size x n_hidden,\n # initialized to zero, for hidden state and cell state of LSTM\n weight = next(self.parameters()).data\n \n if (train_on_gpu):\n hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),\n weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())\n else:\n hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),\n weight.new(self.n_layers, batch_size, self.n_hidden).zero_())\n \n return hidden\n ",
"_____no_output_____"
]
],
[
[
"## Time to train\n\nThe train function gives us the ability to set the number of epochs, the learning rate, and other parameters.\n\nBelow we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!\n\nA couple of details about training: \n>* Within the batch loop, we detach the hidden state from its history; this time setting it equal to a new *tuple* variable because an LSTM has a hidden state that is a tuple of the hidden and cell states.\n* We use [`clip_grad_norm_`](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) to help prevent exploding gradients.",
"_____no_output_____"
]
],
[
[
"from tqdm import tqdm \n\ndef train(net, data, epochs=10, batch_size=10, seq_length=50, \n lr=0.001, clip=5, val_frac=0.1, print_every=10):\n ''' Training a network \n \n Arguments\n ---------\n \n net: CharRNN network\n data: text data to train the network\n epochs: Number of epochs to train\n batch_size: Number of mini-sequences per mini-batch, aka batch size\n seq_length: Number of character steps per mini-batch\n lr: learning rate\n clip: gradient clipping\n val_frac: Fraction of data to hold out for validation\n print_every: Number of steps for printing training and validation loss\n \n '''\n net.train()\n \n opt = torch.optim.Adam(net.parameters(), lr=lr)\n criterion = nn.CrossEntropyLoss()\n \n # create training and validation data\n val_idx = int(len(data)*(1-val_frac))\n data, val_data = data[:val_idx], data[val_idx:]\n \n if(train_on_gpu):\n net.cuda()\n \n counter = 0\n n_chars = len(net.chars)\n for e in tqdm(range(epochs)):\n # initialize hidden state\n h = net.init_hidden(batch_size)\n \n for x, y in get_batches(data, batch_size, seq_length):\n counter += 1\n \n # One-hot encode our data and make them Torch tensors\n x = one_hot_encode(x, n_chars)\n inputs, targets = torch.from_numpy(x), torch.from_numpy(y)\n \n if(train_on_gpu):\n inputs, targets = inputs.cuda(), targets.cuda()\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n # zero accumulated gradients\n net.zero_grad()\n \n # get the output from the model\n output, h = net(inputs, h)\n \n # calculate the loss and perform backprop\n loss = criterion(output, targets.view(batch_size*seq_length).long())\n loss.backward()\n # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n nn.utils.clip_grad_norm_(net.parameters(), clip)\n opt.step()\n \n # loss stats\n if counter % print_every == 0:\n # Get validation loss\n val_h = net.init_hidden(batch_size)\n val_losses = []\n net.eval()\n for x, y in get_batches(val_data, batch_size, seq_length):\n # One-hot encode our data and make them Torch tensors\n x = one_hot_encode(x, n_chars)\n x, y = torch.from_numpy(x), torch.from_numpy(y)\n \n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n val_h = tuple([each.data for each in val_h])\n \n inputs, targets = x, y\n if(train_on_gpu):\n inputs, targets = inputs.cuda(), targets.cuda()\n\n output, val_h = net(inputs, val_h)\n val_loss = criterion(output, targets.view(batch_size*seq_length).long())\n \n val_losses.append(val_loss.item())\n \n net.train() # reset to train mode after iterationg through validation data\n \n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Step: {}...\".format(counter),\n \"Loss: {:.4f}...\".format(loss.item()),\n \"Val Loss: {:.4f}\".format(np.mean(val_losses)))",
"_____no_output_____"
]
],
[
[
"## Instantiating the model\n\nNow we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batches sizes, and start training!",
"_____no_output_____"
]
],
[
[
"# define and print the net\nn_hidden=256\nn_layers=2\n\nnet = CharRNN(chars, n_hidden, n_layers)\nprint(net)",
"CharRNN(\n (lstm): LSTM(83, 256, num_layers=2, batch_first=True, dropout=0.5)\n (dropout): Dropout(p=0.5)\n (fc): Linear(in_features=256, out_features=83, bias=True)\n)\n"
]
],
[
[
"### Set your training hyperparameters!",
"_____no_output_____"
]
],
[
[
"batch_size = 128\nseq_length = 75\nn_epochs = 5 # start small if you are just testing initial behavior\n\n# train the model\ntrain(net, encoded, epochs=n_epochs, batch_size=batch_size, seq_length=seq_length, lr=0.001, print_every=10)",
"\n 0%| | 0/5 [00:00<?, ?it/s]\u001b[A"
]
],
[
[
"## Getting the best model\n\nTo set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network.",
"_____no_output_____"
],
[
"## Hyperparameters\n\nHere are the hyperparameters for the network.\n\nIn defining the model:\n* `n_hidden` - The number of units in the hidden layers.\n* `n_layers` - Number of hidden LSTM layers to use.\n\nWe assume that dropout probability and learning rate will be kept at the default, in this example.\n\nAnd in training:\n* `batch_size` - Number of sequences running through the network in one pass.\n* `seq_length` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\n* `lr` - Learning rate for training\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).\n\n> ## Tips and Tricks\n\n>### Monitoring Validation Loss vs. Training Loss\n>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\n> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\n> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\n> ### Approximate number of parameters\n\n> The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n\n> - The number of parameters in your model. This is printed when you start training.\n> - The size of your dataset. 1MB file is approximately 1 million characters.\n\n>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\n> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.\n> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\n> ### Best models strategy\n\n>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). 
Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\n\n>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\n\n>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.",
"_____no_output_____"
],
[
"## Checkpoint\n\nAfter training, we'll save the model so we can load it again later if we need too. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters.",
"_____no_output_____"
]
],
[
[
"# change the name, for saving multiple files\nmodel_name = 'rnn_sayak_5.net'\n\ncheckpoint = {'n_hidden': net.n_hidden,\n 'n_layers': net.n_layers,\n 'state_dict': net.state_dict(),\n 'tokens': net.chars}\n\nwith open(model_name, 'wb') as f:\n torch.save(checkpoint, f)",
"_____no_output_____"
]
],
[
[
"---\n## Making Predictions\n\nNow that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!\n\n### A note on the `predict` function\n\nThe output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.\n\n> To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character.\n\n### Top K sampling\n\nOur predictions come from a categorical probability distribution over all the possible characters. We can make the sample text and make it more reasonable to handle (with less variables) by only considering some $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about [topk, here](https://pytorch.org/docs/stable/torch.html#torch.topk).\n",
"_____no_output_____"
]
],
[
[
"def predict(net, char, h=None, top_k=None):\n ''' Given a character, predict the next character.\n Returns the predicted character and the hidden state.\n '''\n \n # tensor inputs\n x = np.array([[net.char2int[char]]])\n x = one_hot_encode(x, len(net.chars))\n inputs = torch.from_numpy(x)\n \n if(train_on_gpu):\n inputs = inputs.cuda()\n \n # detach hidden state from history\n h = tuple([each.data for each in h])\n # get the output of the model\n out, h = net(inputs, h)\n\n # get the character probabilities\n p = F.softmax(out, dim=1).data\n if(train_on_gpu):\n p = p.cpu() # move to cpu\n \n # get top characters\n if top_k is None:\n top_ch = np.arange(len(net.chars))\n else:\n p, top_ch = p.topk(top_k)\n top_ch = top_ch.numpy().squeeze()\n \n # select the likely next character with some element of randomness\n p = p.numpy().squeeze()\n char = np.random.choice(top_ch, p=p/p.sum())\n \n # return the encoded value of the predicted char and the hidden state\n return net.int2char[char], h",
"_____no_output_____"
]
],
[
[
"### Priming and generating text \n\nTypically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.",
"_____no_output_____"
]
],
[
[
"def sample(net, size, prime='The', top_k=None):\n \n if(train_on_gpu):\n net.cuda()\n else:\n net.cpu()\n \n net.eval() # eval mode\n \n # First off, run through the prime characters\n chars = [ch for ch in prime]\n h = net.init_hidden(1)\n for ch in prime:\n char, h = predict(net, ch, h, top_k=top_k)\n\n chars.append(char)\n \n # Now pass in the previous character and get a new one\n for ii in range(size):\n char, h = predict(net, chars[-1], h, top_k=top_k)\n chars.append(char)\n\n return ''.join(chars)",
"_____no_output_____"
],
[
"print(sample(net, 1000, prime='Anna', top_k=5))",
"Annad\nand she samidad hit\nand the\nserstaly to would the had the ware a sompal it his warl on the him and and have to his all some the shings at a mile of thoughs. And he his\nare whrouther stell wouse as it her words with him had wounds, his the\ncalled wint,\nand wame to mare the witt the\nhed and andarisched a than the seld and at and the had\nand her alr\nwas her the child\nof\nher sead the sain what ale sheming the while\nand the poncinch wathe to has the wat her somed as though the\nhad her same\nsaid ham tass, astand whilk he\nseid, to he would and her tamped on the hersent at the coult, whought her.,\nI wourd\nwat so hould as\nthe shese tore to showe at tar and. Ive thit had he to said to he sood that he wat, and a dilling that so derated whale\nthough to him. The sond\nout the hadss of the\ncean heard, sterted and she sher of\nthe cendess, and so had as in a lent of her and\nateens the wald har the sansing, sail with see she, wimh\na wiss to her..\n\n\"I home the daly, was a sonetent and he has seade he\npr\n"
]
],
[
[
"## Loading a checkpoint",
"_____no_output_____"
]
],
[
[
"# Here we have loaded in a model that trained over 20 epochs `rnn_20_epoch.net`\nwith open('rnn_sayak_5.net', 'rb') as f:\n checkpoint = torch.load(f)\n \nloaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers'])\nloaded.load_state_dict(checkpoint['state_dict'])",
"_____no_output_____"
],
[
"# Sample using a loaded model\nprint(sample(loaded, 2000, top_k=5, prime=\"And Levin said\"))",
"And Levin said.\"\n\n\"The was wame\nthat thet have and shy whis seid wost has as with the\nhad havled, her\nwas a shas to seall have the cond whor\nhad\nthile to dere to to this toming and that were to the wourd of\na derited his andad wom and\nwhith the whas alladide thet and the wert the promass.\n\nThe hessing he hive wos there\nsore one wome all had thrat wore was\nand, whene a shore\nthe had the\nhad hinds had had hind thought he hed\nthat he ham his to mintens to\ndould what\nhe alkey the sees of he sall over him\nher whoned\nhid and word the cout on she samen he the have ans antting that wene hind\na somed the cealt.\n\n\"Yin, soming he\nthith out of she sis ho monte the has the his the\nwas he ale\nthat she sampinct and wore, tally were west him to his was shand had to he wost of\na to ser were sale wish that. And at she cam and the seres his sees wish\nher. The wise a thret a the sone the wand a droming how as\nhe her seet the wert her hes and the chims to thome as ale that shished the her sore and the\nwat this her to the him.\n\nThe sand heand her the hid the sand of the\ncondes in the whong him horde whome had had ham her. Bot and ham\nhaving and woush her had\nhis whall at the his att her to time tind wat whing wher and serted what he\nprased, worle stersining and to masted, and the condlover hid to her and tore stere whan he treps a would the\npoless he tham the coust wor of her sas wous in ather, and she hes\nalrast a dent and ste to has\nand shat a derperessed of her some the silk on\natered and there have to sime tere whet to so the hed and thother\nthe coust\nand\nto hins of the cesser on had so to thas at wam they he\ndastented whe hom has seart and seane that she silloving.\n\n\"Well as at had shang.\n\n\"I\nwalles, and shame his\nside, all her seing the wead the wander\nto see te tilk of the harded, hin her at her\nalede has the whothe had his was tho can the\nham a to darle whet, he hid, and thret the had her him wound at at a did hear tamale,\" the pastian a wong at\nthe har was\nthe werter, at he his\nsisele there wi\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e72a81a44ef1e352582e54a4fd89bc095fbf149f | 14,195 | ipynb | Jupyter Notebook | natural-language-processing/NLP_Disaster_Recovery_Translation.ipynb | dr-natetorious/studio-lab-examples | a3d1a23b1c60853923dfc0cdd48e6c35d1e0749a | [
"Apache-2.0"
] | null | null | null | natural-language-processing/NLP_Disaster_Recovery_Translation.ipynb | dr-natetorious/studio-lab-examples | a3d1a23b1c60853923dfc0cdd48e6c35d1e0749a | [
"Apache-2.0"
] | null | null | null | natural-language-processing/NLP_Disaster_Recovery_Translation.ipynb | dr-natetorious/studio-lab-examples | a3d1a23b1c60853923dfc0cdd48e6c35d1e0749a | [
"Apache-2.0"
] | null | null | null | 30.858696 | 675 | 0.578091 | [
[
[
"# Fine-tune T5 locally for machine translation on COVID-19 Health Service Announcements with Hugging Face\n\n[](https://studiolab.sagemaker.aws/import/github/aws/studio-lab-examples/blob/main/natural-language-processing/NLP_Disaster_Recovery_Translation.ipynb)\n\nThis notebook requires running SageMaker Lab on an [Amazon Elastic Compute Cloud (Amazon EC2)]( https://aws.amazon.com/ec2/) GPU-enabled instance like the **g4dn.xlarge**. If you are not using that right now, please restart your session and select **GPU** to speed up the training process to tens of minutes rather than hours.\n\nIf more information on training a large-scale machine translation model, see [Use Hugging Face on Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)! ",
"_____no_output_____"
],
[
"### Step 0. Install all necessary packages",
"_____no_output_____"
]
],
[
[
"%%writefile requirements.txt\n\nipywidgets\ngit+https://github.com/huggingface/transformers\ndatasets\nsacrebleu\ntorch\nsentencepiece",
"_____no_output_____"
],
[
"%pip install -r requirements.txt",
"_____no_output_____"
],
[
"import IPython\n# make sure to restart your kernel to use the newly install packages\n# IPython.Application.instance().kernel.do_shutdown(True) ",
"_____no_output_____"
]
],
[
[
"## Step 1. Explore the available datasets on Translators without Borders \n\nNext, download a preferred language pair for training the translation model. This notebook chose English to Spanish, but you can select any two languages.\n\n- Overall site page: https://tico-19.github.io/\n\n- Page with all language pairs: https://tico-19.github.io/memories.html \n\nScroll through all supported language pairs and pick your favorite. This notebook demonstrates English to Spanish (`en-to-es`)\n\nCopy the link to that pair, for `en-to-es` it looks like this:\n- https://tico-19.github.io/data/TM/all.en-es-LA.tmx.zip ",
"_____no_output_____"
]
],
[
[
"path_to_my_data = 'https://tico-19.github.io/data/TM/all.en-es-LA.tmx.zip'",
"_____no_output_____"
],
[
"!wget {path_to_my_data}",
"_____no_output_____"
],
[
"local_file = path_to_my_data.split('/')[-1]\nprint (local_file)\nfilename = local_file.split('.zip')[0]\nprint (filename)",
"_____no_output_____"
],
[
"!unzip {local_file}",
"_____no_output_____"
]
],
[
[
"## Step 2: Extract data from `.tmx` file type \nNext, you can use this local function to extract data from the `.tmx` file type and format for local training with Hugging Face.",
"_____no_output_____"
]
],
[
[
"# paste the name of your file and language codes here\nsource_code_1 = 'en'\ntarget_code_2 = 'es'",
"_____no_output_____"
],
[
"def parse_tmx(filename, source_code_1, target_code_2):\n '''\n Takes a local TMX filename and codes for source and target languages. \n Walks through your file, row by row, looking for tmx / html specific formatting.\n If there's a regex match, will clean your string and add to a dictionary for downstream pandas formatting.\n '''\n \n data = {source_code_1:[], target_code_2:[]}\n\n with open(filename) as f:\n\n for row in f.readlines():\n\n if not row.endswith('</seg></tuv>\\n'):\n continue\n\n if row.startswith('<seg>'):\n\n st_1 = row.strip()\n\n st_1 = st_1.replace('<seg>', '')\n st_1 = st_1.replace('</seg></tuv>', '')\n\n data[source_code_1].append(st_1)\n\n # when you use your own target code, remove the -LA string \n if row.startswith('<tuv xml:lang=\"{}-LA\"><seg>'.format(target_code_2)):\n\n st_2 = row.strip()\n # when you use your own target code, remove the -LA string \n st_2 = st_2.replace('<tuv xml:lang=\"{}-LA\"><seg>'.format(target_code_2), '')\n st_2 = st_2.replace('</seg></tuv>', '')\n\n data[target_code_2].append(st_2)\n \n return data\n\ndata = parse_tmx(filename, source_code_1, target_code_2)",
"_____no_output_____"
],
[
"# this makes sure you got actual pairs\nassert len(data[source_code_1]) == len(data[target_code_2])",
"_____no_output_____"
],
[
"import pandas as pd\n\ndf = pd.DataFrame.from_dict(data, orient = 'columns')\n\ndf.head()",
"_____no_output_____"
],
[
"# write to disk in case you need to restart your kernel later\ndf.to_csv('language_pairs.csv', index=False, header=True)",
"_____no_output_____"
]
],
[
[
"## Step 3. Format extracted data for machine translation with Hugging Face\n\nHugging Face’s translation module is documented on [GitHub](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation), and their official documentation also includes additional details for [loading the data set](https://huggingface.co/docs/datasets/loading_datasets.html#json-files).\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_csv('language_pairs.csv')\ndf.head()",
"_____no_output_____"
]
],
[
[
"The translation task only supports custom [JSONLINES formatted files](https://jsonlines.org/). Each line is a dictionary with a key \"translation\" and its value another dictionary whose keys are the language pair. For example:\n\n```json\n{ \"translation\": { \"en\": \"Others have dismissed him as a joke.\", \"ro\": \"Alții l-au numit o glumă.\" } }\n{ \"translation\": { \"en\": \"And some are holding out for an implosion.\", \"ro\": \"Iar alții așteaptă implozia.\" } }\n```\n",
"_____no_output_____"
]
],
[
[
"objs = []\n\nfor idx, row in df.iterrows():\n \n obj = {\"translation\": {source_code_1: row[source_code_1], target_code_2: row[target_code_2]}} \n objs.append(obj)",
"_____no_output_____"
],
[
"objs[:5]",
"_____no_output_____"
],
[
"import json \n!mkdir data\nwith open('data/train.json', 'w') as f:\n for row in objs:\n j = json.dumps(row, ensure_ascii = False)\n f.write(j)\n f.write('\\n')",
"_____no_output_____"
]
],
[
[
"## Step 4 - Finetune a machine translation model locally\n\nFourth, download the raw Python file from Hugging Face to fine-tune your model.",
"_____no_output_____"
]
],
[
[
"!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/pytorch/translation/run_translation.py",
"_____no_output_____"
],
[
"# full hugging face Trainer API args available here\n# https://github.com/huggingface/transformers/blob/de635af3f1ef740aa32f53a91473269c6435e19e/src/transformers/training_args.py\n# T5 trainig args available here\n# https://huggingface.co/transformers/model_doc/t5.html#t5config\n!python run_translation.py \\\n --model_name_or_path t5-small \\\n --do_train \\\n --source_lang en \\\n --target_lang es \\\n --source_prefix \"translate English to Spanish: \" \\\n --train_file data/train.json \\\n --output_dir output/tst-translation \\\n --per_device_train_batch_size=4 \\\n --per_device_eval_batch_size=4 \\\n --overwrite_output_dir \\\n --predict_with_generate \\\n --save_strategy epoch \\\n --num_train_epochs 3\n# --do_eval \\\n# --validation_file path_to_jsonlines_file \\\n# --dataset_name cov-19 \\\n# --dataset_config_name en-es \\\n",
"_____no_output_____"
],
[
"!ls output/tst-translation",
"_____no_output_____"
]
],
[
[
"## Step 5. Test your newly fine-tuned translation model",
"_____no_output_____"
]
],
[
[
"from transformers import AutoTokenizer, AutoModelWithLMHead\n \ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\n\nmodel = AutoModelWithLMHead.from_pretrained(pretrained_model_name_or_path = 'output/tst-translation')",
"_____no_output_____"
],
[
"# line to make sure your model supports local inference\nmodel.eval()",
"_____no_output_____"
]
],
[
[
"Fifth, let's test it! Remember that, in using the default settings of only three epochs, your translation is probably not going to be SOTA. For achieving state of the art, (SOTA), we recommend migrating to Amazon SageMaker to scale up and out. Scaling up means moving your code to more advanced compute types, such as the [Amazon EC2 p4-series](https://aws.amazon.com/ec2/instance-types/) or [AWS Trainium](https://aws.amazon.com/machine-learning/trainium/). Scaling out means adding more compute resources by increasing from one to many instances. Harnessing the power of the AWS cloud enables training across enormous datasets, which can produce more accurate models!",
"_____no_output_____"
]
],
[
[
"input_sequences = ['about how long have these symptoms been going on?',\t\n'and all chest pain should be treated this way especially with your age\t',\n'and along with a fever\t',\n'and also needs to be checked your cholesterol blood pressure',\t\n'and are you having a fever now?\t',\n'and are you having any of the following symptoms with your chest pain',\t\n'and are you having a runny nose?',\t\n'and are you having this chest pain now?',\n'and besides do you have difficulty breathing',\n'and can you tell me what other symptoms are you having along with this?',\n'and does this pain move from your chest?',\n'and drink lots of fluids',\n'and how high has your fever been',\n'and i have a cough too',\n'and i have a little cold and a cough',\n'''and i'm really having some bad chest pain today''']\n\ntask_prefix = \"translate English to Spanish: \"\n\nfor i in input_sequences:\n input_ids = tokenizer('''{} {}'''.format(task_prefix, i), return_tensors='pt').input_ids\n outputs = model.generate(input_ids)\n print(i, tokenizer.decode(outputs[0], skip_special_tokens=True))\n",
"_____no_output_____"
],
[
"model.save_pretrained('my-tf-en-to-sp')",
"_____no_output_____"
],
[
"!tar -czf my_model.tar.gz my-tf-en-to-sp",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e72a8992223026f5493a711f0896aa483ab355df | 632,784 | ipynb | Jupyter Notebook | LS_DS_243_Model_Interpretation_Assignment.ipynb | bkrant/DS-Unit-2-Sprint-4-Practicing-Understanding | e2866c843aebacdfc3d1be03949ef19526a5a002 | [
"MIT"
] | null | null | null | LS_DS_243_Model_Interpretation_Assignment.ipynb | bkrant/DS-Unit-2-Sprint-4-Practicing-Understanding | e2866c843aebacdfc3d1be03949ef19526a5a002 | [
"MIT"
] | null | null | null | LS_DS_243_Model_Interpretation_Assignment.ipynb | bkrant/DS-Unit-2-Sprint-4-Practicing-Understanding | e2866c843aebacdfc3d1be03949ef19526a5a002 | [
"MIT"
] | null | null | null | 247.18125 | 252,634 | 0.666645 | [
[
[
"<a href=\"https://colab.research.google.com/github/bkrant/DS-Unit-2-Sprint-4-Practicing-Understanding/blob/master/LS_DS_243_Model_Interpretation_Assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"def load(): \n fremont_bridge = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'\n \n bicycle_weather = 'https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv'\n\n counts = pd.read_csv(fremont_bridge, index_col='Date', parse_dates=True, \n infer_datetime_format=True)\n\n weather = pd.read_csv(bicycle_weather, index_col='DATE', parse_dates=True, \n infer_datetime_format=True)\n\n daily = counts.resample('d').sum()\n daily['Total'] = daily.sum(axis=1)\n daily = daily[['Total']] # remove other columns\n\n weather_columns = ['PRCP', 'SNOW', 'SNWD', 'TMAX', 'TMIN', 'AWND']\n daily = daily.join(weather[weather_columns], how='inner')\n \n # Make a feature for yesterday's total\n daily['Total_yesterday'] = daily.Total.shift(1)\n daily = daily.drop(index=daily.index[0])\n \n return daily",
"_____no_output_____"
],
[
"def split(daily):\n # Hold out an \"out-of-time\" test set, from the last 100 days of data\n \n train = daily[:-100]\n test = daily[-100:]\n \n X_train = train.drop(columns='Total')\n y_train = train.Total\n\n X_test = test.drop(columns='Total')\n y_test = test.Total\n \"\"\n return X_train, X_test, y_train, y_test",
"_____no_output_____"
],
[
"def jake_wrangle(X): \n X = X.copy()\n\n # patterns of use generally vary from day to day; \n # let's add binary columns that indicate the day of the week:\n days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']\n for i, day in enumerate(days):\n X[day] = (X.index.dayofweek == i).astype(float)\n\n\n # we might expect riders to behave differently on holidays; \n # let's add an indicator of this as well:\n from pandas.tseries.holiday import USFederalHolidayCalendar\n cal = USFederalHolidayCalendar()\n holidays = cal.holidays('2012', '2016')\n X = X.join(pd.Series(1, index=holidays, name='holiday'))\n X['holiday'].fillna(0, inplace=True)\n\n\n # We also might suspect that the hours of daylight would affect \n # how many people ride; let's use the standard astronomical calculation \n # to add this information:\n def hours_of_daylight(date, axis=23.44, latitude=47.61):\n \"\"\"Compute the hours of daylight for the given date\"\"\"\n days = (date - pd.datetime(2000, 12, 21)).days\n m = (1. - np.tan(np.radians(latitude))\n * np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))\n return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.\n\n X['daylight_hrs'] = list(map(hours_of_daylight, X.index))\n\n\n # temperatures are in 1/10 deg C; convert to C\n X['TMIN'] /= 10\n X['TMAX'] /= 10\n\n # We can also calcuate the average temperature.\n X['Temp (C)'] = 0.5 * (X['TMIN'] + X['TMAX'])\n\n\n # precip is in 1/10 mm; convert to inches\n X['PRCP'] /= 254\n\n # In addition to the inches of precipitation, let's add a flag that \n # indicates whether a day is dry (has zero precipitation):\n X['dry day'] = (X['PRCP'] == 0).astype(int)\n\n\n # Let's add a counter that increases from day 1, and measures how many \n # years have passed. This will let us measure any observed annual increase \n # or decrease in daily crossings:\n X['annual'] = (X.index - X.index[0]).days / 365.\n\n return X",
"_____no_output_____"
],
[
"data = load()",
"_____no_output_____"
],
[
"def wrangle(X):\n X = X.copy()\n X = X.replace(-9999, 0)\n X = jake_wrangle(X)\n \n # DS1 DH\n X['PRCP_yesterday'] = X.PRCP.shift(1).fillna(X.PRCP.mean())\n X['Windchill'] = (((X['Temp (C)'] * (9/5) + 32) * .6215) + 34.74) - (35.75 * (X['AWND']** .16)) + (.4275 * (X['Temp (C)'])) * (X['AWND'] ** .16)\n X['Rl_Cold'] = (((X['Temp (C)'] * (9/5) + 32) - X['Windchill']) -32) * (5/9)\n X['TMIN_squared'] = X['TMIN'] **2\n \n months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']\n for i, month in enumerate(months):\n X[month] = (X.index.month == i+1).astype(float)\n \n # DS3 JD\n X['light_rain'] = (X['PRCP'] > 0) & (X['PRCP'] < 0.10)\n X['moderate_rain'] = (X['PRCP'] >= 0.1) & (X['PRCP'] < 0.30)\n X['heavy_rain'] = (X['PRCP'] >= 0.30)\n X['weekend_day'] = (X['Sat'] == 1) | (X['Sun'] == 1)\n\n return X",
"_____no_output_____"
],
[
"data.describe(include='number')",
"_____no_output_____"
],
[
"# Split data into train and test\nX_train, X_test, y_train, y_test = split(data)\n\n# Do the same wrangling to X_train and X_test\nX_train = wrangle(X_train)\nX_test = wrangle(X_test)",
"_____no_output_____"
],
[
"X_train.describe(exclude='number')",
"_____no_output_____"
],
[
"from scipy.stats import randint\nfrom sklearn.model_selection import RandomizedSearchCV\nfrom xgboost import XGBRegressor\n\nparam_distributions = {\n 'n_estimators': randint(50, 500), \n 'max_depth': randint(1, 5)\n}\n\nsearch = RandomizedSearchCV(\n estimator=XGBRegressor(n_jobs=-1, random_state=42), \n param_distributions=param_distributions, \n n_iter=50, \n scoring='neg_mean_absolute_error', \n n_jobs=-1, \n cv=3, \n verbose=10, \n return_train_score=True, \n random_state=42\n)\n\nsearch.fit(X_train, y_train)",
"Fitting 3 folds for each of 50 candidates, totalling 150 fits\n"
],
[
"print('Mean Absolute Error with Cross-Validation:')\nprint(f'Predictions are off by {int(-search.best_score_)} bicyclists per day, on average')",
"Mean Absolute Error with Cross-Validation:\nPredictions are off by 265 bicyclists per day, on average\n"
],
[
"plt.figure(figsize=(10,10))\nimportances = pd.Series(search.best_estimator_.feature_importances_, X_train.columns)\nimportances.sort_values().plot.barh(color='blue');",
"_____no_output_____"
],
[
"pip install eli5\npip install pdpbox\npip install shap",
"_____no_output_____"
],
[
"import eli5\nfrom eli5.sklearn import PermutationImportance\n\npermuter = PermutationImportance(search.best_estimator_, scoring='neg_mean_absolute_error', cv='prefit', \n n_iter=2, random_state=42)\n\npermuter.fit(X_train, y_train)",
"_____no_output_____"
],
[
"feature_names = X_train.columns.tolist()\neli5.show_weights(permuter, top=None, feature_names=feature_names)",
"_____no_output_____"
],
[
"permuter.fit(X_test, y_test)\nfeature_names = X_test.columns.tolist()\neli5.show_weights(permuter, top=None, feature_names=feature_names)",
"_____no_output_____"
],
[
"print('Shape before removing features:', X_train.shape)",
"Shape before removing features: (963, 19)\n"
],
[
"mask = permuter.feature_importances_ > 0\nfeatures = X_train.columns[mask]\nX_train = X_train[features]\nprint('Shape after removing features:', X_train.shape)",
"Shape after removing features: (963, 17)\n"
],
[
"param_distributions = {\n 'n_estimators': randint(50, 500), \n 'max_depth': randint(1, 5)\n}\n\nsearch = RandomizedSearchCV(\n estimator=XGBRegressor(n_jobs=-1, random_state=42), \n param_distributions=param_distributions, \n n_iter=50, \n scoring='neg_mean_absolute_error', \n n_jobs=-1, \n cv=3, \n verbose=10, \n return_train_score=True, \n random_state=42\n)\n\nsearch.fit(X_train, y_train)",
"Fitting 3 folds for each of 50 candidates, totalling 150 fits\n"
],
[
"print('Mean Absolute Error with Cross-Validation:')\nprint(f'Predictions are off by {int(-search.best_score_)} bicyclists per day, on average')",
"Mean Absolute Error with Cross-Validation:\nPredictions are off by 265 bicyclists per day, on average\n"
],
[
"from pdpbox.pdp import pdp_isolate, pdp_plot\n\nfeature = 'TMAX'\n\nisolated = pdp_isolate(\n model=search.best_estimator_, \n dataset=X_test, \n model_features=X_test.columns, \n feature=feature\n)\n\npdp_plot(isolated, feature_name=feature);",
"_____no_output_____"
],
[
"from pdpbox.pdp import pdp_interact, pdp_interact_plot\n\nfeatures = ['TMAX', 'PRCP']\n\ninteraction = pdp_interact(\n model=search.best_estimator_, \n dataset=X_test, \n model_features=X_test.columns, \n features=features\n)\n\npdp_interact_plot(interaction, plot_type='grid', feature_names=features);\nplt.figure(figsize=(20,20));",
"_____no_output_____"
],
[
"X_test.head(2)",
"_____no_output_____"
],
[
"data_for_prediction = X_test.iloc[1:2,:]\ndata_for_prediction",
"_____no_output_____"
],
[
"search.best_estimator_.predict(data_for_prediction)",
"_____no_output_____"
],
[
"import shap\nshap.initjs()\n\nexplainer = shap.TreeExplainer(search.best_estimator_)\nshap_values = explainer.shap_values(data_for_prediction)\nshap.force_plot(explainer.expected_value, shap_values, data_for_prediction)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72a9d2f8dc41ac19151dd9e999577f77150e25f | 79,729 | ipynb | Jupyter Notebook | nlu/colab/component_examples/named_entity_recognition_(NER)/NLU_ner_CONLL_2003_5class_example.ipynb | fcivardi/spark-nlp-workshop | aedb1f5d93577c81bc3dd0da5e46e02586941541 | [
"Apache-2.0"
] | 687 | 2018-09-07T03:45:39.000Z | 2022-03-20T17:11:20.000Z | nlu/colab/component_examples/named_entity_recognition_(NER)/NLU_ner_CONLL_2003_5class_example.ipynb | fcivardi/spark-nlp-workshop | aedb1f5d93577c81bc3dd0da5e46e02586941541 | [
"Apache-2.0"
] | 89 | 2018-09-18T02:04:42.000Z | 2022-02-24T18:22:27.000Z | nlu/colab/component_examples/named_entity_recognition_(NER)/NLU_ner_CONLL_2003_5class_example.ipynb | fcivardi/spark-nlp-workshop | aedb1f5d93577c81bc3dd0da5e46e02586941541 | [
"Apache-2.0"
] | 407 | 2018-09-07T03:45:44.000Z | 2022-03-20T05:12:25.000Z | 79,729 | 79,729 | 0.827453 | [
[
[
"\n\n[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/component_examples/named_entity_recognition_(NER)/NLU_ner_CONLL_2003_5class_example.ipynb)\n\n\nNamed entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:\n<br>\n<br>\n[ORG **U.N.** ] official [PER **Ekeus** ] heads for [LOC **Baghdad** ] . \n<br>\n\nhttps://www.aclweb.org/anthology/W03-0419.pdf \nCoNLL-2003 is a NER dataset that available in English and German. NLU provides pretrained languages for both of these languages.\n\nIt features **5 classes** of tags, **LOC (location)** , **ORG(Organisation)**, **PER(Persons)** and the forth which describes all the named entities which do not belong to any of the thre previously mentioned tags **(MISC)**. \nThe fifth class **(O)** is used for tokens which belong to no named entity.\n\n\n\n\n\n|Tag | \tDescription |\n|------|--------------|\n|PER | A person like **Jim** or **Joe** |\n|ORG | An organisation like **Microsoft** or **PETA**|\n|LOC | A location like **Germany**|\n|MISC | Anything else like **Playstation** |\n|O| Everything that is not an entity. | \n\n\nThe shared task of [CoNLL-2003 concerns](https://www.clips.uantwerpen.be/conll2003/) language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. For each language, additional information (lists of names and non-annotated data) will be supplied as well. The challenge for the participants is to find ways of incorporating this information in their system.\n\n\n\n\n\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash\n \n\nimport nlu",
"--2021-05-01 21:59:41-- https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/scripts/colab_setup.sh\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 1671 (1.6K) [text/plain]\nSaving to: ‘STDOUT’\n\n\r- 0%[ ] 0 --.-KB/s \r- 100%[===================>] 1.63K --.-KB/s in 0s \n\n2021-05-01 21:59:41 (36.2 MB/s) - written to stdout [1671/1671]\n\nInstalling NLU 3.0.0 with PySpark 3.0.2 and Spark NLP 3.0.1 for Google Colab ...\n\u001b[K |████████████████████████████████| 204.8MB 74kB/s \n\u001b[K |████████████████████████████████| 153kB 46.7MB/s \n\u001b[K |████████████████████████████████| 204kB 22.9MB/s \n\u001b[K |████████████████████████████████| 204kB 43.2MB/s \n\u001b[?25h Building wheel for pyspark (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
]
],
[
[
"# NLU makes NER easy. \n\nYou just need to load the NER model via ner.load() and predict on some dataset. \nIt could be a pandas dataframe with a column named text or just an array of strings.",
"_____no_output_____"
]
],
[
[
"import nlu \n\nexample_text = [\"A person like Jim or Joe\", \n \"An organisation like Microsoft or PETA\",\n \"A location like Germany\",\n \"Anything else like Playstation\", \n \"Person consisting of multiple tokens like Angela Merkel or Donald Trump\",\n \"Organisations consisting of multiple tokens like JP Morgan\",\n \"Locations consiting of multiple tokens like Los Angeles\", \n \"Anything else made up of multiple tokens like Super Nintendo\",]\n\nnlu.load('ner').predict(example_text)",
"onto_recognize_entities_sm download started this may take some time.\nApprox size to download 160.1 MB\n[OK!]\n"
],
[
"text = [\"Barclays misled shareholders and the public about one of the biggest investments in the bank's history, a BBC Panorama investigation has found.\",\n\"The bank announced in 2008 that Manchester City owner Sheikh Mansour had agreed to invest more than £3bn.\",\n\"But the BBC found that the money, which helped Barclays avoid a bailout by British taxpayers, actually came from the Abu Dhabi government.\",\n\"Barclays said the mistake in its accounts was 'a drafting error'.\",\n\"Unlike RBS and Lloyds TSB, Barclays narrowly avoided having to request a government bailout late in 2008 after it was rescued by £7bn worth of new investment, most of which came from the Gulf states of Qatar and Abu Dhabi.\",\n\"The S&P 500's price to earnings multiple is 71% higher than Apple's, and if Apple were simply valued at the same multiple, its share price would be $840, which is 52% higher than its current price.\",\n\"Alice has a cat named Alice and also a dog named Alice and also a parrot named Alice, it is her favorite name!\"\n] + example_text\nner_df = nlu.load('ner').predict(text, output_level= 'chunk')",
"onto_recognize_entities_sm download started this may take some time.\nApprox size to download 160.1 MB\n[OK!]\n"
],
[
"ner_df",
"_____no_output_____"
]
],
[
[
"## Lets explore our data which the predicted NER tags and visalize them! \n\nWe specify [1:] so we dont se the count for the O-tag wich is the most common, since most words in a sentence are not named entities and thus not part of a chunk",
"_____no_output_____"
]
],
[
[
"ner_df['entities'].value_counts()[1:].plot.bar(title='Occurence of Named Entity tokens in dataset')",
"_____no_output_____"
],
[
"ner_type_to_viz = 'LOC'\nner_df[ner_df.entities_class == ner_type_to_viz]['entities'].value_counts().plot.bar(title='Most often occuring LOC labeled tokens in the dataset')",
"_____no_output_____"
],
[
"ner_type_to_viz = 'ORG'\nner_df[ner_df.entities_class == ner_type_to_viz]['entities'].value_counts().plot.bar(title='Most often occuring ORG labeled tokens in the dataset')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e72ab0d7831946cabef3ff5be4f6fd14ca255322 | 105,485 | ipynb | Jupyter Notebook | docs/Prelim Analysis.ipynb | Cemlyn/CreditRiskVAE | 6f017cedcaafa3dc334b92ec67013fdb76c36d40 | [
"MIT"
] | null | null | null | docs/Prelim Analysis.ipynb | Cemlyn/CreditRiskVAE | 6f017cedcaafa3dc334b92ec67013fdb76c36d40 | [
"MIT"
] | null | null | null | docs/Prelim Analysis.ipynb | Cemlyn/CreditRiskVAE | 6f017cedcaafa3dc334b92ec67013fdb76c36d40 | [
"MIT"
] | null | null | null | 35.469065 | 152 | 0.291444 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_csv('./data/application_train.csv')\ndf.head()",
"_____no_output_____"
],
[
"df.iloc[:,10:30]",
"_____no_output_____"
],
[
"# Keep certain columns for encoding\nkeep_cols = ['TARGET','NAME_CONTRACT_TYPE','CODE_GENDER',\n 'FLAG_OWN_CAR','FLAG_OWN_REALTY','CNT_CHILDREN',\n 'AMT_INCOME_TOTAL','AMT_CREDIT','AMT_ANNUITY','AMT_GOODS_PRICE','NAME_TYPE_SUITE','NAME_INCOME_TYPE',\n 'NAME_EDUCATION_TYPE','NAME_FAMILY_STATUS','NAME_HOUSING_TYPE','DAYS_BIRTH','DAYS_EMPLOYED','DAYS_REGISTRATION','OCCUPATION_TYPE']\n\n# Encoding the variables\n\n\ndf = df[keep_cols].copy()",
"_____no_output_____"
],
[
"num_cols = df.select_dtypes('number').columns.tolist()\ncat_cols = df.select_dtypes('object').columns.tolist()\n\nprint(f'{len(num_cols)} numerical columns, {len(cat_cols)} categorical columns')",
"9 numerical columns, 10 categorical columns\n"
],
[
"df.shape",
"_____no_output_____"
],
[
"# encoding the categorical columns\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import OneHotEncoder\n\nle = LabelEncoder()\ncat_df = df[cat_cols].copy().fillna('missing')\n\nfor col in cat_df:\n cat_df.loc[:,col] = le.fit_transform(cat_df[col])\ncat_df",
"_____no_output_____"
],
[
"df.loc[:,cat_cols] = cat_df\ndf = df.fillna(-99)\ndf.to_csv('./data/vae_train.csv',index=False)",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"### Calculate Mutual Information Values",
"_____no_output_____"
]
],
[
[
"num_cols = df.select_dtypes('number').columns.tolist()\ncat_cols = df.select_dtypes('object').columns.tolist()\n\nprint(f'{len(num_cols)} numerical columns, {len(cat_cols)} categorical columns')",
"106 numerical columns, 16 categorical columns\n"
],
[
"cat_cols",
"_____no_output_____"
],
[
"# encoding the categorical columns\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import OneHotEncoder\n\nle = LabelEncoder()\ncat_df = df[cat_cols].copy().fillna('missing')\n\nfor col in cat_df:\n cat_df.loc[:,col] = le.fit_transform(cat_df[col])\ncat_df",
"_____no_output_____"
],
[
"df.loc[:,cat_cols] = cat_df",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.iloc[:,:20]",
"_____no_output_____"
],
[
"keep_cols = ['SK_ID_CURR','TARGET','NAME_CONTRACT_TYPE','CODE_GENDER',\n 'FLAG_OWN_CAR','FLAG_OWN_REALTY','CNT_CHILDREN',\n 'AMT_INCOME_TOTAL','AMT_CREDIT','AMT_ANNUITY','AMT_GOODS_PRICE','NAME_TYPE_SUITE']\ndf[keep_cols]",
"_____no_output_____"
],
[
"df.isna().sum()",
"_____no_output_____"
],
[
"df = df.fillna(-999)\ndf.shape",
"_____no_output_____"
],
[
"from sklearn.feature_selection import mutual_info_classif\n\nX,y = df.drop(columns=['SK_ID_CURR','TARGET']),df['TARGET']\nX_cols = X.columns.tolist()\nX,y = X.values,y.values\n\nmi = mutual_info_classif(X,y)\nmi_df = pd.DataFrame(mi,index=X_cols,columns=['mi'])\nmi_df.sort_values(by=['mi'],ascending=False)",
"_____no_output_____"
],
[
"mi_df.sort_values(by=['mi'],ascending=False)[10:60]",
"_____no_output_____"
],
[
"mi_df.sort_values(by=['mi'],ascending=False).to_csv('mutual_information_scores.csv')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72ab5198dafff75a24c5dfd8f9199f35f248abe | 349,408 | ipynb | Jupyter Notebook | FCD_M2_1_Numpy_avancado.ipynb | zavaleta/Fundamentos_DS | ebe46d33e4c2abefca05c219a7f53eda01eba85f | [
"CC0-1.0"
] | 7 | 2021-02-05T14:40:23.000Z | 2022-03-29T01:24:32.000Z | .ipynb_checkpoints/FCD_M2_1_Numpy_avancado-checkpoint.ipynb | zavaleta/Fundamentos_DS | ebe46d33e4c2abefca05c219a7f53eda01eba85f | [
"CC0-1.0"
] | null | null | null | .ipynb_checkpoints/FCD_M2_1_Numpy_avancado-checkpoint.ipynb | zavaleta/Fundamentos_DS | ebe46d33e4c2abefca05c219a7f53eda01eba85f | [
"CC0-1.0"
] | 9 | 2021-02-04T19:37:36.000Z | 2021-07-13T19:23:45.000Z | 601.390706 | 139,232 | 0.945061 | [
[
[
"\n# Fundamentos de Ciência de Dados",
"_____no_output_____"
],
[
"---\n[](https://zenodo.org/badge/latestdoi/335308405)",
"_____no_output_____"
],
[
"---\n# PPGI/UFRJ 2020.3\n## Prof Sergio Serra e Jorge Zavaleta",
"_____no_output_____"
],
[
"---\n# Módulo 2 - Numpy Avançado - Tensores",
"_____no_output_____"
],
[
"## Tensor\n\n>Um **tensor** é uma matriz **ndimensional** (narray) com um tipo uniforme(dtype). Todos os tensores são imutáveis como os números e strings do Python, não se pode atualizar o conteúdo de um tensor, apenas criar um novo.",
"_____no_output_____"
]
],
[
[
"# importando o pacote numpy\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Tensores básicos",
"_____no_output_____"
],
[
"> ### Tensor escalar ou de categoria \"0\".\n> Um escalar contém um único valor e nenhum \"eixo\".",
"_____no_output_____"
]
],
[
[
"# tensor escalar\nrank_0_tensor = np.array(4,dtype=np.int64)\nprint('T:',rank_0_tensor)\nprint('R:',rank_0_tensor.shape)",
"_____no_output_____"
]
],
[
[
"> ### Tensor de \"vetor\" ou categoria \"1\"\n> Lista de valores. Este tensor tem um eixo.",
"_____no_output_____"
]
],
[
[
"# tensor de vetor ou categoria 1\nrank_1_tensor = np.array([2,3,4],dtype=np.float64)\nprint('T:',rank_1_tensor)\nprint('R:',rank_1_tensor.shape) # dimensão da matriz/vetor",
"_____no_output_____"
]
],
[
[
"> ### Tensor de \"Matriz\" ou categoria 2\n> O tensor matriz tem dois eixos.",
"_____no_output_____"
]
],
[
[
"# tensor de matriz ou categoria 2\nrank_2_tensor = np.array([[1,2],[3,4],[5,6]],dtype=np.float64)\nprint('T:',rank_2_tensor)\nprint('R:',rank_2_tensor.shape) # dimensão da matriz",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"# tensor de matriz pxqxr ou categoria 3\nrank_3_tensor = np.array( [[[0, 1, 2, 3, 4],[5, 6, 7, 8, 9]],\n [[10, 11, 12, 13, 14],[15, 16, 17, 18, 19]],\n [[20, 21, 22, 23, 24],[25, 26, 27, 28, 29]]],dtype=np.float64)\n\nprint('T:',rank_3_tensor)\nprint('R:',rank_3_tensor.shape) # dimensão da matriz",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"> ### Sobre as formas:\n\n> - **Forma** : O comprimento (número de elementos) de cada um dos eixos de um tensor.\n> - **Rank** : Número de eixos tensores. Um escalar tem posto 0, um vetor tem posto 1, uma matriz tem posto 2.\n> - **Eixo ou dimensão** : uma dimensão particular de um tensor.\n> - **Tamanho** : o número total de itens no tensor.",
"_____no_output_____"
],
[
"## Aritmética de tensores",
"_____no_output_____"
],
[
"### Adição de tensores\n> Os tensores devem pertencer à mesma categoria e ter as mesmas dimensões. A soma de tensores é outro tensor. O elemento solução é o resultado da operação entre elementos correspondentes dos tensores.",
"_____no_output_____"
]
],
[
[
"# exemplo de adicao de tensores\n# define o tensor A\nA = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor A:')\nprint(A)\nprint('D:',A.shape)\n# define o tensor B\nB = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor B:')\nprint(B)\nprint('D:',B.shape)\n# adicao do tensores A + B\nC = A + B\nprint('C = A + B:')\nprint(C)\nprint('D:',C.shape)",
"_____no_output_____"
]
],
[
[
"### Subtração de Tensores\n> Os tensores devem pertencer à mesma categoria e ter as mesmas dimensões. A substração de tensores é outro tensor. O elemento solução é o resultado da operação entre elementos correspondentes dos tensores.",
"_____no_output_____"
]
],
[
[
"# tensor subtraction\n# define tensor A\nA = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor A:')\nprint(A)\nprint('D:',A.shape)\n# define o tensor B\nB = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,26,26], [27,28,28]]])\nprint('Tensor B:')\nprint(B)\nprint('D:',B.shape)\n# subtração de tensores C = A - B\nC = A - B\nprint('Tensor C = A - B:')\nprint(C)\nprint('D:',C.shape)",
"_____no_output_____"
]
],
[
[
"### Multiplicação de Tensores (hadamart*)\n> Os tensores devem pertencer à mesma categoria e ter as mesmas dimensões. O produto de tensores é outro tensor. O elemento solução é o resultado da operação entre elementos correspondentes dos tensores.",
"_____no_output_____"
]
],
[
[
"# define o tensor A\nA = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor A:')\nprint('D:',A.shape)\nprint(A)\n# define tensor B\nB = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor B:')\nprint('D:',B.shape)\nprint(B)\n# multiplicação de tensores C = A * B\nC = A * B \nprint('Tensor C = A * B:')\nprint('D:',C.shape)\nprint(C)",
"_____no_output_____"
]
],
[
[
"### Divisão de Tensores\n> Os tensores devem pertencer à mesma categoria e ter as mesmas dimensões. A divisão de tensores é outro tensor. O elemento solução é o resultado da operação entre elementos correspondentes dos tensores.",
"_____no_output_____"
]
],
[
[
"# define o tensor A\nA = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor A:')\nprint('D:',A.shape)\nprint(A)\n# define o tensor B\nB = np.array([[[1,2,3], [4,5,6], [7,8,9]],\n [[11,12,13], [14,15,16], [17,18,19]],\n [[21,22,23], [24,25,26], [27,28,29]]])\nprint('Tensor B:')\nprint('D:',B.shape)\nprint(B)\n# divide tensores A/B\nC = A / B\nprint('Tensor C = A/B:')\nprint('D:',C.shape)\nprint(C)",
"_____no_output_____"
]
],
[
[
"### Produto de Tensores ($\\otimes$)\n> O produto de tensores de dimensôes diferentes é otro tensor de dimensão (dim(T1)+dim(T2)). Seja o tensor A de **q** dimensões e o tensor B de **r** dimensões. O produto dos tensores C = $A \\otimes B$ de **q+r** dimensões. O numpy usa a função **tensordot()** para calcular o tensor produto sobre os eixos especificados.\n\n",
"_____no_output_____"
]
],
[
[
"# produto de tensores\n# define tensor A (vetor)\nA = np.array([1,2])\nprint('Tensor A:')\nprint('D:',A.shape)\nprint(A)\n# define tensor B (vetor)\nB = np.array([3,4])\nprint('Tensor B:')\nprint('D:',B.shape)\nprint(B)\n# calculate tensor produto: C = A x B\nC = np.tensordot(A, B, axes=0) \nprint('Tensor C:')\nprint('D:',C.shape)\nprint(C)",
"_____no_output_____"
],
[
"T1 = np.arange(10).reshape(5,2)\nT2 = np.arange(4,14).reshape(2,5)\nprint('T1:')\nprint(T1.shape)\nprint(T1)\nprint('T2:')\nprint(T2.shape)\nprint(T2)\n# calcular o produto tensorial\nR = np.tensordot(T1,T2, axes=1) # axe = 1 (produto de matriz)!!!\nprint('R:')\nprint(R.shape)\nprint(R)",
"_____no_output_____"
],
[
"# produto de tensores\n# define o tensor A\nA = np.array([[1,2],[2,1]])\nprint('Tensor A:')\nprint('D:',A.shape)\nprint(A)\n# define o tensor B\nB = np.array([[3,4],[6,8]])\nprint('Tensor B:')\nprint('D:',B.shape)\nprint(B)\n# calculate tensor produto \nC = np.tensordot(A, B, axes=1)\nprint('Tensor C = A x B:')\nprint('D:',C.shape)\nprint(C)",
"_____no_output_____"
]
],
[
[
"## TensorFlow\n>  \n> **TensorFlow** é uma biblioteca de aprendizado profundo (deep learning) desenvolvida pela Google. Ela fornece primitivas para\nfunções defininda em **tensores** e cálculos automaticos de suas operações derivadas.",
"_____no_output_____"
],
[
"### O que é um tensor?\n> Formalmente, os tensores são aplicações multilineares de espaços vetoriais para os números reais ( $V$ espaço vetorial e $V^{*}$ espaço dual).\n> \n> - Um **escalar** é um tensor : $ f:R \\rightarrow R, f(e_{1}) = c $\n> - Um **vetor** é um tensor : $ f:R^{n} \\rightarrow R, f(e_{i}) = v_{i} $\n> - Uma **matriz** é um tensor : $ f:R^{n}\\times R^{m} \\rightarrow R, f(e_{i},e_{j}) = A_{ij} $\n\n> Deve-se ter uma **base fixa**, e como consequência, **um tensor pode ser representado como uma matriz multidimensional de números**.",
"_____no_output_____"
],
[
"### Instalação do TensorFlow\n> - Usando o anaconda prompt: ```pip install --upgrade tensorflow```\n> - Diretamente no jupyter : ```!pip install --upgrade tensorflow```\n> - Também pode ser instalado em um ambiente vitual (virtualenv) criado para trabalhar com tensorFlow (Usar o anaconda navigator para criar o ambiente virtual ou diretamente na linha de comnados usando o anaconda prompt)\n\n> **OBS** - A instalação depende dos recursos do computador como CPU e placas gráficas (GPU)!!!\n\n> Verificando a instalação: \n>> - Usando o anaconda prompt (1): ```python```\n>> - (2): ```import tensorflow as tf```\n>> - (3): ```print(tf.__version__)```\n>> - (4): ```2.2.0``` -> versão instalada",
"_____no_output_____"
],
[
"\n### Numpy vs.TensorFlow\n> TensorFlow é Numpy são bastante semelhantes (Ambos são bilbliotecas de matriz N-dimensionais).\n> Numpy é compatível com N-array, mas não oferece métodos para criar funções de tensores e calcular automaticamente as operações derivadas (+ sem suporte de GPU - *Graphics Processing Unit* )",
"_____no_output_____"
],
[
"### Exemplo em Numpy",
"_____no_output_____"
]
],
[
[
"# definir dois tensores em numpy\nTa = np.zeros((2,2)) # tensor A\nTb = np.ones((2,2)) # Tensor B\nprint('DTa:',Ta.shape) # dimensão do tensor Ta\nprint('Ta:')\nprint(Ta)\nprint('DTb:',Tb.shape) # dimensão do tensor Tb\nprint('Tb:')\nprint(Tb)\nsTb = np.sum(Tb, axis=1) # soma do tensor Tb, axis =1 (vetor)\nprint('Soma Tb:>',sTb)\nprint('rD:>',np.reshape(Ta, (1,4))) # redimensiona o Tensor 2x2 para 1x4\nprint('rD:>',np.reshape(Ta, (4,1))) # redimensiona o Tensor 2x2 para 4x1",
"_____no_output_____"
]
],
[
[
"### Exemplo em TensorFlow",
"_____no_output_____"
]
],
[
[
"# usando o tensorflow\nimport tensorflow as tf # importar a biblioteca\n#\nsess = tf.compat.v1.InteractiveSession() # ativa a sessão para tensorflow 2.0 ou inferior\n#\nta = tf.zeros((2,2), dtype = tf.float32) # cria o tenso a\nprint('Tensor A:')\nprint(ta)\nprint()\ntb = tf.ones((2,2), dtype = tf.float32) # cria o tensor b\nprint('Tensor B:')\nprint(tb)\nprint()\n#\nrTb = tf.math.reduce_sum(tb, axis=0, keepdims=1)\nprint('reduce_sum tb:>',rTb)\nprint()\nsa = tf.shape(ta) # verifica a dimensão do tensor ta\nprint('Shape ta:>',sa)\nprint()\nprint('Shape ta:>',ta.get_shape()) # retorna a dimensão do tensor ta\nprint()\n#\nres_ta =tf.reshape(ta, (1, 4)) # redimensiona o tensor ta de 2x2 para 1x4\nprint('res_ta:>',res_ta)\n#\nsess.close() # fecha a sessão",
"_____no_output_____"
]
],
[
[
"---\n#### Fudamentos para Ciência Dados © Copyright 2021, Sergio Serra & Jorge Zavaleta",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e72ab5f71f270db4bebebdb0f3156091e6d64bfb | 123,337 | ipynb | Jupyter Notebook | 4_Applied Text Mining in Python/Assignment 1.ipynb | Pankaj-Ra/Coursera-Data-Science-in-Python | 0440439cbdebd973cb33fc039dc3ae268ce128a2 | [
"MIT"
] | null | null | null | 4_Applied Text Mining in Python/Assignment 1.ipynb | Pankaj-Ra/Coursera-Data-Science-in-Python | 0440439cbdebd973cb33fc039dc3ae268ce128a2 | [
"MIT"
] | null | null | null | 4_Applied Text Mining in Python/Assignment 1.ipynb | Pankaj-Ra/Coursera-Data-Science-in-Python | 0440439cbdebd973cb33fc039dc3ae268ce128a2 | [
"MIT"
] | null | null | null | 61.300696 | 1,092 | 0.569618 | [
[
[
"---\n\n_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._\n\n---",
"_____no_output_____"
],
[
"# Assignment 1\n\nIn this assignment, you'll be working with messy medical data and using regex to extract relevant infromation from the data. \n\nEach line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.\n\nThe goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates. \n\nHere is a list of some of the variants you might encounter in this dataset:\n* 04/20/2009; 04/20/09; 4/20/09; 4/3/09\n* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;\n* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009\n* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009\n* Feb 2009; Sep 2009; Oct 2010\n* 6/2008; 12/2009\n* 2009; 2010\n\nOnce you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order accoring to the following rules:\n* Assume all dates in xx/xx/xx format are mm/dd/yy\n* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)\n* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).\n* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).\n* Watch out for potential typos as this is a raw, real-life derived dataset.\n\nWith these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.\n\nFor example if the original series was this:\n\n 0 1999\n 1 2010\n 2 1978\n 3 2015\n 4 1985\n\nYour function should return this:\n\n 0 2\n 1 4\n 2 0\n 3 1\n 4 3\n\nYour score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.\n\n*This function should return a Series of length 500 and dtype int.*",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport re\n\ndoc = []\nwith open('dates.txt') as file:\n for line in file:\n doc.append(line)\n\ndf = pd.Series(doc)\nlist(df)\n#df.size",
"_____no_output_____"
],
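[
"# Illustrative sketch (added; not part of the assignment solution): how\n# Series.str.extractall with named groups pulls date components out of text.\n# The two demo strings below are made up for illustration only.\ndemo = pd.Series(['seen on 04/20/2009 at clinic', 'follow-up on 3/11/71'])\npattern = r'(?P<month>\\d{1,2})/(?P<day>\\d{1,2})/(?P<year>\\d{2,4})'\ndemo.str.extractall(pattern)",
"_____no_output_____"
],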
[
"'''\ndef date_sorter1():\n \n # Your code here\n dates1 = []\n dates2 = []\n dates3 = []\n for line in df:\n match = re.search(r'(?:\\d{1,2} )?(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*?([?:., -]*)(?:\\d{2}?(th|st|nd)?([?:., -]*))?(\\d{2,4})', line)\n if match:\n dates1.append(match.group())\n continue\n match = re.search(r'\\d{1,2}[/-]\\d{1,2}[/-]\\d{2,4}', line)\n if match:\n dates2.append(match.group())\n continue\n match = re.search(r'(?:\\d{1,2}[/-])?(?:\\d{1,2}[/-])?(\\d{4})', line)\n if match:\n dates3.append(match.group())\n else:\n dates.append('Null')\n #dates = pd.Series(dates)\n return dates\n''' ",
"_____no_output_____"
],
[
"def date_sorter(): \n # Your code here\n # Full date\n \n dates = df.str.extractall(r'(?P<dates>(?P<month>\\d{1,2})[/|-](?P<day>([0-2]?[0-9])|([3][01]))[/|-](?P<year>\\d{2,4}))')\n index_left = ~df.index.isin([x[0] for x in dates.index])\n dates = dates.append(df[index_left].str.extractall(r'(?P<dates>(?P<day>\\d{1,2})[?:., -](?P<month>(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*)[?:., -]? (?P<year>\\d{2,4}))'))\n index_left = ~df.index.isin([x[0] for x in dates.index])\n dates = dates.append(df[index_left].str.extractall(r'(?P<dates>(?P<month>(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*)[?:., -]? (?P<day>\\d{1,2})[?:., -]? (?P<year>\\d{2,4}))'))\n index_left = ~df.index.isin([x[0] for x in dates.index])\n \n \n del dates[3]\n del dates[4]\n \n \n # Without day\n dates_without_day = df[index_left].str.extractall(r'(?P<dates>(?P<month>(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*)[?:., -]? (?P<year>\\d{4}))')\n dates_without_day = dates_without_day.append(df[index_left].str.extractall(r'(?P<dates>(?P<month>\\d{1,2})/(?P<year>\\d{4}))'))\n dates_without_day['day'] = 1\n dates = dates.append(dates_without_day)\n index_left = ~df.index.isin([x[0] for x in dates.index])\n \n # Only year\n dates_only_year = df[index_left].str.extractall(r'(?P<dates>(?P<year>\\d{4}))')\n dates_only_year['day'] = 1\n dates_only_year['month'] = 1\n dates = dates.append(dates_only_year)\n index_left = ~df.index.isin([x[0] for x in dates.index])\n \n # Year\n dates['year'] = dates['year'].apply(lambda x: '19' + x if len(x) == 2 else x)\n dates['year'] = dates['year'].apply(lambda x: str(x))\n \n # Month\n dates['month'] = dates['month'].apply(lambda x: x[1:] if type(x) is str and x.startswith('0') else x)\n \n month = dict({'Jan': 1, 'January': 1, 'Janaury': 1, 'Feb': 2, 'February': 2, 'Mar': 3, 'March': 3, 'Apr': 4, 'April': 4, 'May': 5,\n 'Jun': 6, 'June': 6, 'Jul': 7, 'July': 7, 'Aug': 8, 'August': 8, 'September': 9, 'Sep': 9, 'Oct': 10,\n 'October': 10, 'Nov': 11, 'November': 11, 'Dec': 12, 'December': 12, 'Decemeber': 12})\n \n dates.replace({\"month\": month}, inplace=True)\n dates['month'] = dates['month'].apply(lambda x: str(x))\n \n # Day\n dates['day'] = dates['day'].apply(lambda x: str(x))\n \n # Cleaned date\n dates['date'] = dates['month'] + '/' + dates['day'] + '/' + dates['year']\n \n dates['date'] = pd.to_datetime(dates['date'])\n \n dates.sort_values(by='date', inplace=True)\n sorted_dates = pd.Series(list(dates.index.labels[0]))\n \n \n return sorted_dates",
"_____no_output_____"
],
[
"dates = date_sorter()\n#dates1 = date_sorter1()\n#dates2 = [x for x in dates if x not in dates1]\ndates",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e72ab80cf8b8db045675d14aaf90c94d8d7bf260 | 117,279 | ipynb | Jupyter Notebook | Find_Line.ipynb | cy6253/Udacity_SDC-Project1 | 045b931c1a50abd9fc96a717940ab120475c276c | [
"MIT"
] | null | null | null | Find_Line.ipynb | cy6253/Udacity_SDC-Project1 | 045b931c1a50abd9fc96a717940ab120475c276c | [
"MIT"
] | null | null | null | Find_Line.ipynb | cy6253/Udacity_SDC-Project1 | 045b931c1a50abd9fc96a717940ab120475c276c | [
"MIT"
] | null | null | null | 723.944444 | 105,744 | 0.952558 | [
[
[
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2",
"_____no_output_____"
],
[
"image = mpimg.imread('test_images/solidWhiteCurve.jpg')\nysize = image.shape[0]\nxsize = image.shape[1]\nprint(ysize, xsize)",
"540 960\n"
],
[
"color_select = np.copy(image)\nline_image = np.copy(image)\n\nred_threshold = 200\ngreen_threshold = 200\nblue_threshold = 200\nrgb_threshold = [red_threshold ,green_threshold ,blue_threshold]\n\nleft_bottom = [120,540]\nright_bottom = [900,540]\napex = [480,300]",
"_____no_output_____"
],
[
"color_threshold = (image[:,:,0] < rgb_threshold[0])\\\n|(image[:,:,1] < rgb_threshold[1])\\\n|(image[:,:,2] < rgb_threshold[2])\n\nfit_left = np.polyfit((left_bottom[0],apex[0]),(left_bottom[1],apex[1]),1)\nfit_right = np.polyfit((right_bottom[0],apex[0]),(right_bottom[1],apex[1]),1)\nfit_bottom = np.polyfit((left_bottom[0],right_bottom[0]),(left_bottom[1],right_bottom[1]),1)\n\nXX,YY = np.meshgrid(np.arange(0,xsize),np.arange(0,ysize))\nregion_threshold = (YY > (XX*fit_left[0] + fit_left[1])) &\\\n(YY > (XX*fit_right[0] + fit_right[1])) &\\\n(YY < (XX*fit_bottom[0] + fit_bottom[1]))",
"_____no_output_____"
],
[
"color_select[color_threshold] = [0,0,0]\nline_image[~color_threshold & region_threshold] = [255,0,0]",
"_____no_output_____"
],
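[
"# Optional sketch (added; not in the original notebook): blend the detected\n# line pixels back onto the original image for inspection.\n# cv2.addWeighted computes dst = img1*alpha + img2*beta + gamma.\ncombo = cv2.addWeighted(image, 0.8, line_image, 1.0, 0.0)\nplt.imshow(combo)\nplt.show()",
"_____no_output_____"
],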
[
"plt.imshow(color_select)\nplt.show()\nplt.imshow(line_image)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72ac0fd4d974136af0dc0fe9b6fe130c4dfd27e | 3,836 | ipynb | Jupyter Notebook | from_mwt_ds/DataScience/estimate.ipynb | cheng-tan/data-science | 95f40d82c3aa9d3eae10f010ec1d82e94ccd573f | [
"BSD-3-Clause"
] | null | null | null | from_mwt_ds/DataScience/estimate.ipynb | cheng-tan/data-science | 95f40d82c3aa9d3eae10f010ec1d82e94ccd573f | [
"BSD-3-Clause"
] | null | null | null | from_mwt_ds/DataScience/estimate.ipynb | cheng-tan/data-science | 95f40d82c3aa9d3eae10f010ec1d82e94ccd573f | [
"BSD-3-Clause"
] | null | null | null | 20.961749 | 111 | 0.514599 | [
[
[
"# Contextual Bandits data",
"_____no_output_____"
],
[
"## Load data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_csv(r'test_data/cb/01.csv', parse_dates=['t']).set_index('t')\ndf.head()",
"_____no_output_____"
]
],
[
[
"## Apply estimators",
"_____no_output_____"
]
],
[
[
"from cb.estimators import ips_snips\n\n\ndef init_ips_snips(r, p, p_log, n):\n result = ips_snips()\n result.add(r, p_log, p, n * int(p > 0))\n return result\n\npolicies = ['random', 'baseline1']\nfor p in policies:\n df[p] = df.apply(lambda r: init_ips_snips(r['r'], r[f\"('b', '{p}')\"], r['p'], r['n']), axis = 1)\n\ndf = df[policies].resample('5min').sum()\ndf",
"_____no_output_____"
]
],
[
[
"## Visualize",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\ndf.apply(lambda r: r['random'].get('snips'), axis=1).plot(label='random')\ndf.apply(lambda r: r['baseline1'].get('snips'), axis=1).plot(label='baseline1')\n\nplt.legend(loc='best')",
"_____no_output_____"
]
],
[
[
"## Reaggregate (if needed)",
"_____no_output_____"
]
],
[
[
"df = df.resample('10min').sum()\ndf",
"_____no_output_____"
]
],
[
[
"## Visualize",
"_____no_output_____"
]
],
[
[
"df.apply(lambda r: r['random'].get('snips'), axis=1).plot(label='random')\ndf.apply(lambda r: r['baseline1'].get('snips'), axis=1).plot(label='baseline1')\n\nplt.legend(loc='best')",
"_____no_output_____"
]
],
[
[
"# Conditional Contextual Bandits",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_pickle(r'test_data\\ccb\\01.pickle')\ndf.head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72ac42662851f3c170ac6b10d7e05bad154ea3e | 651,776 | ipynb | Jupyter Notebook | Capstone Project 1/Data Story - Pet Adoption.ipynb | emenriquez/Springboard-Coursework | 7ac89a5b8bf7855bcd5cefaa02367134cb81ce8a | [
"Apache-2.0"
] | null | null | null | Capstone Project 1/Data Story - Pet Adoption.ipynb | emenriquez/Springboard-Coursework | 7ac89a5b8bf7855bcd5cefaa02367134cb81ce8a | [
"Apache-2.0"
] | null | null | null | Capstone Project 1/Data Story - Pet Adoption.ipynb | emenriquez/Springboard-Coursework | 7ac89a5b8bf7855bcd5cefaa02367134cb81ce8a | [
"Apache-2.0"
] | 1 | 2019-04-22T14:57:02.000Z | 2019-04-22T14:57:02.000Z | 395.975699 | 68,728 | 0.911776 | [
[
[
"# Understanding Factors in Animal Shelter Pet Adoption - Data Story\n\nIn efforts to understand trends in pet adoption outcomes, the Austin Animal Center has provided data relating to the pets in their adoption center. Understanding this data and using it to model the factors that influence pet adoption could lead to recommendations that improve the performance of the center and help more pets find homes.\n\n### Objective\n\nIn this project I will be exploring the data and using visualizations to answer some basic questions, including:\n\n 1. How likely are adoptions for cats vs. dogs?\n 2. Do factors such as color, breed and age affect the outcome for animal adoptions?\n 3. Are there trends by years for adoptions of cats and dogs?\n \nFirst I will begin by importing the necessary packages for analysis, as well as the dataset that was cleaned and formatted **[here](https://github.com/emenriquez/Springboard-Coursework/blob/master/Capstone%20Project%201/Data%20Wrangling%20-%20Pet%20Adoption%20V2.ipynb)**",
"_____no_output_____"
]
],
[
[
"# For working with dataframes and manipulation\nimport numpy as np\nimport pandas as pd\n\n# Used to create and customize graphics/plots\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\n# Used to work with datetime and timedelta objects\nfrom datetime import datetime, timedelta",
"_____no_output_____"
],
[
"# Load the formatted dataset\ndata = pd.read_pickle('data/data_clean.pkl')",
"_____no_output_____"
]
],
[
[
"### 1. How Likely are adoptions for Cats vs. Dogs?\n\nIt is very important to understand the general distributions of outcomes for cats and dogs, as well as the total number of each that the center recieves, in order to efficiently provide resources to shelter these animals. This section will break down the outcomes for both cats and dogs in order to gain more insight into the placement of these animals in permanent homes.",
"_____no_output_____"
]
],
[
[
"# Separate dataset entries into those for cats and dogs\ncats = data[data['Animal Type'] == 'Cat']\ndogs = data[data['Animal Type'] == 'Dog']",
"_____no_output_____"
],
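[
"# Hedged aside (added; not part of the original analysis): a chi-square test\n# of independence to quantify whether outcome depends on species.\n# Assumes scipy is available in the environment.\nfrom scipy import stats\n\ncontingency = pd.crosstab(data['Animal Type'], data['Found Home'])\nchi2, pval, dof, _ = stats.chi2_contingency(contingency.loc[['Cat', 'Dog']])\nprint('chi2 = {:.1f}, p-value = {:.3g}'.format(chi2, pval))",
"_____no_output_____"
],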
[
"# Set figure and font size\nplt.subplots(figsize=(12, 7))\nplt.rc('font', size=14)\n\n# Create pie chart for cat outcomes\nplt.subplot(1, 2, 1)\ncat_homes = [(cats['Found Home'] == 1).sum(), (cats['Found Home'] == 0).sum()]\nlabels1 = ['Home Found', 'No Home Found']\nplt.pie(cat_homes,\n labels=labels1,\n autopct='%1.1f%%'\n )\n\n# plot formatting\nplt.axis('equal')\nplt.title('Distribution of Outcomes\\n for Cats', size=20)\n\n\n# Create pie chart for dog outcomes\nplt.subplot(1, 2, 2)\ndog_homes = [(dogs['Found Home'] == 1).sum(), (dogs['Found Home'] == 0).sum()]\nlabels2 = ['Home Found', 'No Home Found']\nplt.pie(dog_homes,\n labels=labels2,\n autopct='%1.1f%%'\n )\n\n# plot formatting\nplt.axis('equal')\nplt.title('Distribution of Outcomes\\n for Dogs', size=20)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can see here that dogs are much more likely to have an outcome resulting in a permanent home than cats.\n\nWe can also break these outcomes down further by their specific outcome type:",
"_____no_output_____"
]
],
[
[
"# Set figure and font size\nplt.subplots(figsize=(12, 10))\nplt.rc('font', size=14)\n\n# Create pie chart for cat outcomes\nplt.subplot(1, 2, 1)\ncats['Outcome Type'].value_counts().plot(kind='pie',\n autopct='%1.1f%%',\n labels=None,\n legend=True,\n colors=['lightskyblue', 'orange', 'yellowgreen', 'lightcoral', 'mediumorchid']\n )\n\n# plot formatting\nplt.axis('equal')\nplt.title('Distribution of Outcomes\\n for Cats', size=20)\n\n\n# Create pie chart for dog outcomes\nplt.subplot(1, 2, 2)\ndogs['Outcome Type'].value_counts().plot(kind='pie',\n autopct='%1.1f%%',\n labels=None,\n legend=True,\n colors=['orange', 'lightcoral', 'lightskyblue', 'yellowgreen', 'mediumorchid']\n )\n\n# plot formatting\nplt.axis('equal')\nplt.title('Distribution of Outcomes\\n for Dogs', size=20)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"While numbers for adoption are similar for both cats and dogs, many more dogs are classified as 'Return to Owner' than cats, which denotes dogs that were lost and returned to their owners. A large majority of cats are transferred to other facilities, which may indicate that other facilities are either better equipped to handle the volume of cats, or simply that the real estate at the Austin Animal Center does not allow support for enough cats.\n\n### 2. Analysis of Adoption Outcomes vs. Animal Attributes\n\nIn understanding the animal attributes that most affect the outcomes of animals at this center, we can identify which animals have a higher chance of being adopted for this area. This, in conjunction with data on neighboring or partner centers might allow for avenues to 'match' animals that can maximize their chance of adoption in each area.\n\n\n#### i. Gender",
"_____no_output_____"
]
],
[
[
"# Separate Cats by Sex upon Outcome\nmale_n = cats[cats['Sex upon Outcome'] == 'Neutered Male']\nmale_i = cats[cats['Sex upon Outcome'] == 'Intact Male']\nfemale_s = cats[cats['Sex upon Outcome'] == 'Spayed Female']\nfemale_i = cats[cats['Sex upon Outcome'] == 'Intact Female']\nunknown = cats[cats['Sex upon Outcome'] == 'Unknown']\n\n# find percentages of cats that found homes\ncat_homes_pct = [(male_n['Found Home'] == 1).sum()/male_n.shape[0],\n (male_i['Found Home'] == 1).sum()/male_i.shape[0],\n (female_s['Found Home'] == 1).sum()/female_s.shape[0],\n (female_i['Found Home'] == 1).sum()/female_i.shape[0],\n (unknown['Found Home'] == 1).sum()/unknown.shape[0]]\n\ncat_no_homes_pct = [(male_n['Found Home'] == 0).sum()/male_n.shape[0],\n (male_i['Found Home'] == 0).sum()/male_i.shape[0],\n (female_s['Found Home'] == 0).sum()/female_s.shape[0],\n (female_i['Found Home'] == 0).sum()/female_i.shape[0],\n (unknown['Found Home'] == 0).sum()/unknown.shape[0]]\n\n# Create stacked bar chart to compare outcome vs. sex upon outcome\nfig, ax = plt.subplots(figsize=(12,8))\nplt.rc('font', size=14)\n\nind = np.arange(5)\n\np1 = ax.bar(ind,\n cat_homes_pct,\n color='lightskyblue')\n\np2 = ax.bar(ind,\n cat_no_homes_pct,\n bottom=cat_homes_pct,\n color='mediumorchid')\n\nax.set_title('Distribution of Cat Outcomes by Gender')\nax.set_xticks(ind)\nax.set_xticklabels(('Neutered Male', 'Intact Male', 'Spayed Female', 'Intact Female', 'Unknown'))\n\nax.legend((p1[0], p2[0]), ('Home Found', 'No Home Found'), loc='best')\nax.autoscale_view()\n\nplt.show()",
"_____no_output_____"
],
[
"# Separate dogs by Sex upon Outcome\nmale_n = dogs[dogs['Sex upon Outcome'] == 'Neutered Male']\nmale_i = dogs[dogs['Sex upon Outcome'] == 'Intact Male']\nfemale_s = dogs[dogs['Sex upon Outcome'] == 'Spayed Female']\nfemale_i = dogs[dogs['Sex upon Outcome'] == 'Intact Female']\nunknown = dogs[dogs['Sex upon Outcome'] == 'Unknown']\n\n# find percentages of dogs that found homes\ndog_homes_pct = [(male_n['Found Home'] == 1).sum()/male_n.shape[0],\n (male_i['Found Home'] == 1).sum()/male_i.shape[0],\n (female_s['Found Home'] == 1).sum()/female_s.shape[0],\n (female_i['Found Home'] == 1).sum()/female_i.shape[0],\n (unknown['Found Home'] == 1).sum()/unknown.shape[0]]\n\ndog_no_homes_pct = [(male_n['Found Home'] == 0).sum()/male_n.shape[0],\n (male_i['Found Home'] == 0).sum()/male_i.shape[0],\n (female_s['Found Home'] == 0).sum()/female_s.shape[0],\n (female_i['Found Home'] == 0).sum()/female_i.shape[0],\n (unknown['Found Home'] == 0).sum()/unknown.shape[0]]\n\n# Create stacked bar chart to compare outcome vs. sex upon outcome\nfig, ax = plt.subplots(figsize=(12,8))\nplt.rc('font', size=14)\n\nind = np.arange(5)\n\np1 = ax.bar(ind,\n dog_homes_pct,\n color='dodgerblue')\n\np2 = ax.bar(ind,\n dog_no_homes_pct,\n bottom=dog_homes_pct,\n color='firebrick')\n\nax.set_title('Distribution of Dog Outcomes by Sex')\nax.set_xticks(ind)\nax.set_xticklabels(('Neutered Male', 'Intact Male', 'Spayed Female', 'Intact Female', 'Unknown'))\n\nax.legend((p1[0], p2[0]), ('Home Found', 'No Home Found'), loc='best')\nax.autoscale_view()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"The distribution of outcomes for male and females in the cases of both cats and dogs shows that there is not a strong preference for either gender. Naturally, since animals are spayed and neutered when possible at animal shelters, most adoptions occur for these types rather than intact gender animals.\n\n#### ii. Age",
"_____no_output_____"
]
],
[
[
"# Convert ages to years\ncat_ages_in_years = cats['Age upon Outcome'].apply(lambda x: x//timedelta(days=365.25))\ndog_ages_in_years = dogs['Age upon Outcome'].apply(lambda x: x//timedelta(days=365.25))\n\n# Plot distribution of cat ages\nplt.subplots(figsize=(14,8))\nplt.subplot(1, 2, 1)\nplt.hist(cat_ages_in_years, bins=23, color='yellow', edgecolor='black', linewidth=1.2)\nplt.title('Distribution of Cat Ages')\nplt.ylabel('Frequency (log scale)')\nplt.xlabel('Age (in Years)')\nplt.yscale('log')\nplt.xticks([0, 4, 8, 12, 16, 20])\n\n# Plot distribution of dog ages\nplt.subplot(1, 2, 2)\nplt.hist(dog_ages_in_years, bins=21, color='green', edgecolor='black', linewidth=1.2)\nplt.title('Distribution of Dog Ages')\nplt.yscale('log')\nplt.ylabel('Frequency (log scale)')\nplt.xlabel('Age (in Years)')\nplt.xticks([0, 4, 8, 12, 16, 20])\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"There seems to be a wide spread of ages for both cats and dogs, up to 22 years for cats and 20 years for dogs. Most of the animals are less than 4 years old in both cases. It would also be helpful to see the breakdown of outcomes for each of these age groups.",
"_____no_output_____"
]
],
[
[
"cat_ages_in_years[cats['Found Home'] == 0]\n\n# Plot distribution of cat ages for cats who found homes\nplt.subplots(figsize=(14,8))\nplt.subplot(1, 2, 1)\ncats_fh_freq, cats_bins, _ = plt.hist(cat_ages_in_years[cats['Found Home'] == 1],\n bins=23, \n color='yellow', \n edgecolor='black', \n linewidth=1.2, \n alpha = 0.3\n )\ncats_nofh_freq, _, _ = plt.hist(cat_ages_in_years[cats['Found Home'] == 0],\n bins=23,\n color='purple',\n edgecolor='black',\n linewidth=1.2,\n alpha = 0.3\n )\nplt.legend(['Home Found', 'No Home Found'])\nplt.title('Distribution of Cat Outcomes vs. Age')\nplt.ylabel('Frequency')\nplt.xlabel('Age (in Years)')\nplt.yscale('log')\nplt.xticks([0, 4, 8, 12, 16, 20])\n\n# Plot distribution of dog ages\nplt.subplot(1, 2, 2)\ndogs_fh_freq, dogs_bins, _ = plt.hist(dog_ages_in_years[dogs['Found Home'] == 1],\n bins=21, \n color='green', \n edgecolor='black',\n linewidth=1.2,\n alpha = 0.3\n )\ndogs_nofh_freq, _, _ = plt.hist(dog_ages_in_years[dogs['Found Home'] == 0],\n bins=20, \n color='red', \n edgecolor='black', \n linewidth=1.2, \n alpha = 0.3\n )\nplt.legend(['Home Found', 'No Home Found'])\nplt.title('Distribution of Dog Outcomes vs. Age')\nplt.yscale('log')\nplt.ylabel('Frequency')\nplt.xlabel('Age (in Years)')\nplt.xticks([0, 4, 8, 12, 16, 20])\n\nplt.show()",
"_____no_output_____"
],
[
"# Add a zero value to the last bin of dogs_nofh_freq so that it matches the number of bins of dogs who found homes\ndogs_nofh_freq2 = np.append(dogs_nofh_freq, 0)\n\n# Initialize figure\nplt.subplots(figsize=(14,10))\n\n# Display difference between cats that either found homes or did not find homes\nplt.subplot(1, 2, 1)\n\n# Generate differenct colors for positive and negative values\ncat_colors = np.array([(0.7,0.3,0.8)]*len(cats_fh_freq))\ncat_colors[cats_fh_freq-cats_nofh_freq >= 0] = (1,1,0.6)\n\n# Create bar graph\nbarlist=plt.bar(cats_bins[1:], \n cats_fh_freq-cats_nofh_freq, \n color=cat_colors, \n edgecolor='k'\n )\nplt.ylabel('Frequency')\nplt.xlabel('Age (in Years)')\nplt.xticks([0, 4, 8, 12, 16, 20])\nplt.title('Dominant Outcomes for Cats vs. Age')\n\n# Create a legend for cat outcomes plot\npos_patch = mpatches.Patch(facecolor=(1,1,0.6), edgecolor='k', label='Home Found')\nneg_patch = mpatches.Patch(facecolor=(0.7,0.3,0.8), edgecolor='k', label='No Home Found')\nplt.legend(handles=[pos_patch, neg_patch])\n\n# Display difference between cats that either found homes or did not find homes\nplt.subplot(1, 2, 2)\nplt.bar(dogs_bins[1:], \n dogs_fh_freq-dogs_nofh_freq2, \n color='lightgreen', \n edgecolor='k'\n )\nplt.ylabel('Frequency')\nplt.xlabel('Age (in Years)')\nplt.xticks([0, 4, 8, 12, 16, 20])\nplt.title('Dominant Outcomes for Dogs vs. Age')\nplt.legend(['Home Found'])\n\n\nplt.show()",
"_____no_output_____"
],
[
"# Add a zero value to the last bin of dogs_nofh_freq so that it matches the number of bins of dogs who found homes\ndogs_nofh_freq2 = np.append(dogs_nofh_freq, 0)\n\n# Initialize figure\nplt.subplots(figsize=(14,10))\n\n# Display difference between cats that either found homes or did not find homes\nplt.subplot(1, 2, 1)\n\n# Generate differenct colors for positive and negative values\ncat_colors = np.array([(0.7,0.3,0.8)]*len(cats_fh_freq))\ncat_colors[cats_fh_freq-cats_nofh_freq >= 0] = (1,1,0.6)\n\n# Create bar graph\nbarlist=plt.bar(cats_bins[1:], \n 100*(cats_fh_freq-cats_nofh_freq)/(cats_fh_freq+cats_nofh_freq), \n color=cat_colors, \n edgecolor='k'\n )\nplt.ylabel('% of Total Outcomes per Age group')\nplt.xlabel('Age (in Years)')\nplt.xticks([0, 4, 8, 12, 16, 20])\nplt.title('Dominant Outcomes for Cats vs. Age')\n\n# Create a legend for cat outcomes plot\npos_patch = mpatches.Patch(facecolor=(1,1,0.6), edgecolor='k', label='Home Found')\nneg_patch = mpatches.Patch(facecolor=(0.7,0.3,0.8), edgecolor='k', label='No Home Found')\nplt.legend(handles=[pos_patch, neg_patch])\n\n# Display difference between cats that either found homes or did not find homes\nplt.subplot(1, 2, 2)\nplt.bar(dogs_bins[1:], \n 100*(dogs_fh_freq-dogs_nofh_freq2)/(dogs_fh_freq+dogs_nofh_freq2), \n color='lightgreen', \n edgecolor='k'\n )\nplt.ylabel('% of Total Outcomes per Age Group')\nplt.xlabel('Age (in Years)')\nplt.xticks([0, 4, 8, 12, 16, 20])\nplt.title('Dominant Outcomes for Dogs vs. Age')\nplt.legend(['Home Found'])\n\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"As shown above, we can see that while all age groups have a higher frequency of dogs that are placed/returned to their homes, cats have a more complicated distribution. Both young (< 5 years old) and old (> 12 years old) seem to have mixed chances of being placed in a permanent home.\n\nOne interesting note is that in both cases, the oldest animals seem to have higher chances of adoption.\n\n#### iii. Breed",
"_____no_output_____"
]
],
[
[
"# Generate Top 10 ranking breeds for cats by frequency\ntop10_cat_breeds = pd.DataFrame(data= {'Cat Breed': cats['Breed'].value_counts().index.values[:10],\n '# of Cats': cats['Breed'].value_counts().values[:10]},\n columns = ['Cat Breed', '# of Cats'],\n index=range(1,11)\n )\n\n# Label the index as ranking\ntop10_cat_breeds.index.name = 'Rank'\n\n# Display top10 rankings\nprint('10 Most Common Cat Breeds\\n',top10_cat_breeds)\nprint('\\nTotal Number of Distinct Cat Breeds: {0}'.format(len(cats['Breed'].unique())))\nprint('Fraction of Total Cats Occupied by 10 most common breeds: {:0.2f} %'.format(100*cats['Breed'].value_counts()[:10].sum()/cats['Breed'].value_counts().sum()))\n\n# Generate Top 10 ranking breeds for dogs by frequency\ntop10_dog_breeds = pd.DataFrame(data= {'Dog Breed': dogs['Breed'].value_counts().index.values[:10],\n '# of Dogs': dogs['Breed'].value_counts().values[:10]},\n columns = ['Dog Breed', '# of Dogs'],\n index=range(1,11)\n )\n\n# Label the index as ranking\ntop10_dog_breeds.index.name = 'Rank'\n\n# Display top10 rankings\nprint('\\n\\n10 Most Common Dog Breeds\\n',top10_dog_breeds)\nprint('\\nTotal Number of Distinct Dog Breeds: {0}'.format(len(dogs['Breed'].unique())))\nprint('Fraction of Total Dogs Occupied by 10 most common breeds: {:0.2f} %'.format(100*dogs['Breed'].value_counts()[:10].sum()/dogs['Breed'].value_counts().sum()))\n",
"10 Most Common Cat Breeds\n Cat Breed # of Cats\nRank \n1 Domestic Shorthair Mix 22773\n2 Domestic Medium Hair Mix 2257\n3 Domestic Longhair Mix 1204\n4 Siamese Mix 997\n5 Domestic Shorthair 378\n6 American Shorthair Mix 194\n7 Snowshoe Mix 150\n8 Domestic Medium Hair 127\n9 Maine Coon Mix 103\n10 Manx Mix 85\n\nTotal Number of Distinct Cat Breeds: 55\nFraction of Total Cats Occupied by 10 most common breeds: 98.53 %\n\n\n10 Most Common Dog Breeds\n Dog Breed # of Dogs\nRank \n1 Pit Bull Mix 6283\n2 Labrador Retriever Mix 5628\n3 Chihuahua Shorthair Mix 5264\n4 German Shepherd Mix 2239\n5 Australian Cattle Dog Mix 1281\n6 Dachshund Mix 1094\n7 Border Collie Mix 827\n8 Boxer Mix 820\n9 Miniature Poodle Mix 743\n10 Catahoula Mix 581\n\nTotal Number of Distinct Dog Breeds: 345\nFraction of Total Dogs Occupied by 10 most common breeds: 57.78 %\n"
]
],
[
[
"In both categories, we can see that mixed breeds are the most common. This is not surprising, though the distribution above shows that the breeds of dogs are much more varied than cats. The 10 most common breeds of dogs only account for about 58% of the total population of dogs that have gone through the center, but for cats the 10 most common breeds account for over 98% of the entries.",
"_____no_output_____"
]
],
[
[
"plt.subplots(figsize=(14, 8))\n\n# Create plot to show distribution of most common cat breeds by percent of total cats\nplt.subplot(1, 2, 1)\n(100*cats['Breed'].value_counts()[:10]/cats['Breed'].value_counts().sum()).plot(kind='bar', color='gold', edgecolor='k')\nplt.ylabel('% of Total Cats')\nplt.xlabel('Breed')\nplt.title('Distribution of 10 Most Common Cat Breeds')\n\n# Create plot to show distribution of most common dog breeds by percent of total dogs\nplt.subplot(1, 2, 2)\n(100*dogs['Breed'].value_counts()[:10]/dogs['Breed'].value_counts().sum()).plot(kind='bar', edgecolor='k')\nplt.ylabel('% of Total Dogs')\nplt.xlabel('Breed')\nplt.title('Distribution of 10 Most Common Dog Breeds')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"In the plot shown above, we can see that Domestic Shorthair mixed breeds in cats account for almost 80% of the cat entries alone. All entries in the most common cat breeds are mixed, since 'Domestic Shorthair' and 'Domestic Medium Hair' breeds are themselves mixed breed classifications.\n\nNext we can investigate the outcomes by breed:",
"_____no_output_____"
]
],
[
[
"# Find percentage of cats that found homes by breed\ncat_breeds_fh = {}\nfor breed in cats['Breed'].unique():\n fh_temp = cats.loc[(cats['Found Home'] == 1) & (cats['Breed'] == breed)].shape[0]\n total_temp = cats.loc[cats['Breed'] == breed].shape[0]\n cat_breeds_fh[breed] = fh_temp/total_temp\n\n# Create ranking lists\ntop10_cat_breeds_fh = pd.Series(cat_breeds_fh).sort_values(ascending=False)[:10]\nbottom10_cat_breeds_fh = pd.Series(cat_breeds_fh).sort_values()[:10]\n\n# Format dataframe for display\ntop10_cat_breeds_fh2 = pd.DataFrame(data={'Breed': top10_cat_breeds_fh.index,\n '% of Breed that Found Home': 100*top10_cat_breeds_fh.values\n },\n columns=['Breed', '% of Breed that Found Home'],\n index=range(1,11)\n )\n\nbottom10_cat_breeds_fh2 = pd.DataFrame(data={'Breed': bottom10_cat_breeds_fh.index,\n '% of Breed that Found Home': 100*bottom10_cat_breeds_fh.values\n },\n columns=['Breed', '% of Breed that Found Home'],\n index=range(1,11)\n )\n\n# Rename index to ranking\ntop10_cat_breeds_fh2.index.name = 'Rank'\nbottom10_cat_breeds_fh2.index.name = 'Rank'\n\n\n# Display Rankings\nprint('Cat Breeds with highest percentage of Homes Found\\n', top10_cat_breeds_fh2)\nprint('\\nCat Breeds with lowest percentage of Homes Found\\n', bottom10_cat_breeds_fh2)",
"Cat Breeds with highest percentage of Homes Found\n Breed % of Breed that Found Home\nRank \n1 Turkish Van Mix 100.0\n2 British Shorthair 100.0\n3 Devon Rex Mix 100.0\n4 Cornish Rex Mix 100.0\n5 Munchkin Shorthair Mix 100.0\n6 Ocicat Mix 100.0\n7 Oriental Sh Mix 100.0\n8 Havana Brown Mix 100.0\n9 Burmese 100.0\n10 Pixiebob Shorthair Mix 100.0\n\nCat Breeds with lowest percentage of Homes Found\n Breed % of Breed that Found Home\nRank \n1 Exotic Shorthair Mix 0.000000\n2 Manx 0.000000\n3 Devon Rex 0.000000\n4 Birman Mix 0.000000\n5 Munchkin Longhair Mix 0.000000\n6 Turkish Angora Mix 25.000000\n7 Himalayan 33.333333\n8 American Shorthair Mix 40.206186\n9 Domestic Shorthair Mix 46.476090\n10 Domestic Medium Hair Mix 48.958795\n"
]
],
[
[
"All of the cat breeds with the highest percentages of placement in permanent homes represent breeds that are somewhat exotic when compared to the population that can be found in Austin, TX. We can also see that the Domestic Shorthair Mix that dominates the cat population in this dataset has a fairly low rate of adoption with ~46%.",
"_____no_output_____"
]
],
[
[
"# Find percentage of dogs that found homes by breed\ndog_breeds_fh = {}\nfor breed in dogs['Breed'].unique():\n fh_temp = dogs.loc[(dogs['Found Home'] == 1) & (dogs['Breed'] == breed)].shape[0]\n total_temp = dogs.loc[dogs['Breed'] == breed].shape[0]\n dog_breeds_fh[breed] = fh_temp/total_temp\n\n# Create ranking lists\ntop10_dog_breeds_fh = pd.Series(dog_breeds_fh).sort_values(ascending=False)[:10]\nbottom10_dog_breeds_fh = pd.Series(dog_breeds_fh).sort_values()[:10]\n\n# Format dataframe for display\ntop10_dog_breeds_fh2 = pd.DataFrame(data={'Breed': top10_dog_breeds_fh.index,\n '% of Breed that Found Home': 100*top10_dog_breeds_fh.values\n },\n columns=['Breed', '% of Breed that Found Home'],\n index=range(1,11)\n )\n\nbottom10_dog_breeds_fh2 = pd.DataFrame(data={'Breed': bottom10_dog_breeds_fh.index,\n '% of Breed that Found Home': 100*bottom10_dog_breeds_fh.values\n },\n columns=['Breed', '% of Breed that Found Home'],\n index=range(1,11)\n )\n\n# Rename index to ranking\ntop10_dog_breeds_fh2.index.name = 'Rank'\nbottom10_dog_breeds_fh2.index.name = 'Rank'\n\n\n# Display Rankings\nprint('Dog Breeds with highest percentage of Homes Found\\n', top10_dog_breeds_fh2)\nprint('\\nDog Breeds with lowest percentage of Homes Found\\n', bottom10_dog_breeds_fh2)",
"Dog Breeds with highest percentage of Homes Found\n Breed % of Breed that Found Home\nRank \n1 Affenpinscher Mix 100.0\n2 Norfolk Terrier 100.0\n3 Boerboel 100.0\n4 Mexican Hairless 100.0\n5 Manchester Terrier 100.0\n6 Bouv Flandres Mix 100.0\n7 Silky Terrier 100.0\n8 Lowchen Mix 100.0\n9 Smooth Fox Terrier 100.0\n10 Leonberger 100.0\n\nDog Breeds with lowest percentage of Homes Found\n Breed % of Breed that Found Home\nRank \n1 Spanish Mastiff Mix 0.0\n2 Jindo 0.0\n3 Landseer 0.0\n4 Japanese Chin 0.0\n5 Irish Setter Mix 0.0\n6 Dogue De Bordeaux 0.0\n7 Entlebucher Mix 0.0\n8 Sussex Span Mix 0.0\n9 Bruss Griffon 0.0\n10 Old English Sheepdog Mix 0.0\n"
]
],
[
[
"The distribution for dogs similarly shows that exotic breeds seem to occupy many of the top ranking spots for adoption rates, although the wide variety of dog breeds also shows that there are many breeds that don't fare well. This might indicate that breed alone is not a good enough indication of the chances of adoption.",
"_____no_output_____"
]
],
[
[
"# Cat Breed vs. Mixed\nmixed_cats = cats[cats['Breed'].str.contains('Mix')]['Found Home']\npure_cats = cats[~cats['Breed'].str.contains('Mix')]['Found Home']\n\npure_cats_fh = 100*(pure_cats == 1).sum()/pure_cats.shape[0]\nmixed_cats_fh = 100*(mixed_cats == 1).sum()/mixed_cats.shape[0]\n\n# Dog Breed vs. Mixed\nmixed_dogs = dogs[dogs['Breed'].str.contains('Mix')]['Found Home']\npure_dogs = dogs[~dogs['Breed'].str.contains('Mix')]['Found Home']\n\npure_dogs_fh = 100*(pure_dogs == 1).sum()/pure_dogs.shape[0]\nmixed_dogs_fh = 100*(mixed_dogs == 1).sum()/mixed_dogs.shape[0]\n\n# Generate plot\nfig, ax = plt.subplots(figsize=(8,8))\nax.bar(list(range(4)), \n np.array([pure_cats_fh, mixed_cats_fh, pure_dogs_fh, mixed_dogs_fh]), \n color=['purple','mediumorchid','green','lightgreen'], \n edgecolor='k'\n )\nax.set_title('Percentage of Homes Found vs. Pure and Mixed Breeds')\nax.set_xticks(list(range(4)))\nax.set_xticklabels(('Purebreed Cats', 'Mixed Cats', 'Purebreed Dogs', 'Mixed Dogs'))\nplt.ylabel('% of Animals that Found a Home')\n\n# Draw lines at average value of FH for cats and dogs\nplt.axhline(y=47.8, xmin=0, xmax=0.5, color='b', linestyle='--')\nplt.axhline(y=74.5, xmin=0.5, xmax=1, color='r', linestyle='--')\n\n# Display plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can see that there are two opposing trends for cats and dogs here. For cats, purebreeds have a noticeably higher rate of adoption, while dogs see a drop in adoption rates for those that are not mixed breeds. This may be related to the high occurences of Domestic Shorthair cats at the center. When people come in to browse for pet adoption, it is easier for purebreeds to stand out in appearance when most cats are similar. The distribution of breeds for dogs are much more varied, and so this may not have the same impact on adoptions for dogs.\n\n#### iv. Colors",
"_____no_output_____"
]
],
[
[
"# Most Common Colors\nplt.subplots(figsize=(14, 8))\n\n# Create plot to show distribution of most common cat colors by percent of total cats\nplt.subplot(1, 2, 1)\ncat_colors=['brown', 'black', 'orange', 'lightsteelblue', 'white', 'lightsteelblue', 'saddlebrown', 'sandybrown', 'saddlebrown', 'wheat']\n(100*cats['Primary Color'].value_counts()[:10]/cats['Primary Color'].value_counts().sum()).plot(kind='bar', color=[cat_colors], edgecolor='k')\nplt.ylabel('% of Total Cats')\nplt.xlabel('Primary Color')\nplt.title('Distribution of 10 Most Common Cat Colors')\n\n# Create plot to show distribution of most common dog colors by percent of total dogs\nplt.subplot(1, 2, 2)\ndog_colors=['black', 'white', 'brown', 'tan', 'saddlebrown', 'maroon', 'saddlebrown', 'lightsteelblue', 'chocolate', 'black']\n(100*dogs['Primary Color'].value_counts()[:10]/dogs['Primary Color'].value_counts().sum()).plot(kind='bar', color=[dog_colors], edgecolor='k')\nplt.ylabel('% of Total Dogs')\nplt.xlabel('Primary Color')\nplt.title('Distribution of 10 Most Common Dog Colors')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Above are the 10 most common colors for both cats and dogs. If we investigate the rates of placement in permanent homes by color, it may be possible to extract information on which color animals are preferred by people looking for pets at the Austin Animal Center.",
"_____no_output_____"
]
],
[
[
"## Find percentage of cats that found homes by primary color\ncat_colors_fh = {}\nfor color in cats['Primary Color'].unique():\n fh_temp = cats.loc[(cats['Found Home'] == 1) & (cats['Primary Color'] == color)].shape[0]\n total_temp = cats.loc[cats['Primary Color'] == color].shape[0]\n cat_colors_fh[color] = fh_temp/total_temp\n\n# Create ranking lists\ntop10_cat_colors_fh = pd.Series(cat_colors_fh).sort_values(ascending=False)[:10]\nbottom10_cat_colors_fh = pd.Series(cat_colors_fh).sort_values()[:10]\n\n# Format dataframe for display\ntop10_cat_colors_fh2 = pd.DataFrame(data={'Primary Color': top10_cat_colors_fh.index,\n '% of Color that Found Home': 100*top10_cat_colors_fh.values\n },\n columns=['Primary Color', '% of Color that Found Home'],\n index=range(1,11)\n )\n\nbottom10_cat_colors_fh2 = pd.DataFrame(data={'Primary Color': bottom10_cat_colors_fh.index,\n '% of Color that Found Home': 100*bottom10_cat_colors_fh.values\n },\n columns=['Primary Color', '% of Color that Found Home'],\n index=range(1,11)\n )\n\n# Rename index to ranking\ntop10_cat_colors_fh2.index.name = 'Rank'\nbottom10_cat_colors_fh2.index.name = 'Rank'\n\n\n# Display Rankings\nprint('Cat Colors with Highest percentage of Homes Found\\n', top10_cat_colors_fh2)\nprint('\\nCat Colors with Lowest percentage of Homes Found\\n', bottom10_cat_colors_fh2)",
"Cat Colors with Highest percentage of Homes Found\n Primary Color % of Color that Found Home\nRank \n1 Fawn 100.000000\n2 Apricot 100.000000\n3 Blue Smoke 91.666667\n4 Agouti 66.666667\n5 Tortie Point 63.636364\n6 Black Smoke 62.878788\n7 Chocolate Point 61.016949\n8 Flame Point 58.208955\n9 Chocolate 57.692308\n10 Lynx Point 55.230126\n\nCat Colors with Lowest percentage of Homes Found\n Primary Color % of Color that Found Home\nRank \n1 Tricolor 0.000000\n2 Pink 0.000000\n3 Black Brindle 0.000000\n4 Orange Tiger 0.000000\n5 Black Tiger 0.000000\n6 Brown Merle 0.000000\n7 Brown Brindle 0.000000\n8 Buff 9.090909\n9 Orange 22.807018\n10 Gray Tabby 25.603865\n"
],
[
"# Find rankings of likelihood to find home for 10 most common colors\ncat_color_fh_rankings = pd.Series(cat_colors_fh).sort_values(ascending=False)\n\nprint('Rankings of Colors Most Likely to Find Homes - for 10 Most Common Cat Colors')\nfor i in range(len(cat_color_fh_rankings)):\n if cat_color_fh_rankings.index[i] in cats['Primary Color'].value_counts().index[:10]:\n print('{0}: {1}'.format(cat_color_fh_rankings.index[i], i+1))",
"Rankings of Colors Most Likely to Find Homes - for 10 Most Common Cat Colors\nTorbie: 11\nCalico: 15\nCream Tabby: 16\nBlue Tabby: 17\nBlue: 22\nTortie: 23\nOrange Tabby: 25\nBrown Tabby: 27\nWhite: 28\nBlack: 29\n"
],
[
"# Find percentage of dogs that found homes by primary color\ndog_colors_fh = {}\nfor color in dogs['Primary Color'].unique():\n fh_temp = dogs.loc[(dogs['Found Home'] == 1) & (dogs['Primary Color'] == color)].shape[0]\n total_temp = dogs.loc[dogs['Primary Color'] == color].shape[0]\n dog_colors_fh[color] = fh_temp/total_temp\n\n# Create ranking lists\ntop10_dog_colors_fh = pd.Series(dog_colors_fh).sort_values(ascending=False)[:10]\nbottom10_dog_colors_fh = pd.Series(dog_colors_fh).sort_values()[:10]\n\n# Format dataframe for display\ntop10_dog_colors_fh2 = pd.DataFrame(data={'Primary Color': top10_dog_colors_fh.index,\n '% of Color that Found Home': 100*top10_dog_colors_fh.values\n },\n columns=['Primary Color', '% of Color that Found Home'],\n index=range(1,11)\n )\n\nbottom10_dog_colors_fh2 = pd.DataFrame(data={'Primary Color': bottom10_dog_colors_fh.index,\n '% of Color that Found Home': 100*bottom10_dog_colors_fh.values\n },\n columns=['Primary Color', '% of Color that Found Home'],\n index=range(1,11)\n )\n\n# Rename index to ranking\ntop10_dog_colors_fh2.index.name = 'Rank'\nbottom10_dog_colors_fh2.index.name = 'Rank'\n\n\n# Display Rankings\nprint('Dog Colors with Highest percentage of Homes Found\\n', top10_dog_colors_fh2)\nprint('\\nDog Colors with Lowest percentage of Homes Found\\n', bottom10_dog_colors_fh2)",
"Dog Colors with Highest percentage of Homes Found\n Primary Color % of Color that Found Home\nRank \n1 Agouti 100.000000\n2 Ruddy 100.000000\n3 Black Tiger 100.000000\n4 Black Smoke 92.307692\n5 Blue Tiger 90.625000\n6 Yellow Brindle 81.578947\n7 Blue Merle 80.890052\n8 Silver 80.373832\n9 Brown Merle 80.368098\n10 Yellow 78.360656\n\nDog Colors with Lowest percentage of Homes Found\n Primary Color % of Color that Found Home\nRank \n1 Brown Tiger 60.000000\n2 Orange 66.666667\n3 Blue Smoke 66.666667\n4 Gold 71.022727\n5 Liver Tick 71.428571\n6 Blue Cream 71.428571\n7 Cream 71.698113\n8 Apricot 71.875000\n9 Gray 71.922246\n10 Fawn 72.759857\n"
],
[
"# Find rankings of likelihood to find home for 10 most common colors\ndog_color_fh_rankings = pd.Series(dog_colors_fh).sort_values(ascending=False)\n\nprint('Rankings of Colors Most Likely to Find Homes - for 10 Most Common Dog Colors')\nfor i in range(len(dog_color_fh_rankings)):\n if dog_color_fh_rankings.index[i] in dogs['Primary Color'].value_counts().index[:10]:\n print('{0}: {1}'.format(dog_color_fh_rankings.index[i], i+1))",
"Rankings of Colors Most Likely to Find Homes - for 10 Most Common Dog Colors\nChocolate: 14\nTricolor: 15\nBlue: 17\nRed: 19\nBlack: 20\nBrown Brindle: 21\nTan: 22\nSable: 24\nBrown: 25\nWhite: 26\n"
],
[
"plt.subplots(figsize=(14, 8))\n\n# Create plot to show distribution of most common cat colors by percent of total cats\nplt.subplot(1, 2, 1)\ncat_colors=['brown', 'black', 'orange', 'lightsteelblue', 'white', 'lightsteelblue', 'saddlebrown', 'sandybrown', 'saddlebrown', 'wheat']\nax1 = (100*cats['Primary Color'].value_counts()[:10]/cats['Primary Color'].value_counts().sum()).plot(kind='bar', color=[cat_colors], edgecolor='k')\nplt.ylabel('% of Total Cats')\nplt.xlabel('Primary Color')\nplt.title('Distribution of 10 Most Common Cat Colors')\n\n# Label bars with ranking of cats most likely to find homes\nrects = ax1.patches\nlabels1 = ['27th', '29th', '25th', '22nd', '28th', '17th', '23rd', '15th', '11th', '16th']\n\nfor rect, label in zip(rects, labels1):\n height = rect.get_height()\n ax1.text(rect.get_x() + rect.get_width()/2, height*1.01, label, ha='center', va='bottom')\n \n# Create plot to show distribution of most common dog colors by percent of total dogs\nplt.subplot(1, 2, 2)\ndog_colors=['black', 'white', 'brown', 'tan', 'saddlebrown', 'maroon', 'saddlebrown', 'lightsteelblue', 'chocolate', 'black']\nax2 = (100*dogs['Primary Color'].value_counts()[:10]/dogs['Primary Color'].value_counts().sum()).plot(kind='bar', color=[dog_colors], edgecolor='k')\nplt.ylabel('% of Total Dogs')\nplt.xlabel('Primary Color')\nplt.title('Distribution of 10 Most Common Dog Colors')\n\n# Label bars with ranking of dogs most likely to find homes\nrects = ax2.patches\nlabels2 = ['20th', '26th', '25th', '22nd', '21st', '19th', '15th', '17th', '14th', '24th']\n\nfor rect, label in zip(rects, labels2):\n height = rect.get_height()\n ax2.text(rect.get_x() + rect.get_width()/2, height*1.01, label, ha='center', va='bottom')\n \nplt.show()",
"_____no_output_____"
]
],
[
[
"The respective ranks in highest adoption rates for cats and dogs are denoted above the bars for each of the 10 most common colors. We can see again that none of the most common colors for both cats and dogs appear in their respective top lists of adoption rates. This further supports that a sense of exotic appearance of a pet may be a primary driver in people's choice of a pet.",
"_____no_output_____"
]
],
[
[
"# % of Homes Found vs. cats with secondary colors\ncats_secondary = cats[cats['Secondary Color'].notnull()]\ncats_no_secondary = cats[~cats['Secondary Color'].notnull()]\n\ncats_secondary_fh = 100*(cats_secondary['Found Home'] == 1).sum()/cats_secondary.shape[0]\ncats_no_secondary_fh = 100*(cats_no_secondary['Found Home'] == 1).sum()/cats_no_secondary.shape[0]\n\n# % of Homes Found vs. dogs with secondary colors\ndogs_secondary = dogs[dogs['Secondary Color'].notnull()]\ndogs_no_secondary = dogs[~dogs['Secondary Color'].notnull()]\n\ndogs_secondary_fh = 100*(dogs_secondary['Found Home'] == 1).sum()/dogs_secondary.shape[0]\ndogs_no_secondary_fh = 100*(dogs_no_secondary['Found Home'] == 1).sum()/dogs_no_secondary.shape[0]\n\n\n# Generate plot\nfig, ax = plt.subplots(figsize=(12,8))\nax.bar(list(range(4)),\n np.array([cats_secondary_fh, cats_no_secondary_fh, dogs_secondary_fh, dogs_no_secondary_fh]),\n color=['darkgoldenrod','gold','royalblue','lightsteelblue'],\n width=0.5,\n edgecolor='k'\n )\nax.set_title('Percentage of Homes Found vs. Animals with Secondary Colors')\nax.set_xticks(list(range(4)))\nax.set_xticklabels(('Cats: w/ Secondary', 'Cats: No Secondary', 'Dogs: w/ Secondary', 'Dogs: No Secondary'))\nplt.ylabel('% of Animals that Found a Home')\n\n# Draw a line at the average rat of FH for cats and dogs\nplt.axhline(y=47.8, xmin=0, xmax=0.5, color='b', linestyle='--')\nplt.axhline(y=74.5, xmin=0.5, xmax=1, color='r', linestyle='--')\n\n# Display plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"The data above shows that for both cats and dogs, a secondary color slightly improves the rates of adoption. Animals with distinctive color combinations in their coats may stand out more visually to potential pet owners.",
"_____no_output_____"
],
[
"### 3. Analysis of Adoption Outcomes vs. Year\n\nFinally, I will take a brief look at the trends of cat and dog adoptions by year. ",
"_____no_output_____"
]
],
[
[
"print('The dataset covers a time period between {0} and {1}'.format(data['DateTime'].min(), data['DateTime'].max()))",
"The dataset covers a time period between 2013-10-01 09:31:00 and 2017-12-10 12:59:00\n"
],
[
"# Reset indices for cats and dogs\ncats.reset_index(drop=True, inplace=True)\ndogs.reset_index(drop=True, inplace=True)\n\n\n# Separate data into years and months\ncat_years = []\ncat_months = []\n\nfor cat in cats['DateTime']:\n cat_years.append(cat.year)\n cat_months.append(cat.month)\n \ndog_years = []\ndog_months = []\n\nfor dog in dogs['DateTime']:\n dog_years.append(dog.year)\n dog_months.append(dog.month)\n \n# Convert collected months and years into Series format\ncat_years = pd.Series(cat_years)\ncat_months = pd.Series(cat_months)\ndog_years = pd.Series(dog_years)\ndog_months = pd.Series(dog_months)",
"_____no_output_____"
],
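[
"# Hedged aside (added; not in the original): the same extraction can be done\n# with the vectorized pandas .dt accessor, assuming the 'DateTime' column has\n# a datetime64 dtype.\ncat_years_alt = cats['DateTime'].dt.year\ncat_months_alt = cats['DateTime'].dt.month\nprint((cat_years_alt == cat_years).all(), (cat_months_alt == cat_months).all())",
"_____no_output_____"
],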
[
"# cat outcomes vs. year\ncat_year_fh = []\n\nfor year in range(2013,2018):\n cat_year = 100*(cats[cat_years == year]['Found Home'] == 1).sum()/(cats[cat_years == year].shape[0])\n cat_year_fh.append(cat_year)\n\n# dog outcomes vs. year\ndog_year_fh = []\n\nfor year in range(2013,2018):\n dog_year = 100*(dogs[dog_years == year]['Found Home'] == 1).sum()/(dogs[dog_years == year].shape[0])\n dog_year_fh.append(dog_year)",
"_____no_output_____"
],
[
"# Construct plots to show percentage trends of cats and dogs that found homes by year\nplt.plot(range(2013,2018), list(cat_year_fh), 'k-')\nplt.plot(range(2013,2018), list(cat_year_fh), 'ro')\n\nplt.plot(range(2013,2018), list(dog_year_fh), 'k-')\nplt.plot(range(2013,2018), list(dog_year_fh), 'bo')\n\n# Plot formatting\nplt.xlabel('Year')\nplt.ylabel('% of Entries with Home Found')\n\n# Create a legend for plot\ncat_patch = mpatches.Patch(facecolor=(1, 0, 0), edgecolor='k', label='Cats')\ndog_patch = mpatches.Patch(facecolor=(0, 0, 1), edgecolor='k', label='Dogs')\nplt.legend(handles=[dog_patch, cat_patch])\n\n\n# Display plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"The graph above shows the average rates of placement in permanent homes for cats and dogs broken down by year. Although dogs have experienced a relative upward trend with time, cats seem to show a drop in rates from 2013 to 2014, but then the same relative upward trend. This may be an anomaly of our dataset, since we only have 2013 data for the months of October-December. If there is a dependence on adoptions vs. months of the year, this can introduce a bias into the 2013 data points.",
"_____no_output_____"
]
],
[
[
"# cat outcomes vs. year\ncat_month_fh = []\n\nfor month in range(1,13):\n cat_month = 100*(cats[cat_months == month]['Found Home'] == 1).sum()/(cats[cat_months == month].shape[0])\n cat_month_fh.append(cat_month)\n\n# dog outcomes vs. year\ndog_month_fh = []\n\nfor month in range(1,13):\n dog_month = 100*(dogs[dog_months == month]['Found Home'] == 1).sum()/(dogs[dog_months == month].shape[0])\n dog_month_fh.append(dog_month)",
"_____no_output_____"
],
[
"# Construct plots to show percentage trends of cats and dogs that found homes by month\nplt.plot(range(1,13), list(cat_month_fh), 'k-')\nplt.plot(range(1,13), list(cat_month_fh), 'ro')\n\nplt.plot(range(1,13), list(dog_month_fh), 'k-')\nplt.plot(range(1,13), list(dog_month_fh), 'bo')\n\n# Plot formatting\nplt.xlabel('Month')\nplt.ylabel('% of Entries with Home Found')\n\n# Create a legend for plot\ncat_patch = mpatches.Patch(facecolor=(1, 0, 0), edgecolor='k', label='Cats')\ndog_patch = mpatches.Patch(facecolor=(0, 0, 1), edgecolor='k', label='Dogs')\nplt.legend(handles=[dog_patch, cat_patch])\n\n\n# Display plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"By looking at the average adoption rates broken down by month we can see that for both cats and dogs, there seems to be spikes in adoptions around winter months (Nov. - Feb.) and summer months (Jun. - Aug.). The main difference here is that cats seem to have a much stronger dependence on the month of the year than dogs, which remains relatively consistent.\n\nThis suggests that the average placement rate value for 2013 found in the previous graph may not be representative of the placement rates for the entire year due to the data only being collected in October-December.",
"_____no_output_____"
]
],
[
[
"# cat outcomes vs. year\ncat_monthyear_fh = []\n\nfor year in range(2014,2018):\n for month in range(1,13):\n cat_month = 100*(cats.loc[(cat_years == year) & (cat_months == month)]['Found Home'] == 1).sum()/(cats.loc[(cat_years == year) & (cat_months == month)].shape[0])\n cat_monthyear_fh.append(cat_month)\n\n# dog outcomes vs. year\ndog_monthyear_fh = []\n\nfor year in range(2014,2018):\n for month in range(1,13):\n dog_month = 100*(dogs.loc[(dog_years == year) & (dog_months == month)]['Found Home'] == 1).sum()/(dogs.loc[(dog_years == year) & (dog_months == month)].shape[0])\n dog_monthyear_fh.append(dog_month)",
"_____no_output_____"
],
[
"# Construct plots to show percentage trends of cats and dogs that found homes by month\nplt.plot(range(len(cat_monthyear_fh)), list(cat_monthyear_fh), 'k-')\nplt.plot(range(len(cat_monthyear_fh)), list(cat_monthyear_fh), 'ro')\n\nplt.plot(range(len(dog_monthyear_fh)), list(dog_monthyear_fh), 'k-')\nplt.plot(range(len(dog_monthyear_fh)), list(dog_monthyear_fh), 'bo')\n\n# Plot formatting\nplt.xlabel('Breakdown by Months (Beginning Jan. 2014)')\nplt.ylabel('% of Entries with Home Found')\n\n# Create a legend for plot\ncat_patch = mpatches.Patch(facecolor=(1, 0, 0), edgecolor='k', label='Cats')\ndog_patch = mpatches.Patch(facecolor=(0, 0, 1), edgecolor='k', label='Dogs')\nplt.legend(handles=[dog_patch, cat_patch])\n\n\n# Display plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can see that the variations in adoptions for cats is again much larger than that for dogs, which is relatively consistent. ",
"_____no_output_____"
],
[
"### Closing Remarks\n\nIn this project the Austin Animal Center dataset was investigated with a wide range of metrics to suggest which factors seem to influence the animals that are able to be placed in permanent homes vs. those which are not. It was shown that dogs have a much higher placement rate overall than cats, while attributes such as breed and color seem to have a strong influence on the placement rates for both cats and dogs.\n\n### Thanks for Reading!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e72ac5d14e32224bb09ada31c04d83b2eafe2dfd | 29,604 | ipynb | Jupyter Notebook | lessons/02_spacetime/Module_2_assignment.ipynb | rbds/Numerical_Methods_working_folder | d929ed7506054e7aa7ba059623c37ecf7d6ae993 | [
"CC-BY-3.0"
] | null | null | null | lessons/02_spacetime/Module_2_assignment.ipynb | rbds/Numerical_Methods_working_folder | d929ed7506054e7aa7ba059623c37ecf7d6ae993 | [
"CC-BY-3.0"
] | null | null | null | lessons/02_spacetime/Module_2_assignment.ipynb | rbds/Numerical_Methods_working_folder | d929ed7506054e7aa7ba059623c37ecf7d6ae993 | [
"CC-BY-3.0"
] | null | null | null | 77.09375 | 11,892 | 0.811377 | [
[
[
"import numpy\nimport sympy\nfrom matplotlib import pyplot\n%matplotlib inline\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nfrom sympy import init_printing\ninit_printing()",
"_____no_output_____"
],
[
"Vmax = 80\nL = 11\nrho_max = 250\nnx = 51\ndt = .001 #[hours]\ndx = L/nx",
"_____no_output_____"
],
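[
"# Added sanity check (sketch): for the explicit upwind-style update used below,\n# the CFL number based on the largest possible wave speed (bounded by Vmax for\n# this flux) should stay below 1. Units are consistent: (km/h)*(h)/(km).\nsigma = Vmax*dt/dx\nprint('CFL number Vmax*dt/dx = {:.3f}'.format(sigma))",
"_____no_output_____"
],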
[
"#Initial Conditions\nx = numpy.linspace(0, L, nx)\nrho0 = numpy.ones(nx)*10\nrho0[10:20] = 50\n\n#Boundary Condition\nrho0[0] = 10",
"_____no_output_____"
],
[
"F, V, t = sympy.symbols('F V t')\n#F = Vmax*rho*(1-rho/rho_max)\nrho = rho0",
"_____no_output_____"
],
[
"v0 = Vmax*(1-rho0/rho_max)\nvmin = min(v0)/3.6 #units in m/s \nprint(\"Min. velocity at t=0 is: {} m/s\".format(vmin))",
"_____no_output_____"
],
[
"#test code, can do one time step at a time.\n#rho_n = rho0.copy()\n#test = rho_n\n#test[0] = 10\n#test[1:] = -dt/dx*(Vmax*rho_n[1:]*(1-rho_n[1:]/rho_max) - Vmax*rho_n[0:-1]*(1-rho_n[0:-1]/rho_max)) + rho_n[1:]\n#print(test[0])\n\n#pyplot.figure(figsize=(8,5), dpi=100)\n#pyplot.plot(x, test, ls='--', lw=3)\n#pyplot.xlim([0,20])\n#pyplot.ylim([0,60]);",
"_____no_output_____"
],
[
"T = 3/60\nnt = int(T/dt)\n\nfor n in range(nt):\n rho_n = rho.copy()\n \n rho[0] = 10\n rho[1:] = -dt/dx*Vmax*(rho_n[1:]*(1-rho_n[1:]/rho_max) - \\\n rho_n[0:-1]*(1-rho_n[0:-1]/rho_max)) + rho_n[1:]",
"_____no_output_____"
],
[
"pyplot.figure(figsize=(8,5), dpi=100)\npyplot.plot(x, rho, ls='--', lw=3)\npyplot.xlim([0,20])\npyplot.ylim([0,60]);",
"_____no_output_____"
],
[
"#find average velocity at t= 3 minutes\nV = Vmax*(1- rho/rho_max)\nV_ave = numpy.mean(V)/3.6 #units in m/s\nprint(\"Average velocity at t=3min. is {}\".format(V_ave))\n",
"_____no_output_____"
],
[
"Vmax = 80\nL = 11\nrho_max = 250\nnx = 101\ndt = .001 #[hours]\ndx = L/nx\n\n#Initial Conditions\nx = numpy.linspace(0, L, nx)\nrho0 = numpy.ones(nx)*10\nrho0[10:20] = 50\n\n#Boundary Condition\nrho0[0] = 10\n\nrho = rho0 #reset rho to initial condition\n\n#find minimum velocity at t=6 minutes\nT = 0.1 # 6 minutes\nnt = int(T/dt)\n\nfor n in range(nt):\n rho_n = rho.copy()\n rho[0] = 10\n rho[1:] = -dt/dx*Vmax*(rho_n[1:]*(1-rho_n[1:]/rho_max) - \\\n rho_n[0:-1]*(1-rho_n[0:-1]/rho_max)) + rho_n[1:]\n \nV = Vmax*(1-rho0/rho_max)\nvmin = min(V)/3.6 #units in m/s \n#print(V)\nprint(\"Min. velocity at t=6 is: {} m/s\".format(vmin))\npyplot.figure(figsize=(8,5), dpi=100)\npyplot.plot(x, rho, ls='--', lw=3)\npyplot.xlim([0,15])\npyplot.ylim([0,60]);",
"_____no_output_____"
],
[
"Vmax = 136\nL = 11\nrho_max = 250\nnx = 51\ndt = .001 #[hours]\ndx = L/nx\n\n#Initial Conditions\nx = numpy.linspace(0, L, nx)\nrho0 = numpy.ones(nx)*10\nrho0[10:20] = 50\n\n#Boundary Condition\nrho0[0] = 20\nrho = rho0",
"_____no_output_____"
],
[
"v0 = Vmax*(1-rho0/rho_max)\nvmin = min(v0)/3.6 #units in m/s \nprint(\"Min. velocity at t=0 is: {} m/s\".format(vmin))",
"Min. velocity at t=0 is: 30.222222222222225 m/s\n"
],
[
"T = 3/60\nnt = int(T/dt)\n\nfor n in range(nt):\n rho_n = rho.copy()\n \n rho[0] = 20\n rho[1:] = -dt/dx*Vmax*(rho_n[1:]*(1-rho_n[1:]/rho_max) - \\\n rho_n[0:-1]*(1-rho_n[0:-1]/rho_max)) + rho_n[1:]\n\npyplot.figure(figsize=(8,5), dpi=100)\npyplot.plot(x, rho, ls='--', lw=3)\npyplot.xlim([0,20])\npyplot.ylim([0,60]);\n\n#find average velocity at t= 3 minutes\nV = Vmax*(1- rho/rho_max)\nV_ave = numpy.mean(V)/3.6 #units in m/s\nprint(\"Average velocity at t=3min. is {}\".format(V_ave))\nvmin = min(V)/3.6 #units in m/s \nprint(\"Minimum velocity at t=3min. is {}\".format(vmin))",
"Average velocity at t=3min. is 34.25970813456213\nMinimum velocity at t=3min. is 31.29595736342468\n"
],
[
"Vmax = 136\nL = 11\nrho_max = 250\nnx = 51\ndt = .001 #[hours]\ndx = L/nx\n\n#Initial Conditions\nx = numpy.linspace(0, L, nx)\nrho0 = numpy.ones(nx)*20\nrho0[10:20] = 50\n\n#Boundary Condition\nrho0[0] = 20\n\nrho = rho0 #reset rho to initial condition\n\n#find minimum velocity at t=6 minutes\nT = 0.1 # 6 minutes\nnt = int(T/dt)\n\nfor n in range(nt):\n rho_n = rho.copy()\n rho[0] = 20\n rho[1:] = -dt/dx*Vmax*(rho_n[1:]*(1-rho_n[1:]/rho_max) - \\\n rho_n[0:-1]*(1-rho_n[0:-1]/rho_max)) + rho_n[1:]\n \nV = Vmax*(1-rho0/rho_max)\nvmin = min(V)/3.6 #units in m/s \nprint(\"Min. velocity at t=6 is: {} m/s\".format(vmin))\npyplot.figure(figsize=(8,5), dpi=100)\npyplot.plot(x, rho, ls='--', lw=3)\npyplot.xlim([0,15])\npyplot.ylim([0,60]);",
"Min. velocity at t=6 is: 34.611849730763694 m/s\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72ac686d8d65c1cca47e1c29f56ae73fb8e8ac3 | 3,457 | ipynb | Jupyter Notebook | eval/eval.ipynb | akkarimi/BERT-For-ABSA | e093c0b0c41f047dc6e1a6480a0cd4435296692c | [
"Apache-2.0"
] | 13 | 2020-10-30T09:49:41.000Z | 2022-02-18T17:21:13.000Z | eval/eval.ipynb | akkarimi/Adversarial-Training-for-ABSA | e093c0b0c41f047dc6e1a6480a0cd4435296692c | [
"Apache-2.0"
] | null | null | null | eval/eval.ipynb | akkarimi/Adversarial-Training-for-ABSA | e093c0b0c41f047dc6e1a6480a0cd4435296692c | [
"Apache-2.0"
] | 4 | 2020-05-09T03:05:16.000Z | 2020-09-27T21:41:29.000Z | 37.172043 | 146 | 0.441423 | [
[
[
"import json\nimport os\nimport sklearn.metrics\nimport numpy as np\nfrom pprint import pprint",
"_____no_output_____"
],
[
"def evaluate(tasks, berts, domains, runs=10):\n for task in tasks:\n for bert in berts:\n for domain in domains: \n scores=[]\n for run in range(1, runs+1):\n DATA_DIR=os.path.join(task, domain)\n OUTPUT_DIR=os.path.join(\"run\", bert+\"_\"+task, domain, str(run) )\n if os.path.exists(os.path.join(OUTPUT_DIR, \"predictions.json\") ):\n if \"rrc\" in task:\n ret=!python eval/evaluate-v1.1.py $DATA_DIR/test.json $OUTPUT_DIR/predictions.json\n score=json.loads(ret[0])\n scores.append([score[\"exact_match\"], score[\"f1\"] ] )\n elif \"ae\" in task:\n ret=!python eval/evaluate_ae.py --pred_json $OUTPUT_DIR/predictions.json\n scores.append(float(ret[0])*100 )\n elif \"asc\" in task:\n with open(os.path.join(OUTPUT_DIR, \"predictions.json\") ) as f:\n results=json.load(f)\n y_true=results['label_ids']\n y_pred=[np.argmax(logit) for logit in results['logits'] ]\n p_macro, r_macro, f_macro, _=sklearn.metrics.precision_recall_fscore_support(y_true, y_pred, average='macro')\n f_macro = 2*p_macro*r_macro/(p_macro+r_macro)\n scores.append([100*sklearn.metrics.accuracy_score(y_true, y_pred), 100*f_macro ] )\n else:\n raise Exception(\"unknown task\")\n scores=np.array(scores)\n m=scores.mean(axis=0)\n \n if len(scores.shape)>1:\n for iz, score in enumerate(m):\n print(task, \":\", bert, domain, \"metric\", iz, round(score, 2))\n pprint(scores)\n else:\n print(task, \":\", bert, domain, round(m,2))\n pprint(scores)\n print",
"_____no_output_____"
],
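[
"# NOTE (added): example values only -- tasks, berts, domains and runs are not\n# defined anywhere in this notebook. The names below are hypothetical\n# placeholders matching the run/<bert>_<task>/<domain>/<run> layout used in\n# evaluate(); adjust them to your own experiment setup.\ntasks = ['ae', 'asc']\nberts = ['pt']\ndomains = ['laptop', 'rest']\nruns = 9",
"_____no_output_____"
],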
[
"evaluate(tasks, berts, domains, runs)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
e72aca8112fe9a75b36de2d4179ce7640f0be5e6 | 87,353 | ipynb | Jupyter Notebook | cp1/cp1.ipynb | jet-code/multivariable-control-systems | 81b57d51a4dfc92964f989794f71d525af0359ff | [
"MIT"
] | null | null | null | cp1/cp1.ipynb | jet-code/multivariable-control-systems | 81b57d51a4dfc92964f989794f71d525af0359ff | [
"MIT"
] | null | null | null | cp1/cp1.ipynb | jet-code/multivariable-control-systems | 81b57d51a4dfc92964f989794f71d525af0359ff | [
"MIT"
] | null | null | null | 28.151144 | 91 | 0.346262 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72ae46bff2f5fd79debd9016ebf46a94d9b4ec6 | 11,502 | ipynb | Jupyter Notebook | cirq-core/cirq/contrib/quimb/Cirq-to-Tensor-Networks.ipynb | anonymousr007/Cirq | fae0d85f79440e046ef365b58d86605ce35d4626 | [
"Apache-2.0"
] | 1 | 2022-02-02T07:13:54.000Z | 2022-02-02T07:13:54.000Z | cirq-core/cirq/contrib/quimb/Cirq-to-Tensor-Networks.ipynb | anonymousr007/Cirq | fae0d85f79440e046ef365b58d86605ce35d4626 | [
"Apache-2.0"
] | null | null | null | cirq-core/cirq/contrib/quimb/Cirq-to-Tensor-Networks.ipynb | anonymousr007/Cirq | fae0d85f79440e046ef365b58d86605ce35d4626 | [
"Apache-2.0"
] | null | null | null | 29.341837 | 459 | 0.569292 | [
[
[
"import importlib.util\n\ntry:\n import cirq\nexcept ImportError:\n print(\"installing cirq...\")\n !pip install --quiet cirq --pre\n print(\"installed cirq.\")\n\ntry:\n import quimb\nexcept ImportError:\n print(\"installing cirq-core[contrib]...\")\n !pip install --quiet cirq-core[contrib]\n print(\"installed cirq-core[contrib].\")\n ",
"_____no_output_____"
]
],
[
[
"# Cirq to Tensor Networks\n\nHere we demonstrate turning circuits into tensor network representations of the circuit's unitary, final state vector, final density matrix, and final noisy density matrix. ",
"_____no_output_____"
],
[
"### Imports",
"_____no_output_____"
]
],
[
[
"import cirq\nimport numpy as np\nimport pandas as pd\nfrom cirq.contrib.svg import SVGCircuit",
"_____no_output_____"
],
[
"import cirq.contrib.quimb as ccq\nimport quimb\nimport quimb.tensor as qtn",
"_____no_output_____"
]
],
[
[
"### Create a random circuit",
"_____no_output_____"
]
],
[
[
"qubits = cirq.LineQubit.range(3)\ncircuit = cirq.testing.random_circuit(qubits, n_moments=10, op_density=0.8, random_state=52)\ncircuit = cirq.drop_empty_moments(circuit)\nSVGCircuit(circuit)",
"_____no_output_____"
]
],
[
[
"### Circuit to Tensors\nThe circuit defines a tensor network representation. By default, the initial state is the `|0...0>` state (represented by the \"zero qubit\" operations labeled \"Q0\" in the legend. \"Q1\" are single qubit operations and \"Q2\" are two qubit operations. The open legs are the indices into the state vector and are of the form \"i{m}_q{n}\" where `m` is the time index (given by the returned `qubit_frontier` dictionary) and \"n\" is the qubit string.\n\nNote: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via `pip install cirq --pre`.",
"_____no_output_____"
]
],
[
[
"tensors, qubit_frontier, fix = ccq.circuit_to_tensors(circuit, qubits)\ntn = qtn.TensorNetwork(tensors)\nprint(qubit_frontier)\nfrom matplotlib import pyplot as plt\ntn.graph(fix=fix, color=['Q0', 'Q1', 'Q2'], figsize=(8,8))",
"_____no_output_____"
]
],
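[
[
"As a quick check of the index naming described above, we can list the open (dangling) indices of the network; assuming quimb's `outer_inds()` API, each should follow the `i{m}_q{n}` pattern, with `m` matching the values in `qubit_frontier`.",
"_____no_output_____"
]
],
[
[
"# Sketch: list the dangling indices of the network (assumes quimb's outer_inds()).\n# Each open index should look like i{m}_q{n}, matching qubit_frontier above.\nprint(sorted(tn.outer_inds()))",
"_____no_output_____"
]
],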
[
[
"### To dense",
"_____no_output_____"
]
],
[
[
"psi_tn = ccq.tensor_state_vector(circuit, qubits)\npsi_cirq = cirq.final_state_vector(circuit, qubit_order=qubits)\nnp.testing.assert_allclose(psi_cirq, psi_tn, atol=1e-7)",
"_____no_output_____"
]
],
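[
[
"As a quick sanity check (a minimal sketch using only numpy): since the circuit is unitary and the initial state is normalized, the dense state vector should have unit norm.",
"_____no_output_____"
]
],
[
[
"# The final state of a unitary circuit must remain normalized.\nprint('norm of psi:', np.linalg.norm(psi_tn))",
"_____no_output_____"
]
],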
[
[
"### Circuit Unitary\nWe can also leave the input legs open which gives a tensor network representation of the unitary",
"_____no_output_____"
]
],
[
[
"tensors, qubit_frontier, fix = ccq.circuit_to_tensors(circuit, qubits, initial_state=None)\ntn = qtn.TensorNetwork(tensors)\nprint(qubit_frontier)\ntn.graph(fix=fix, color=['Q0', 'Q1', 'Q2'], figsize=(8, 8))",
"_____no_output_____"
]
],
[
[
"### To dense",
"_____no_output_____"
]
],
[
[
"u_tn = ccq.tensor_unitary(circuit, qubits)\nu_cirq = circuit.unitary(qubit_order=qubits)\nnp.testing.assert_allclose(u_cirq, u_tn, atol=1e-7)",
"_____no_output_____"
]
],
[
[
"### Density Matrix\nWe can also turn a circuit into its density matrix. The density matrix resulting from the evolution of the `|0><0|` initial state can be thought of as two copies of the circuit: one going \"forwards\" and one going \"backwards\" (i.e. use the complex conjugate of each operation). Kraus operator noise operations \"link\" the forwards and backwards circuits. As such, the density matrix for pure states is simple.\n\nNote: for density matrices, we return a `fix` variable for a circuit-like layout of the tensors when calling `tn.graph`.",
"_____no_output_____"
]
],
[
[
"tensors, qubit_frontier, fix = ccq.circuit_to_density_matrix_tensors(circuit=circuit, qubits=qubits)\ntn = qtn.TensorNetwork(tensors)\ntn.graph(fix=fix, color=['Q0', 'Q1', 'Q2'])",
"_____no_output_____"
]
],
[
[
"### Noise\nNoise operations entangle the forwards and backwards evolutions. The new tensors labeled \"kQ1\" are 1-qubit Kraus operators.",
"_____no_output_____"
]
],
[
[
"noise_model = cirq.ConstantQubitNoiseModel(cirq.DepolarizingChannel(p=1e-3))\ncircuit = cirq.Circuit(noise_model.noisy_moments(circuit.moments, qubits))\nSVGCircuit(circuit)",
"_____no_output_____"
],
[
"tensors, qubit_frontier, fix = ccq.circuit_to_density_matrix_tensors(circuit=circuit, qubits=qubits)\ntn = qtn.TensorNetwork(tensors)\ntn.graph(fix=fix, color=['Q0', 'Q1', 'Q2', 'kQ1'], figsize=(8,8))",
"_____no_output_____"
]
],
[
[
"### For 6 or fewer qubits, we specify the contraction ordering.\nFor low-qubit-number circuits, a reasonable contraction ordering is to go in moment order (as a normal simulator would do). Otherwise, quimb will try to find an optimal ordering which was observed to take longer than it takes to do the contraction itself. We show how to tell quimb to contract in order by using the moment tags.",
"_____no_output_____"
]
],
[
[
"partial = 12\ntags_seq = [(f'i{i}b', f'i{i}f') for i in range(partial)]\ntn.graph(fix=fix, color = [x for x, _ in tags_seq] + [y for _, y in tags_seq], figsize=(8, 8))",
"_____no_output_____"
]
],
[
[
"### The result of a partial contraction",
"_____no_output_____"
]
],
[
[
"tn2 = tn.contract_cumulative(tags_seq, inplace=False)\ntn2.graph(fix=fix, color=['Q0', 'Q1', 'Q2', 'kQ1'], figsize=(8, 8))",
"_____no_output_____"
]
],
[
[
"### To Dense",
"_____no_output_____"
]
],
[
[
"rho_tn = ccq.tensor_density_matrix(circuit, qubits)\nrho_cirq = cirq.final_density_matrix(circuit, qubit_order=qubits)\nnp.testing.assert_allclose(rho_cirq, rho_tn, atol=1e-5)",
"_____no_output_____"
]
],
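[
[
"Two standard density-matrix sanity checks (a minimal numpy sketch): even with the depolarizing noise, the trace should be 1 and the matrix should be Hermitian.",
"_____no_output_____"
]
],
[
[
"# A valid density matrix has unit trace and is Hermitian.\nprint('trace:', np.trace(rho_tn))\nprint('Hermitian:', np.allclose(rho_tn, rho_tn.conj().T))",
"_____no_output_____"
]
],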
[
[
"## Profile\nFor low-qubit-number, deep, noisy circuits, the quimb contraction is faster.",
"_____no_output_____"
]
],
[
[
"import timeit\n\ndef profile(n_qubits: int, n_moments: int):\n qubits = cirq.LineQubit.range(n_qubits)\n circuit = cirq.testing.random_circuit(qubits, n_moments=n_moments, op_density=0.8)\n noise_model = cirq.ConstantQubitNoiseModel(cirq.DepolarizingChannel(p=1e-3))\n circuit = cirq.Circuit(noise_model.noisy_moments(circuit.moments, qubits))\n circuit = cirq.drop_empty_moments(circuit)\n n_moments = len(circuit)\n variables = {'circuit': circuit, 'qubits': qubits}\n\n setup1 = [\n 'import cirq',\n 'import numpy as np',\n ]\n n_call_cs, duration_cs = timeit.Timer(\n stmt='cirq.final_density_matrix(circuit)',\n setup='; '.join(setup1),\n globals=variables).autorange()\n\n setup2 = [\n 'from cirq.contrib.quimb import tensor_density_matrix',\n 'import numpy as np',\n ]\n n_call_t, duration_t = timeit.Timer(\n stmt='tensor_density_matrix(circuit, qubits)',\n setup='; '.join(setup2),\n globals=variables).autorange()\n\n return {\n 'n_qubits': n_qubits,\n 'n_moments': n_moments,\n 'duration_cirq': duration_cs,\n 'duration_quimb': duration_t,\n 'n_call_cirq': n_call_cs,\n 'n_call_quimb': n_call_t,\n }",
"_____no_output_____"
],
[
"records = []\nmax_qubits = 6\nmax_moments = 500\nfor n_qubits in [3, max_qubits]:\n for n_moments in range(1, max_moments, 50):\n record = profile(n_qubits=n_qubits, n_moments=n_moments)\n records.append(record)\n print('.', end='', flush=True)\n\ndf = pd.DataFrame(records)\ndf.head()",
"_____no_output_____"
],
[
"def select(df, k, v):\n return df[df[k] == v].drop(k, axis=1)\n\npd.DataFrame.select = select\n\ndef plot1(df, labelfmt):\n for k in ['duration_cirq', 'duration_quimb']:\n plt.plot(df['n_moments'], df[k], '.-', label=labelfmt.format(k))\n plt.legend(loc='best')\n\n\ndef plot(df):\n df['duration_cirq'] /= df['n_call_cirq']\n df['duration_quimb'] /= df['n_call_quimb']\n plot1(df.select('n_qubits', 3), 'n = 3, {}')\n plot1(df.select('n_qubits', 6), 'n = 6, {}')\n plt.xlabel('N Moments')\n plt.ylabel('Time / s')\n \nplot(df)\nplt.tight_layout()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e72aec6a5a07dfe0c50b9c84efcfb3566cef3486 | 5,930 | ipynb | Jupyter Notebook | Exploratory Analysis/.ipynb_checkpoints/feature_selection - Jake-checkpoint.ipynb | georgetown-analytics/Data-Oriented-Proposal-Engine | 8548c311c949f790ccd6699a16970cfe7c50d52a | [
"MIT"
] | 1 | 2021-02-20T22:00:04.000Z | 2021-02-20T22:00:04.000Z | Exploratory Analysis/feature_selection - Jake.ipynb | jharmon96/Data-Oriented-Proposal-Engine | f013ac0b48d0bbe06c70c737cf5cd7a8cc9075ac | [
"MIT"
] | 3 | 2019-11-02T18:20:55.000Z | 2019-11-16T18:18:13.000Z | Exploratory Analysis/feature_selection - Jake.ipynb | jharmon96/Data-Oriented-Proposal-Engine | f013ac0b48d0bbe06c70c737cf5cd7a8cc9075ac | [
"MIT"
] | 5 | 2019-11-02T17:57:41.000Z | 2020-03-04T01:37:38.000Z | 28.647343 | 123 | 0.588364 | [
[
[
"# Jake's original attempt to whittle data down in python. We did a lot of data wrangling in SQL\n# before getting to this point. \n\n# Import libraries\n\nimport psycopg2\nimport numpy as np\nimport pandas as pd\nfrom functools import reduce\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import confusion_matrix\nnp.random.seed(123)",
"_____no_output_____"
],
[
"conn = psycopg2.connect(database='usaspending', user='postgres', password='gtown2019', host='127.0.0.1', port='5432')",
"_____no_output_____"
],
[
"tbl_name = 'consolidated_data_2_filtered'",
"_____no_output_____"
],
[
"df = pd.read_sql_query('SELECT * FROM ' + tbl_name, con=conn)\ndf = df[:5]\n\n# Remove duplicative columns\n\ndel df['modification_number'] # SQL filter = '0'\ndel df['awarding_agency_code'] # SQL filter = '97'\ndel df['primary_place_of_performance_country_code'] # SQL filter = 'USA'\ndel df['performance_based_service_acquisition_code'] # SQL filter = 'Y'\n# Codes for column names\ndel df['awarding_sub_agency_code']\ndel df['awarding_office_code']\ndel df['funding_sub_agency_code']\ndel df['funding_office_code']\ndel df['recipient_duns']\ndel df['award_or_idv_flag']\ndel df['award_type_code']\ndel df['type_of_contract_pricing_code']\ndel df['product_or_service_code']\ndel df['dod_claimant_program_code']\ndel df['naics_code']\ndel df['extent_competed_code']\ndel df['solicitation_procedures_code']\ndel df['type_of_set_aside']\ndel df['performance_based_service_acquisition']\ndel df['multi_year_contract_code']\ndel df['contracting_officers_determination_of_business_size_code']\n\n# Remove \"manually identified\" unimportant columns\ndel df['action_date']\ndel df['funding_office_name']\ndel df['multi_year_contract']\ndel df['solicitation_procedures']\n\n# Remove \"future\" data elements\ndel df['recipient_name']\ndel df['recipient_country_code']\ndel df['recipient_state_code']\ndel df['number_of_offers_received']\ndel df['extent_competed']\ndel df['contracting_officers_determination_of_business_size']\ndel df['number_of_employees']\ndel df['annual_revenue']\n\ndf.shape",
"_____no_output_____"
],
[
"# Create conditional column to determine if set aside is YES or NO\n\ndef set_aside(c):\n if c['type_of_set_aside_code'] == 'NONE':\n return 0\n else:\n return 1\n\ndef contract_value(c):\n if c['base_and_exercised_options_value'] > 0:\n return c['base_and_exercised_options_value']\n elif c['base_and_all_options_value'] > 0:\n return c['base_and_all_options_value']\n elif c['total_dollars_obligated'] > 0:\n return c['total_dollars_obligated']\n elif c['federal_action_obligation'] > 0:\n return c['federal_action_obligation'] \n else:\n return 0\n \ndf['set_aside'] = df.apply(set_aside, axis=1)\ndf['contract_value'] = df.apply(contract_value, axis=1)\n\ndel df['type_of_set_aside_code']\ndel df['base_and_exercised_options_value']\ndel df['base_and_all_options_value']\ndel df['total_dollars_obligated']\ndel df['federal_action_obligation']\n\ndf2 = df.dropna()\ndf2.shape",
"_____no_output_____"
],
[
"df3 = df2\ndel df3['contract_transaction_unique_key']\ndf3 = pd.get_dummies(df2)\ndf3.head()",
"_____no_output_____"
],
[
"# Pearson Correlation\nplt.figure(figsize=(12,10))\ncor = df3.corr()\nsns.heatmap(cor, annot=True, cmap=plt.cm.Reds)\nplt.show()",
"_____no_output_____"
],
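[
"# Added sketch: rank the absolute correlations with the target before applying\n# the 0.5 threshold in the next cell. Uses only the cor matrix computed above.\ncor['set_aside'].abs().sort_values(ascending=False).head(10)",
"_____no_output_____"
],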
[
"# Correlation with output variable\ncor_target = abs(cor['set_aside'])\n# Selecting highly correlated features\nrelevant_features = cor_target[cor_target>0.5]\nrelevant_features",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72b0b4b5f247542eaec6c1fdf9eacbf4da25e0d | 2,028 | ipynb | Jupyter Notebook | Notebooks/00-Menu.ipynb | research-licit/Hierarchical-Platooning | 70b17215fe2d848d3f262f8c7de33e92f9c453eb | [
"MIT"
] | 2 | 2019-11-18T05:09:51.000Z | 2020-01-13T13:06:36.000Z | Notebooks/00-Menu.ipynb | research-licit/Hierarchical-Platooning | 70b17215fe2d848d3f262f8c7de33e92f9c453eb | [
"MIT"
] | 1 | 2019-09-29T10:18:57.000Z | 2019-10-02T09:25:21.000Z | Notebooks/00-Menu.ipynb | research-licit/hierarchical-split-platooning | 70b17215fe2d848d3f262f8c7de33e92f9c453eb | [
"MIT"
] | 1 | 2018-06-25T09:55:55.000Z | 2018-06-25T09:55:55.000Z | 43.148936 | 256 | 0.679487 | [
[
[
"# A Hierarchical Approach For Splitting Truck Plattoons Near Network Discontinuities\n\nAurelien Duret, Meng Wang, Andres Ladino\n\nThis is a guide to recover results presented in *\"A hierarchical approach for splitting truck platoons near network discontinuities\"*. Results are presented in different Notebooks. Which should be able to run independently. \n\n***Note***: In order to run notebooks `01-Open-loop.ipynb`, `02-Tactical-strategy.ipynb` , `03-Operational-strategy.ipynb` a Symuvia version is required. Please do not hesitate to contact us in case you need support. You can recover results of the\npaper by runing the `04-Results.ipynb`\n\nThe notes are divided as follows: \n\n- [01-Open-loop.ipynb](01-Open-loop.ipynb): This notebook details on a platoon formation reaching a merging situation is presented. No controls strategies are proposed for the merge \n- [02-Tactical-strategy.ipynb](02-Tactical-strategy.ipynb): In order to solve the control problem two phases are presented in the research. This notebook presents computation for the *Tactical* decisions \n- [03-Operational-strategy.ipynb](03-Operational-strategy.ipynb): In this notebook the deployment of the control strategy over the platoon of vehicles is presented. \n- [04-Results](04-Results.ipynb): Presents the results shown in the publication. This notebook does not require simulation library. \n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
e72b0c58a1202ea9924a0f7a42e443ac34d3d6b3 | 26,662 | ipynb | Jupyter Notebook | mnist-dataset.ipynb | lanodburke/Emerging-Technologies-Project | 15f6db3cfeda5199d38c3ea85464f37fa49bcd63 | [
"MIT"
] | null | null | null | mnist-dataset.ipynb | lanodburke/Emerging-Technologies-Project | 15f6db3cfeda5199d38c3ea85464f37fa49bcd63 | [
"MIT"
] | null | null | null | mnist-dataset.ipynb | lanodburke/Emerging-Technologies-Project | 15f6db3cfeda5199d38c3ea85464f37fa49bcd63 | [
"MIT"
] | null | null | null | 45.037162 | 4,808 | 0.61237 | [
[
[
"# MNIST Dataset Notebook\nThe MNIST dataset is a dataset that is built up of hand written digits derived from the NIST dataset. It is used for people to derive machine learning models for pattern recognition from a real world set of data. \n\nThe MNIST data set is widely used for training image recognition classifiers. \n\nThe dataset consists of:\n- 60,000 training images\n- 10,000 test images\n\nThe dataset can be found on this website [here](http://yann.lecun.com/exdb/mnist/).",
"_____no_output_____"
]
],
[
[
"%%html\n<style>\n table {\n display: inline-block\n }\n</style>",
"_____no_output_____"
]
],
[
[
"## MNIST Dataset File format\n> All the integers in the files are stored in the MSB first (high endian) format used by most non-Intel processors. Users of Intel processors and other low-endian machines must flip the bytes of the header.\n\nThe MNIST dataset is stored with the IDX file format extension. The IDX file format is a simple format for vectors and multidimensional matrices of various numerical types. \n\nThe MNIST dataset consists of four files outlined below:\n\n| File | Description | \n| :------------ | ----------------- | \n| [train-images-idx3-ubyte.gz](http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz) | training set images (9912422 bytes) | \n| [train-labels-idx1-ubyte.gz](http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz) | training set labels (28881 bytes) |\n| [t10k-images-idx3-ubyte.gz](http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz) | test set images (1648877 bytes) | \n| [t10k-labels-idx1-ubyte.gz](http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz) | test set labels (4542 bytes) | \n\n> The first 5000 examples of the test set are taken from the original NIST training set. The last 5000 are taken from the original NIST test set. The first 5000 are cleaner and easier than the last 5000. \n\n### Training set label file (train-labels-idx1-ubyte)\nThe labels values are 0 to 9.\n\n| offset | type | value | description |\n| :------------ | -------------- | ------------------- | ------------------------ |\n| 0000 | 32 bit integer | 0x00000801(2049) | magic number (MSB first) |\n| 0004 | 32 bit integer | 60000 | number of items |\n| 0008 | unsigned byte | ?? | label |\n| 0009 | unsigned byte | ?? | label |\n\n### Training set image file (train-images-idx3-ubyte):\nPixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).\n\n| offset | type | value | description |\n| :------------ | -------------- | ------------------- | ------------------------ |\n| 0000 | 32 bit integer | 0x00000803(2051) | magic number |\n| 0004 | 32 bit integer | 60000 | number of images |\n| 0008 | 32 bit integer | 28 | number of rows |\n| 0012 | 32 bit integer | 28 | number of columns |\n| 0016 | unsigned byte | ?? | pixel |\n| 0017 | unsigned byte | ?? | pixel |\n| xxxx | unsigned byte | ?? | pixel |\n\n\n### Test set label file (t10k-labels-idx1-ubyte)\nThe labels values are 0 to 9.\n\n| offset | type | value | description |\n| :------------ | -------------- | ------------------- | ------------------------ |\n| 0000 | 32 bit integer | 0x00000801(2049 | magic number (MSB first) |\n| 0004 | 32 bit integer | 10000 | number of items |\n| 0008 | unsigned byte | ?? | label |\n| 0009 | unsigned byte | ?? | label |\n\n### Test set image file (t10k-images-idx3-ubyte):\nPixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black). \n\n| offset | type | value | description |\n| :------------ | -------------- | ------------------- | ------------------------ |\n| 0000 | 32 bit integer | 0x00000803(2051) | magic number |\n| 0004 | 32 bit integer | 10000 | number of images |\n| 0008 | 32 bit integer | 28 | number of rows |\n| 0012 | 32 bit integer | 28 | number of columns |\n| 0016 | unsigned byte | ?? | pixel |\n| 0017 | unsigned byte | ?? | pixel |\n| xxxx | unsigned byte | ?? | pixel |",
"_____no_output_____"
],
[
"## IDX File Format\nThe dataset is stored with the IDX file format, the full specification of the IDX file format can be found [here](http://www.fon.hum.uva.nl/praat/manual/IDX_file_format.html). \n\n> The IDX file format is a simple format for vectors and multidimensional matrices of various numerical types.\n\n### Example\nThe training and testing data of the MNIST database of handwritten digits at [mnist](http://yann.lecun.com/exdb/mnist/) is stored in compressed IDX formatted files.\n\nReading the uncompressed file train-images-idx3-ubyte available at [mnist](http://yann.lecun.com/exdb/mnist/) with 60000 images of 28×28 pixel data, will result in a new Matrix object with 60000 rows and 784 (=28×28) columns. \n\nEach cell will contain a number in the interval from 0 to 255.\n\nReading the uncompressed file train-labels-idx1-ubyte with 60000 labels will result in a new Matrix object with 1 row and 60000 columns. Each cell will contain a number in the interval from 0 to 9.\n",
"_____no_output_____"
],
[
"## Reading Dataset from file\nTo read the dataset into memory we will use a widely supported python package called gzip to convert the file into bytes. ",
"_____no_output_____"
],
[
"### Big Endian, Little Endian\nIn the MNIST dataset specifcation as seen above, it states that the vectors are stored in big endian format. I will briefly explain what big endian means and why it is relavant to understand how it works when working with this dataset.\n\n#### Big Endian byte order\nThe most significant byte (the \"big end\") of the data is placed at the byte with the lowest address. The rest of the data is placed in order in the next three bytes in memory.\n\n#### Little Endian byte order\nThe least significant byte (the \"little end\") of the data is placed at the byte with the lowest address. The rest of the data is placed in order in the next three bytes in memory.\n\nFor this dataset as we are using a machine with an Intel CPU we need to convert the dataset stored with big endian byter order to little endian. We need to do this otherwise we would not be able to manipulate the dataset.",
"_____no_output_____"
]
],
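[
[
"Before reading the real file, here is a minimal stdlib-only illustration of why byte order matters: the same four header bytes decode to the expected magic number only when interpreted as big endian.",
"_____no_output_____"
]
],
[
[
"# The first four bytes of the image files hold the magic number 2051 (0x00000803).\nexample = bytes([0x00, 0x00, 0x08, 0x03])\nprint('big endian:   ', int.from_bytes(example, byteorder='big'))     # 2051\nprint('little endian:', int.from_bytes(example, byteorder='little'))  # 50855936 (wrong for this file)",
"_____no_output_____"
]
],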
[
[
"import gzip \n\n# Open zip file with gzip\nwith gzip.open('data/t10k-images-idx3-ubyte.gz', 'rb') as f:\n file_content = f.read()\n \n# Read in first 4 bytes which we know is the magic number\n# Convert bytes to int with big endian byte order (most intel CPUs are big endian)\nmagic = int.from_bytes(file_content[0:4], byteorder=\"big\")\nprint(\"Magic number: \", magic)\n\n# Read in number of images\nimages = int.from_bytes(file_content[4:8], byteorder=\"big\")\nprint(\"Number of images: \", images)\n\n# Read in number of rows\nrows = int.from_bytes(file_content[8:12], byteorder=\"big\")\nprint(\"Number of rows: \", rows)\n\n# Read in number of columns\ncols = int.from_bytes(file_content[12:16], byteorder=\"big\")\nprint(\"Number of columns: \", cols)",
"Magic number: 2051\nNumber of images: 10000\nNumber of rows: 28\nNumber of columns: 28\n"
]
],
[
[
"## Display an Image\nWe will use the matplotlib.pyplot package and the numpy package to display the first image in the dataset. The first image in the dataset is known to be a 7. We can see this by the plot below but can also validate this when we read in the labels from the labels file.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# flip the bytes from black/white with tilda(~).\nimage = ~np.array(list(file_content[16:800])).reshape(28,28).astype(np.uint8)\n\n# use the imshow method and map the image to gray scale\nplt.imshow(image, cmap='gray')",
"_____no_output_____"
]
],
[
[
"## Read labels",
"_____no_output_____"
]
],
[
[
"with gzip.open('data/t10k-labels-idx1-ubyte.gz', 'rb') as f:\n labels = f.read()\n\nmagic = int.from_bytes(labels[0:4], byteorder=\"big\")\nprint(\"Magic number is: \", magic)\n\n# Read in number of labels\nnum_labels = int.from_bytes(labels[4:8], byteorder=\"big\")\nprint(\"Number of labels: \", num_labels)\n\n# Finally read in the first label and output to console\nfirst_label = int.from_bytes(labels[8:9], byteorder=\"big\")\nprint(\"First label: \", first_label)",
"Magic number is: 2049\nNumber of labels: 10000\nFirst label: 7\n"
]
],
[
[
"### Read entire dataset into memory\nTo read the dataset into memory we will use gzip as described above. We will then reshape the files into a format that can be used by our model by normalizing the inputs and one-hot encoding the outputs (labels).",
"_____no_output_____"
]
],
[
[
"import gzip\n\n# Read in entire training set with gzip\nwith gzip.open('data/train-images-idx3-ubyte.gz', 'rb') as f:\n train_img = f.read()\n\n# Read in training labels\nwith gzip.open('data/train-labels-idx1-ubyte.gz', 'rb') as f:\n train_labels = f.read()\n\n# Read in training images\n# Convert images from black background and white foreground to white background and white foreground with tilda(~)\n# Reshape the array to 28 * 28 with numpy convert to unsigned 8 bit integer\ntrain_img = ~np.array(list(train_img[16:])).reshape(60000,28,28).astype(np.uint8)\n\n# Read in training labels\n# Convert to unsigned 8 bit integer\ntrain_labels = np.array(list(train_labels[8:])).astype(np.uint8)",
"_____no_output_____"
]
],
[
[
"## Build a simple model\nWe will build a simple linear model with keras. The model will have an input layer of 784 (number of pixels in a single image) represents an image with dimensions 28 x 28. We will read in all 784 pixels as we reshaped the input array into a two dimensional vector with the dimesionsions 60,000 x 784.\n\n- The first layer will also has a hidden layer with 1,000 neurons and uses the relu activation function.\n- The output layer will have 10 outputs for 0-9. It also uses the softmax function.\n- Compile the model with categorical cross entropy and use the adam optimzer, set the metrics to accuracy.",
"_____no_output_____"
]
],
[
[
"# For building neural networks.\nimport keras as kr\n\n# For interacting with data sets.\nimport pandas as pd\n\n# For encoding categorical variables.\nimport sklearn.preprocessing as pre\n\n# For splitting into training and test sets.\nimport sklearn.model_selection as mod\n\n# Use keras Sequential model\nmodel = kr.models.Sequential()\n\n# hidden layer with 1000 neurons and 784 input neurons\n# 784 pixels per image in MNIST\nmodel.add(kr.layers.Dense(units=1000, activation=\"relu\", input_dim=784))\n\n# 10 output layers for numbers 0-9 \nmodel.add(kr.layers.Dense(units=10, activation=\"softmax\"))\n\n# Use categorical crossentropy as loss function\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])",
"_____no_output_____"
],
[
"inputs = train_img.reshape(60000, 784)",
"_____no_output_____"
]
],
[
[
"### One Hot encoding\nOne Hot encoding is a pre proccessing process for data that is used for ML algorithms. It is mainly used for categorical data such as the MNIST dataset. One hot encoding is used because a majority of ML algorithms as they cannot use label data and they can only read data in a numeric format.\n\nIn this example we will use One Hot encoding on the output data (the labels).\n\nThe format of the output will look something like this if the label value is 5:\n\n[0, 0, 0, 0, 0, 1, 0, 0, 0, 0] \n\nThe digit 1 represents the label 5, all other positions in the array are set to 0. This is to indicate that they are not the label value. The label is the position where the 1 is located in the array.",
"_____no_output_____"
]
],
[
[
"# Encode the classes as above.\nencoder = pre.LabelBinarizer()\nencoder.fit(train_labels)\n# One hot encode output labels\noutputs = encoder.transform(train_labels)\n# first digit is 5, represented by the 6th position in array with value 1\nprint(train_labels[0], outputs[0])",
"5 [0 0 0 0 0 1 0 0 0 0]\n"
]
],
[
[
"### Example of One hot encoding",
"_____no_output_____"
]
],
[
[
"# Example of one-hot encoding values from 1-10 \nfor i in range(10):\n print(i, encoder.transform([i]))",
"0 [[1 0 0 0 0 0 0 0 0 0]]\n1 [[0 1 0 0 0 0 0 0 0 0]]\n2 [[0 0 1 0 0 0 0 0 0 0]]\n3 [[0 0 0 1 0 0 0 0 0 0]]\n4 [[0 0 0 0 1 0 0 0 0 0]]\n5 [[0 0 0 0 0 1 0 0 0 0]]\n6 [[0 0 0 0 0 0 1 0 0 0]]\n7 [[0 0 0 0 0 0 0 1 0 0]]\n8 [[0 0 0 0 0 0 0 0 1 0]]\n9 [[0 0 0 0 0 0 0 0 0 1]]\n"
]
],
[
[
"### Fit the model",
"_____no_output_____"
]
],
[
[
"# Fit the model with inputs and outputs set number of epochs to 1 and batch size to 100\nmodel.fit(inputs, outputs, epochs=10, batch_size=200)",
"Epoch 1/10\n60000/60000 [==============================] - 10s 172us/step - loss: 14.5262 - acc: 0.0988\nEpoch 2/10\n60000/60000 [==============================] - 9s 144us/step - loss: 14.5270 - acc: 0.0987\nEpoch 3/10\n60000/60000 [==============================] - 9s 149us/step - loss: 14.5270 - acc: 0.0987\nEpoch 4/10\n60000/60000 [==============================] - 9s 148us/step - loss: 14.5270 - acc: 0.0987\nEpoch 5/10\n60000/60000 [==============================] - 9s 154us/step - loss: 14.5270 - acc: 0.0987\nEpoch 6/10\n60000/60000 [==============================] - 9s 155us/step - loss: 14.5270 - acc: 0.0987\nEpoch 7/10\n60000/60000 [==============================] - 10s 173us/step - loss: 14.5270 - acc: 0.0987\nEpoch 8/10\n60000/60000 [==============================] - 11s 179us/step - loss: 14.5270 - acc: 0.0987\nEpoch 9/10\n60000/60000 [==============================] - 9s 156us/step - loss: 14.5270 - acc: 0.0987\nEpoch 10/10\n60000/60000 [==============================] - 9s 151us/step - loss: 14.5270 - acc: 0.0987\n"
]
],
[
[
"## Test model\nTo test the model that we created we will read in the test images and labels files with gzip.",
"_____no_output_____"
]
],
[
[
"with gzip.open('data/t10k-images-idx3-ubyte.gz', 'rb') as f:\n test_img = f.read()\n\nwith gzip.open('data/t10k-labels-idx1-ubyte.gz', 'rb') as f:\n test_lbl = f.read()\n \ntest_img = ~np.array(list(test_img[16:])).reshape(10000, 784).astype(np.uint8) / 255.0\ntest_lbl = np.array(list(test_lbl[ 8:])).astype(np.uint8)",
"_____no_output_____"
]
],
[
[
"### Model accuracy",
"_____no_output_____"
]
],
[
[
"acc = (encoder.inverse_transform(model.predict(test_img)) == test_lbl).sum()\nprint(\"Accuracy: %.0f%%\" % (acc / 100))",
"Accuracy: 10%\n"
]
],
[
[
"## Model evaluation\nAs we can see from above the model has an accuracy of 10%. This is the same accuarcy that a person would have if they were to try and guess a digit from the dataset. \n\nFor the next part of this project I am going to build a covolutional nerual network to try and achieve a much higher accuarcy.",
"_____no_output_____"
],
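[
"Below is a minimal sketch (an editorial suggestion; it was not part of the run above) of the input scaling that would make the training preprocessing match the test preprocessing. The model would need to be re-compiled and re-fit on the scaled inputs for this to take effect.",
"_____no_output_____"
],
[
"# Sketch only: scale the training pixels to [0, 1] so they match the test set,\n# which was divided by 255.0 above. The model trained earlier used unscaled\n# inputs, so this cell alone changes nothing; re-fit on inputs_scaled to use it.\ninputs_scaled = train_img.reshape(60000, 784) / 255.0",
"_____no_output_____"
],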
[
"## References\n- [MNIST](http://yann.lecun.com/exdb/mnist/)\n- [MNIST classification](https://towardsdatascience.com/image-classification-in-10-minutes-with-mnist-dataset-54c35b77a38d)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e72b1f8cffe32cd95791b8cb1add8330b32548a2 | 2,393 | ipynb | Jupyter Notebook | GUI.ipynb | erickaalgr/OOP-1-1 | cdf80e78adfbc59758ce1c2ce8a9c4e70b18ed94 | [
"Apache-2.0"
] | null | null | null | GUI.ipynb | erickaalgr/OOP-1-1 | cdf80e78adfbc59758ce1c2ce8a9c4e70b18ed94 | [
"Apache-2.0"
] | null | null | null | GUI.ipynb | erickaalgr/OOP-1-1 | cdf80e78adfbc59758ce1c2ce8a9c4e70b18ed94 | [
"Apache-2.0"
] | null | null | null | 28.831325 | 218 | 0.481822 | [
[
[
"<a href=\"https://colab.research.google.com/github/erickaalgr/OOP-1-1/blob/main/GUI.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#@title Students'Grade in OOP\nstudent_name = \"Ericka Jane A. Alegre\" #@param {type:\"string\"}\nprelim = 96#@param {type:'number'}\nmidterm = 98#@param {type:'number'}\nfinal = 98#@param {type:'number'}\nsemestral_grade = round((prelim + midterm+ final)/3,2)\n\nprint(\"My prelim grade is \" + str(prelim))\nprint(\"My midterm grade is \" + str(midterm))\nprint(\"My final grade is \" + str(final))\nprint(\"My semestral grade is: \" + str(semestral_grade))\n\nGender= \"Female\" #@param [\"Male\",\"Female\"]\nBirth_Date= '2003-11-24' #@param {type: \"date\"}\n\nprint(\"My birthdate is: \" + Birth_Date)",
"My prelim grade is 96\nMy midterm grade is 98\nMy final grade is 98\nMy semestral grade is: 97.33\nMy birthdate is: 2003-11-24\n"
]
],
[
[
"Tkinter module - pycharm",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e72b2b22682bf06f3ecec72d3b33fedb21ac8700 | 3,884 | ipynb | Jupyter Notebook | Nodejs-Examples/ibm_db-createDbSync.ipynb | ibmdb/jupyter-node-ibm_db | 588bf32955237aa2829aa796a9ce10865a5dfa99 | [
"Apache-2.0"
] | null | null | null | Nodejs-Examples/ibm_db-createDbSync.ipynb | ibmdb/jupyter-node-ibm_db | 588bf32955237aa2829aa796a9ce10865a5dfa99 | [
"Apache-2.0"
] | 1 | 2019-06-12T10:23:03.000Z | 2019-06-12T10:23:03.000Z | Nodejs-Examples/ibm_db-createDbSync.ipynb | ibmdb/jupyter-node-ibm_db | 588bf32955237aa2829aa796a9ce10865a5dfa99 | [
"Apache-2.0"
] | 2 | 2019-11-03T17:23:38.000Z | 2021-12-28T11:00:46.000Z | 70.618182 | 96 | 0.412461 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72b2f0d7bfe1668f4c72265e47af421bd87dd2f | 91,405 | ipynb | Jupyter Notebook | Train.ipynb | MrDemeanor/AerialPipeline | f12249b68a5877f3d080fd0800035a9f0a5789c7 | [
"Apache-2.0"
] | null | null | null | Train.ipynb | MrDemeanor/AerialPipeline | f12249b68a5877f3d080fd0800035a9f0a5789c7 | [
"Apache-2.0"
] | 10 | 2020-01-28T22:13:33.000Z | 2022-03-11T23:55:40.000Z | Train.ipynb | MrDemeanor/AerialPipeline | f12249b68a5877f3d080fd0800035a9f0a5789c7 | [
"Apache-2.0"
] | 2 | 2019-09-09T19:05:29.000Z | 2021-12-26T01:00:06.000Z | 62.735072 | 10,380 | 0.625589 | [
[
[
"## Import Dependencies",
"_____no_output_____"
]
],
[
[
"# COCO related libraries\nfrom samples.coco import coco\n\n# MaskRCNN libraries\nfrom mrcnn.config import Config\nimport mrcnn.utils as utils\nfrom mrcnn import visualize\nimport mrcnn.model as modellib\n\n# Misc\nimport os\nimport sys\nimport json\nimport numpy as np\nimport time\nfrom PIL import Image, ImageDraw",
"Using TensorFlow backend.\n"
]
],
[
[
"## Constants",
"_____no_output_____"
]
],
[
[
"# Number of classes in dataset. Must be of type integer\nNUM_CLASSES = 3\n\n# Relative path to .h5 weights file\nWEIGHTS_FILE = None\n\n# Relative path to annotations JSON file\nTRAIN_ANNOTATIONS_FILE = \"datasets/Downtown_Sliced/train/annotations_split.json\"\n\n# Relative path to directory of images that pertain to annotations file\nTRAIN_ANNOTATION_IMAGE_DIR = 'datasets/Downtown_Sliced/train/images'\n\n# Relative path to annotations JSON file\nVALIDATION_ANNOTATIONS_FILE = \"datasets/Downtown_Sliced/validation/annotations_split.json\"\n\n# Relative path to directory of images that pertain to annotations file\nVALIDATION_ANNOTATION_IMAGE_DIR = 'datasets/Downtown_Sliced/validation/images'\n\n# Number of epochs to train dataset on\nNUM_EPOCHS = 80\n\nMODEL_NAME = \"model_2\"",
"_____no_output_____"
]
],
[
[
"## Additional setup",
"_____no_output_____"
]
],
[
[
"# Set the ROOT_DIR variable to the root directory of the Mask_RCNN git repo\nROOT_DIR = os.getcwd()\n\n# Directory to save logs and trained model\nMODEL_DIR = os.path.join(ROOT_DIR, \"logs\")\n\n# Select which GPU to use\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\";\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"; ",
"_____no_output_____"
]
],
[
[
"## Declare training configuration",
"_____no_output_____"
]
],
[
[
"class TrainConfig(coco.CocoConfig):\n \"\"\"\n \"\"\"\n # Give the configuration a recognizable name\n NAME = MODEL_NAME\n\n # Train on 1 image per GPU. Batch size is 1 (GPUs * images/GPU).\n GPU_COUNT = 1\n IMAGES_PER_GPU = 1\n\n # Number of classes (including background)\n NUM_CLASSES = 1 + NUM_CLASSES\n\n # Min and max image dimensions\n IMAGE_MIN_DIM = 1152\n IMAGE_MAX_DIM = 1280\n\n # You can experiment with this number to see if it improves training\n STEPS_PER_EPOCH = 180\n\n # This is how often validation is run. If you are using too much hard drive space\n # on saved models (in the MODEL_DIR), try making this value larger.\n VALIDATION_STEPS = 50\n \n # Matterport originally used resnet101, but I downsized to fit it on my graphics card\n BACKBONE = 'resnet101'\n\n # To be honest, I haven't taken the time to figure out what these do\n RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)\n \n # Changed to 512 because that's how many the original MaskRCNN paper used\n TRAIN_ROIS_PER_IMAGE = 200\n MAX_GT_INSTANCES = 114\n POST_NMS_ROIS_INFERENCE = 1000 \n POST_NMS_ROIS_TRAINING = 2000 \n \n DETECTION_MAX_INSTANCES = 114\n DETECTION_MIN_CONFIDENCE = 0.1",
"_____no_output_____"
]
],
[
[
"## Display configuration",
"_____no_output_____"
]
],
[
[
"TrainConfig().display()",
"\nConfigurations:\nBACKBONE resnet101\nBACKBONE_STRIDES [4, 8, 16, 32, 64]\nBATCH_SIZE 1\nBBOX_STD_DEV [0.1 0.1 0.2 0.2]\nCOMPUTE_BACKBONE_SHAPE None\nDETECTION_MAX_INSTANCES 114\nDETECTION_MIN_CONFIDENCE 0.1\nDETECTION_NMS_THRESHOLD 0.3\nFPN_CLASSIF_FC_LAYERS_SIZE 1024\nGPU_COUNT 1\nGRADIENT_CLIP_NORM 5.0\nIMAGES_PER_GPU 1\nIMAGE_CHANNEL_COUNT 3\nIMAGE_MAX_DIM 1280\nIMAGE_META_SIZE 16\nIMAGE_MIN_DIM 1152\nIMAGE_MIN_SCALE 0\nIMAGE_RESIZE_MODE square\nIMAGE_SHAPE [1280 1280 3]\nLEARNING_MOMENTUM 0.9\nLEARNING_RATE 0.001\nLOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}\nMASK_POOL_SIZE 14\nMASK_SHAPE [28, 28]\nMAX_GT_INSTANCES 114\nMEAN_PIXEL [123.7 116.8 103.9]\nMINI_MASK_SHAPE (56, 56)\nNAME model_2\nNUM_CLASSES 4\nPOOL_SIZE 7\nPOST_NMS_ROIS_INFERENCE 1000\nPOST_NMS_ROIS_TRAINING 2000\nPRE_NMS_LIMIT 6000\nROI_POSITIVE_RATIO 0.33\nRPN_ANCHOR_RATIOS [0.5, 1, 2]\nRPN_ANCHOR_SCALES (32, 64, 128, 256, 512)\nRPN_ANCHOR_STRIDE 1\nRPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]\nRPN_NMS_THRESHOLD 0.7\nRPN_TRAIN_ANCHORS_PER_IMAGE 256\nSTEPS_PER_EPOCH 180\nTOP_DOWN_PYRAMID_SIZE 256\nTRAIN_BN False\nTRAIN_ROIS_PER_IMAGE 200\nUSE_MINI_MASK True\nUSE_RPN_ROIS True\nVALIDATION_STEPS 50\nWEIGHT_DECAY 0.0001\n\n\n"
]
],
[
[
"## Create class to load dataset",
"_____no_output_____"
]
],
[
[
"class CocoLikeDataset(utils.Dataset):\n \"\"\" Generates a COCO-like dataset, i.e. an image dataset annotated in the style of the COCO dataset.\n See http://cocodataset.org/#home for more information.\n \"\"\"\n def load_data(self, annotation_json, images_dir):\n \"\"\" Load the coco-like dataset from json\n Args:\n annotation_json: The path to the coco annotations json file\n images_dir: The directory holding the images referred to by the json file\n \"\"\"\n # Load json from file\n json_file = open(annotation_json)\n coco_json = json.load(json_file)\n json_file.close()\n \n # Add the class names using the base method from utils.Dataset\n source_name = \"coco_like\"\n for category in coco_json['categories']:\n class_id = category['id']\n class_name = category['name']\n if class_id < 1:\n print('Error: Class id for \"{}\" cannot be less than one. (0 is reserved for the background)'.format(class_name))\n return\n \n self.add_class(source_name, class_id, class_name)\n \n # Get all annotations\n annotations = {}\n for annotation in coco_json['annotations']:\n image_id = annotation['image_id']\n if image_id not in annotations:\n annotations[image_id] = []\n annotations[image_id].append(annotation)\n \n # Get all images and add them to the dataset\n seen_images = {}\n for image in coco_json['images']:\n image_id = image['id']\n if image_id in seen_images:\n print(\"Warning: Skipping duplicate image id: {}\".format(image))\n else:\n seen_images[image_id] = image\n try:\n image_file_name = image['file_name']\n image_width = image['width']\n image_height = image['height']\n except KeyError as key:\n print(\"Warning: Skipping image (id: {}) with missing key: {}\".format(image_id, key))\n \n image_path = os.path.abspath(os.path.join(images_dir, image_file_name))\n image_annotations = annotations[image_id]\n \n # Add the image using the base method from utils.Dataset\n self.add_image(\n source=source_name,\n image_id=image_id,\n path=image_path,\n width=image_width,\n height=image_height,\n annotations=image_annotations\n )\n \n def load_mask(self, image_id):\n \"\"\" Load instance masks for the given image.\n MaskRCNN expects masks in the form of a bitmap [height, width, instances].\n Args:\n image_id: The id of the image to load masks for\n Returns:\n masks: A bool array of shape [height, width, instance count] with\n one mask per instance.\n class_ids: a 1D array of class IDs of the instance masks.\n \"\"\"\n image_info = self.image_info[image_id]\n annotations = image_info['annotations']\n instance_masks = []\n class_ids = []\n \n for annotation in annotations:\n class_id = annotation['category_id']\n mask = Image.new('1', (image_info['width'], image_info['height']))\n mask_draw = ImageDraw.ImageDraw(mask, '1')\n for segmentation in annotation['segmentation']:\n mask_draw.polygon(segmentation, fill=1)\n bool_array = np.array(mask) > 0\n instance_masks.append(bool_array)\n class_ids.append(class_id)\n\n mask = np.dstack(instance_masks)\n class_ids = np.array(class_ids, dtype=np.int32)\n \n return mask, class_ids",
"_____no_output_____"
]
],
[
[
"## Load train and validation datasets",
"_____no_output_____"
]
],
[
[
"dataset_train = CocoLikeDataset()\ndataset_train.load_data(TRAIN_ANNOTATIONS_FILE, TRAIN_ANNOTATION_IMAGE_DIR)\ndataset_train.prepare()\n\ndataset_val = CocoLikeDataset()\ndataset_val.load_data(VALIDATION_ANNOTATIONS_FILE, VALIDATION_ANNOTATION_IMAGE_DIR)\ndataset_val.prepare()",
"_____no_output_____"
]
],
[
[
"## Build MaskRCNN Model",
"_____no_output_____"
]
],
[
[
"# Create model in training mode\nmodel = modellib.MaskRCNN(mode = \"training\", config = TrainConfig(), model_dir = MODEL_DIR)",
"WARNING: Logging before flag parsing goes to stderr.\nW0805 20:55:55.028046 139871543367488 deprecation_wrapper.py:119] From /home/venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:508: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nW0805 20:55:55.066967 139871543367488 deprecation_wrapper.py:119] From /home/venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:68: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n\nW0805 20:55:55.108044 139871543367488 deprecation_wrapper.py:119] From /home/venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3837: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n\nW0805 20:55:55.145821 139871543367488 deprecation_wrapper.py:119] From /home/venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3661: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.\n\nW0805 20:56:01.127417 139871543367488 deprecation_wrapper.py:119] From /home/venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1944: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.\n\nW0805 20:56:01.878015 139871543367488 deprecation.py:323] From /home/venv/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nW0805 20:56:02.051718 139871543367488 deprecation_wrapper.py:119] From /home/venv/src/mask-rcnn/mrcnn/model.py:553: The name tf.random_shuffle is deprecated. Please use tf.random.shuffle instead.\n\nW0805 20:56:02.144878 139871543367488 deprecation_wrapper.py:119] From /home/venv/src/mask-rcnn/mrcnn/utils.py:202: The name tf.log is deprecated. Please use tf.math.log instead.\n\nW0805 20:56:02.175616 139871543367488 deprecation.py:506] From /home/venv/src/mask-rcnn/mrcnn/model.py:600: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version.\nInstructions for updating:\nbox_ind is deprecated, use box_indices instead\n"
]
],
[
[
"## Load weights into model if weights file is not None\n### This is meant to be used if you are refining on a set of preexisting weights",
"_____no_output_____"
]
],
[
[
"if WEIGHTS_FILE is not None:\n model.load_weights(WEIGHTS_FILE, by_name = True)",
"_____no_output_____"
]
],
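[
[
"When fine-tuning from weights trained with a different number of classes (for example the public COCO weights), the class-specific head layers must be excluded, because their shapes depend on `NUM_CLASSES`. Below is a minimal sketch using the `exclude` argument of Matterport's `load_weights`; the layer names assume the standard Matterport implementation.",
"_____no_output_____"
]
],
[
[
"# Sketch: load backbone weights but skip the class-dependent head layers.\n# This is a no-op in this run because WEIGHTS_FILE is None.\nHEAD_LAYERS = ['mrcnn_class_logits', 'mrcnn_bbox_fc', 'mrcnn_bbox', 'mrcnn_mask']\nif WEIGHTS_FILE is not None:\n    model.load_weights(WEIGHTS_FILE, by_name=True, exclude=HEAD_LAYERS)",
"_____no_output_____"
]
],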
[
[
"## Train model\n### The model after each epoch will be saved in the logs folder",
"_____no_output_____"
]
],
[
[
"start_train = time.time()\nmodel.train(dataset_train, dataset_val, learning_rate = TrainConfig().LEARNING_RATE, epochs = NUM_EPOCHS, layers = 'all')\nend_train = time.time()\nminutes = round((end_train - start_train) / 60, 2)\nprint(f'Training took {minutes} minutes')",
"\nStarting at epoch 0. LR=0.001\n\nCheckpoint Path: /home/logs/model_220190805T2055/mask_rcnn_model_2_{epoch:04d}.h5\nSelecting layers to train\nconv1 (Conv2D)\nbn_conv1 (BatchNorm)\nres2a_branch2a (Conv2D)\nbn2a_branch2a (BatchNorm)\nres2a_branch2b (Conv2D)\nbn2a_branch2b (BatchNorm)\nres2a_branch2c (Conv2D)\nres2a_branch1 (Conv2D)\nbn2a_branch2c (BatchNorm)\nbn2a_branch1 (BatchNorm)\nres2b_branch2a (Conv2D)\nbn2b_branch2a (BatchNorm)\nres2b_branch2b (Conv2D)\nbn2b_branch2b (BatchNorm)\nres2b_branch2c (Conv2D)\nbn2b_branch2c (BatchNorm)\nres2c_branch2a (Conv2D)\nbn2c_branch2a (BatchNorm)\nres2c_branch2b (Conv2D)\nbn2c_branch2b (BatchNorm)\nres2c_branch2c (Conv2D)\nbn2c_branch2c (BatchNorm)\nres3a_branch2a (Conv2D)\nbn3a_branch2a (BatchNorm)\nres3a_branch2b (Conv2D)\nbn3a_branch2b (BatchNorm)\nres3a_branch2c (Conv2D)\nres3a_branch1 (Conv2D)\nbn3a_branch2c (BatchNorm)\nbn3a_branch1 (BatchNorm)\nres3b_branch2a (Conv2D)\nbn3b_branch2a (BatchNorm)\nres3b_branch2b (Conv2D)\nbn3b_branch2b (BatchNorm)\nres3b_branch2c (Conv2D)\nbn3b_branch2c (BatchNorm)\nres3c_branch2a (Conv2D)\nbn3c_branch2a (BatchNorm)\nres3c_branch2b (Conv2D)\nbn3c_branch2b (BatchNorm)\nres3c_branch2c (Conv2D)\nbn3c_branch2c (BatchNorm)\nres3d_branch2a (Conv2D)\nbn3d_branch2a (BatchNorm)\nres3d_branch2b (Conv2D)\nbn3d_branch2b (BatchNorm)\nres3d_branch2c (Conv2D)\nbn3d_branch2c (BatchNorm)\nres4a_branch2a (Conv2D)\nbn4a_branch2a (BatchNorm)\nres4a_branch2b (Conv2D)\nbn4a_branch2b (BatchNorm)\nres4a_branch2c (Conv2D)\nres4a_branch1 (Conv2D)\nbn4a_branch2c (BatchNorm)\nbn4a_branch1 (BatchNorm)\nres4b_branch2a (Conv2D)\nbn4b_branch2a (BatchNorm)\nres4b_branch2b (Conv2D)\nbn4b_branch2b (BatchNorm)\nres4b_branch2c (Conv2D)\nbn4b_branch2c (BatchNorm)\nres4c_branch2a (Conv2D)\nbn4c_branch2a (BatchNorm)\nres4c_branch2b (Conv2D)\nbn4c_branch2b (BatchNorm)\nres4c_branch2c (Conv2D)\nbn4c_branch2c (BatchNorm)\nres4d_branch2a (Conv2D)\nbn4d_branch2a (BatchNorm)\nres4d_branch2b (Conv2D)\nbn4d_branch2b (BatchNorm)\nres4d_branch2c (Conv2D)\nbn4d_branch2c (BatchNorm)\nres4e_branch2a (Conv2D)\nbn4e_branch2a (BatchNorm)\nres4e_branch2b (Conv2D)\nbn4e_branch2b (BatchNorm)\nres4e_branch2c (Conv2D)\nbn4e_branch2c (BatchNorm)\nres4f_branch2a (Conv2D)\nbn4f_branch2a (BatchNorm)\nres4f_branch2b (Conv2D)\nbn4f_branch2b (BatchNorm)\nres4f_branch2c (Conv2D)\nbn4f_branch2c (BatchNorm)\nres4g_branch2a (Conv2D)\nbn4g_branch2a (BatchNorm)\nres4g_branch2b (Conv2D)\nbn4g_branch2b (BatchNorm)\nres4g_branch2c (Conv2D)\nbn4g_branch2c (BatchNorm)\nres4h_branch2a (Conv2D)\nbn4h_branch2a (BatchNorm)\nres4h_branch2b (Conv2D)\nbn4h_branch2b (BatchNorm)\nres4h_branch2c (Conv2D)\nbn4h_branch2c (BatchNorm)\nres4i_branch2a (Conv2D)\nbn4i_branch2a (BatchNorm)\nres4i_branch2b (Conv2D)\nbn4i_branch2b (BatchNorm)\nres4i_branch2c (Conv2D)\nbn4i_branch2c (BatchNorm)\nres4j_branch2a (Conv2D)\nbn4j_branch2a (BatchNorm)\nres4j_branch2b (Conv2D)\nbn4j_branch2b (BatchNorm)\nres4j_branch2c (Conv2D)\nbn4j_branch2c (BatchNorm)\nres4k_branch2a (Conv2D)\nbn4k_branch2a (BatchNorm)\nres4k_branch2b (Conv2D)\nbn4k_branch2b (BatchNorm)\nres4k_branch2c (Conv2D)\nbn4k_branch2c (BatchNorm)\nres4l_branch2a (Conv2D)\nbn4l_branch2a (BatchNorm)\nres4l_branch2b (Conv2D)\nbn4l_branch2b (BatchNorm)\nres4l_branch2c (Conv2D)\nbn4l_branch2c (BatchNorm)\nres4m_branch2a (Conv2D)\nbn4m_branch2a (BatchNorm)\nres4m_branch2b (Conv2D)\nbn4m_branch2b (BatchNorm)\nres4m_branch2c (Conv2D)\nbn4m_branch2c (BatchNorm)\nres4n_branch2a (Conv2D)\nbn4n_branch2a (BatchNorm)\nres4n_branch2b (Conv2D)\nbn4n_branch2b 
(BatchNorm)\nres4n_branch2c (Conv2D)\nbn4n_branch2c (BatchNorm)\nres4o_branch2a (Conv2D)\nbn4o_branch2a (BatchNorm)\nres4o_branch2b (Conv2D)\nbn4o_branch2b (BatchNorm)\nres4o_branch2c (Conv2D)\nbn4o_branch2c (BatchNorm)\nres4p_branch2a (Conv2D)\nbn4p_branch2a (BatchNorm)\nres4p_branch2b (Conv2D)\nbn4p_branch2b (BatchNorm)\nres4p_branch2c (Conv2D)\nbn4p_branch2c (BatchNorm)\nres4q_branch2a (Conv2D)\nbn4q_branch2a (BatchNorm)\nres4q_branch2b (Conv2D)\nbn4q_branch2b (BatchNorm)\nres4q_branch2c (Conv2D)\nbn4q_branch2c (BatchNorm)\nres4r_branch2a (Conv2D)\nbn4r_branch2a (BatchNorm)\nres4r_branch2b (Conv2D)\nbn4r_branch2b (BatchNorm)\nres4r_branch2c (Conv2D)\nbn4r_branch2c (BatchNorm)\nres4s_branch2a (Conv2D)\nbn4s_branch2a (BatchNorm)\nres4s_branch2b (Conv2D)\nbn4s_branch2b (BatchNorm)\nres4s_branch2c (Conv2D)\nbn4s_branch2c (BatchNorm)\nres4t_branch2a (Conv2D)\nbn4t_branch2a (BatchNorm)\nres4t_branch2b (Conv2D)\nbn4t_branch2b (BatchNorm)\nres4t_branch2c (Conv2D)\nbn4t_branch2c (BatchNorm)\nres4u_branch2a (Conv2D)\nbn4u_branch2a (BatchNorm)\nres4u_branch2b (Conv2D)\nbn4u_branch2b (BatchNorm)\nres4u_branch2c (Conv2D)\nbn4u_branch2c (BatchNorm)\nres4v_branch2a (Conv2D)\nbn4v_branch2a (BatchNorm)\nres4v_branch2b (Conv2D)\nbn4v_branch2b (BatchNorm)\nres4v_branch2c (Conv2D)\nbn4v_branch2c (BatchNorm)\nres4w_branch2a (Conv2D)\nbn4w_branch2a (BatchNorm)\nres4w_branch2b (Conv2D)\nbn4w_branch2b (BatchNorm)\nres4w_branch2c (Conv2D)\nbn4w_branch2c (BatchNorm)\nres5a_branch2a (Conv2D)\nbn5a_branch2a (BatchNorm)\nres5a_branch2b (Conv2D)\nbn5a_branch2b (BatchNorm)\nres5a_branch2c (Conv2D)\nres5a_branch1 (Conv2D)\nbn5a_branch2c (BatchNorm)\nbn5a_branch1 (BatchNorm)\nres5b_branch2a (Conv2D)\nbn5b_branch2a (BatchNorm)\nres5b_branch2b (Conv2D)\nbn5b_branch2b (BatchNorm)\nres5b_branch2c (Conv2D)\nbn5b_branch2c (BatchNorm)\nres5c_branch2a (Conv2D)\nbn5c_branch2a (BatchNorm)\nres5c_branch2b (Conv2D)\nbn5c_branch2b (BatchNorm)\nres5c_branch2c (Conv2D)\nbn5c_branch2c (BatchNorm)\nfpn_c5p5 (Conv2D)\nfpn_c4p4 (Conv2D)\nfpn_c3p3 (Conv2D)\nfpn_c2p2 (Conv2D)\nfpn_p5 (Conv2D)\nfpn_p2 (Conv2D)\nfpn_p3 (Conv2D)\nfpn_p4 (Conv2D)\nIn model: rpn_model\n rpn_conv_shared (Conv2D)\n rpn_class_raw (Conv2D)\n rpn_bbox_pred (Conv2D)\nmrcnn_mask_conv1 (TimeDistributed)\nmrcnn_mask_bn1 (TimeDistributed)\nmrcnn_class_conv1 (TimeDistributed)\nmrcnn_class_bn1 (TimeDistributed)\nmrcnn_mask_conv2 (TimeDistributed)\nmrcnn_mask_bn2 (TimeDistributed)\nmrcnn_class_conv2 (TimeDistributed)\nmrcnn_class_bn2 (TimeDistributed)\nmrcnn_mask_conv3 (TimeDistributed)\nmrcnn_mask_bn3 (TimeDistributed)\nmrcnn_bbox_fc (TimeDistributed)\nmrcnn_mask_deconv (TimeDistributed)\nmrcnn_class_logits (TimeDistributed)\nmrcnn_mask (TimeDistributed)\n"
]
],
[
[
"## Include evaluation scripts in training script so that the kernel does not have to be reloaded. Eases the process of rapidly training and evaluating models",
"_____no_output_____"
],
[
"## Import dependencies",
"_____no_output_____"
]
],
[
[
"# COCO related libraries\nfrom pycocotools.cocoeval import COCOeval\nfrom pycocotools import mask as maskUtils\nfrom samples.coco import coco\nfrom samples.coco.coco import evaluate_coco\nfrom pycocotools.coco import COCO\n\n# MaskRCNN libraries\nimport mrcnn.model as modellib\nimport mrcnn.utils as utils\nimport mrcnn.visualize as visualize\n\n# Misc\nimport os\nimport skimage.io\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nfrom tqdm import tnrange, tqdm_notebook",
"_____no_output_____"
]
],
[
[
"## Declare evaluation configuration",
"_____no_output_____"
]
],
[
[
"class EvalConfig(coco.CocoConfig):\n \"\"\" Configuration for evaluation \"\"\"\n \n # Give the configuration a recognizable name\n NAME = MODEL_NAME\n \n # How many GPUs\n GPU_COUNT = 1\n \n # How many images per gpu\n IMAGES_PER_GPU = 1\n\n # Number of classes (including background)\n NUM_CLASSES = 1 + NUM_CLASSES # background + other classes\n \n IMAGE_MIN_DIM = 1152\n IMAGE_MAX_DIM = 1280\n \n # Matterport originally used resnet101, but I downsized to fit it on my graphics card\n BACKBONE = 'resnet101'\n\n # To be honest, I haven't taken the time to figure out what these do\n RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)\n \n # Changed to 512 because that's how many the original MaskRCNN paper used\n TRAIN_ROIS_PER_IMAGE = 200\n MAX_GT_INSTANCES = 114\n POST_NMS_ROIS_INFERENCE = 1000 \n POST_NMS_ROIS_TRAINING = 2000 \n \n DETECTION_MAX_INSTANCES = 114\n DETECTION_MIN_CONFIDENCE = 0.1",
"_____no_output_____"
]
],
[
[
"## Display configuration",
"_____no_output_____"
]
],
[
[
"EvalConfig().display()",
"\nConfigurations:\nBACKBONE resnet101\nBACKBONE_STRIDES [4, 8, 16, 32, 64]\nBATCH_SIZE 1\nBBOX_STD_DEV [0.1 0.1 0.2 0.2]\nCOMPUTE_BACKBONE_SHAPE None\nDETECTION_MAX_INSTANCES 114\nDETECTION_MIN_CONFIDENCE 0.1\nDETECTION_NMS_THRESHOLD 0.3\nFPN_CLASSIF_FC_LAYERS_SIZE 1024\nGPU_COUNT 1\nGRADIENT_CLIP_NORM 5.0\nIMAGES_PER_GPU 1\nIMAGE_CHANNEL_COUNT 3\nIMAGE_MAX_DIM 1280\nIMAGE_META_SIZE 16\nIMAGE_MIN_DIM 1152\nIMAGE_MIN_SCALE 0\nIMAGE_RESIZE_MODE square\nIMAGE_SHAPE [1280 1280 3]\nLEARNING_MOMENTUM 0.9\nLEARNING_RATE 0.001\nLOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}\nMASK_POOL_SIZE 14\nMASK_SHAPE [28, 28]\nMAX_GT_INSTANCES 114\nMEAN_PIXEL [123.7 116.8 103.9]\nMINI_MASK_SHAPE (56, 56)\nNAME model_2\nNUM_CLASSES 4\nPOOL_SIZE 7\nPOST_NMS_ROIS_INFERENCE 1000\nPOST_NMS_ROIS_TRAINING 2000\nPRE_NMS_LIMIT 6000\nROI_POSITIVE_RATIO 0.33\nRPN_ANCHOR_RATIOS [0.5, 1, 2]\nRPN_ANCHOR_SCALES (32, 64, 128, 256, 512)\nRPN_ANCHOR_STRIDE 1\nRPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]\nRPN_NMS_THRESHOLD 0.7\nRPN_TRAIN_ANCHORS_PER_IMAGE 256\nSTEPS_PER_EPOCH 1000\nTOP_DOWN_PYRAMID_SIZE 256\nTRAIN_BN False\nTRAIN_ROIS_PER_IMAGE 200\nUSE_MINI_MASK True\nUSE_RPN_ROIS True\nVALIDATION_STEPS 50\nWEIGHT_DECAY 0.0001\n\n\n"
]
],
[
[
"## Build class to load ground truth data",
"_____no_output_____"
]
],
[
[
"class CocoDataset(utils.Dataset):\n def load_coco_gt(self, annotations_file, dataset_dir):\n \"\"\"Load a COCO styled ground truth dataset\n \"\"\"\n \n # Create COCO object\n coco = COCO(annotations_file)\n\n # Load all classes\n class_ids = sorted(coco.getCatIds())\n\n # Load all images\n image_ids = list(coco.imgs.keys())\n\n # Add classes\n for i in class_ids:\n self.add_class(\"coco\", i, coco.loadCats(i)[0][\"name\"])\n\n # Add images\n for i in image_ids:\n self.add_image(\n \"coco\", image_id = i,\n path = os.path.join(dataset_dir, coco.imgs[i]['file_name']),\n width = coco.imgs[i][\"width\"],\n height = coco.imgs[i][\"height\"],\n annotations = coco.loadAnns(coco.getAnnIds(\n imgIds = [i], catIds = class_ids, iscrowd=None)))\n \n return coco\n \n def load_mask(self, image_id):\n \"\"\"Load instance masks for the given image.\n Different datasets use different ways to store masks. This\n function converts the different mask format to one format\n in the form of a bitmap [height, width, instances].\n Returns:\n masks: A bool array of shape [height, width, instance count] with\n one mask per instance.\n class_ids: a 1D array of class IDs of the instance masks.\n \"\"\"\n # If not a COCO image, delegate to parent class.\n image_info = self.image_info[image_id]\n if image_info[\"source\"] != \"coco\":\n return super(CocoDataset, self).load_mask(image_id)\n\n instance_masks = []\n class_ids = []\n annotations = self.image_info[image_id][\"annotations\"]\n # Build mask of shape [height, width, instance_count] and list\n # of class IDs that correspond to each channel of the mask.\n for annotation in annotations:\n class_id = self.map_source_class_id(\n \"coco.{}\".format(annotation['category_id']))\n if class_id:\n m = self.annToMask(annotation, image_info[\"height\"],\n image_info[\"width\"])\n # Some objects are so small that they're less than 1 pixel area\n # and end up rounded out. Skip those objects.\n if m.max() < 1:\n continue\n # Is it a crowd? If so, use a negative class ID.\n if annotation['iscrowd']:\n # Use negative class ID for crowds\n class_id *= -1\n # For crowd masks, annToMask() sometimes returns a mask\n # smaller than the given dimensions. 
If so, resize it.\n if m.shape[0] != image_info[\"height\"] or m.shape[1] != image_info[\"width\"]:\n m = np.ones([image_info[\"height\"], image_info[\"width\"]], dtype=bool)\n instance_masks.append(m)\n class_ids.append(class_id)\n\n # Pack instance masks into an array\n if class_ids:\n mask = np.stack(instance_masks, axis=2).astype(np.bool)\n class_ids = np.array(class_ids, dtype=np.int32)\n return mask, class_ids\n else:\n # Call super class to return an empty mask\n return super(CocoDataset, self).load_mask(image_id)\n\n def image_reference(self, image_id):\n \"\"\"Return a link to the image in the COCO Website.\"\"\"\n info = self.image_info[image_id]\n if info[\"source\"] == \"coco\":\n return \"http://cocodataset.org/#explore?id={}\".format(info[\"id\"])\n else:\n super(CocoDataset, self).image_reference(image_id)\n\n # The following two functions are from pycocotools with a few changes.\n\n def annToRLE(self, ann, height, width):\n \"\"\"\n Convert annotation which can be polygons, uncompressed RLE to RLE.\n :return: binary mask (numpy 2D array)\n \"\"\"\n segm = ann['segmentation']\n if isinstance(segm, list):\n # polygon -- a single object might consist of multiple parts\n # we merge all parts into one mask rle code\n rles = maskUtils.frPyObjects(segm, height, width)\n rle = maskUtils.merge(rles)\n elif isinstance(segm['counts'], list):\n # uncompressed RLE\n rle = maskUtils.frPyObjects(segm, height, width)\n else:\n # rle\n rle = ann['segmentation']\n return rle\n\n def annToMask(self, ann, height, width):\n \"\"\"\n Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.\n :return: binary mask (numpy 2D array)\n \"\"\"\n rle = self.annToRLE(ann, height, width)\n m = maskUtils.decode(rle)\n return m",
"_____no_output_____"
]
],
[
[
"## Build MaskRCNN Model",
"_____no_output_____"
]
],
[
[
"# Create the model in inference mode\nmodel = modellib.MaskRCNN(mode = \"inference\", config = EvalConfig(), model_dir = MODEL_DIR)",
"W0806 02:20:35.781523 139871543367488 deprecation_wrapper.py:119] From /home/venv/src/mask-rcnn/mrcnn/model.py:720: The name tf.sets.set_intersection is deprecated. Please use tf.sets.intersection instead.\n\nW0806 02:20:35.898080 139871543367488 deprecation.py:323] From /home/venv/src/mask-rcnn/mrcnn/model.py:772: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `tf.cast` instead.\n"
]
],
[
[
"## Load weights from last trained model",
"_____no_output_____"
]
],
[
[
"# Load weights by name\nmodel.load_weights(model.find_last(), by_name = True)",
"Re-starting from epoch 80\n"
]
],
[
[
"## Load dataset",
"_____no_output_____"
]
],
[
[
"# Relative path to ground truth annotations JSON file\nGT_ANNOTATIONS_FILE = \"datasets/Downtown_Sliced/test/annotations_split.json\"\n\n# Relative path to images associated with ground truth JSON file\nGT_DATASET_DIR = \"datasets/Downtown_Sliced/test/images\"\n\ndataset_val = CocoDataset()\ncoco = dataset_val.load_coco_gt(annotations_file = GT_ANNOTATIONS_FILE, dataset_dir = GT_DATASET_DIR)\ndataset_val.prepare()",
"loading annotations into memory...\nDone (t=1.24s)\ncreating index...\nindex created!\n"
]
],
[
[
"## Evaluate model with COCO test\n### If your results come back as a bunch of zeros, check to make sure that the \"width\" and \"height\" tag in your COCO dataset are correct",
"_____no_output_____"
]
],
[
[
"evaluate_coco(model, dataset_val, coco, \"segm\")",
"_____no_output_____"
]
],
[
[
"## Calculating mAP as per example in train_shapes.ipynb",
"_____no_output_____"
]
],
[
[
"# Compute VOC-Style mAP @ IoU=0.5\n# Running on 10 images. Increase for better accuracy.\nimage_ids = np.random.choice(dataset_val.image_ids, len(dataset_val.image_ids))\n\n# Instanciate arrays to create our metrics\nAPs = []\nprecisions_arr = []\nrecalls_arr = []\noverlaps_arr = []\nclass_ids_arr = []\nscores_arr = []\n\nfor id in tnrange(len(image_ids), desc = \"Processing images in dataset...\"):\n # Load image and ground truth data\n image, image_meta, gt_class_id, gt_bbox, gt_mask =\\\n modellib.load_image_gt(dataset_val, EvalConfig(),\n image_ids[id], use_mini_mask=False)\n molded_images = np.expand_dims(modellib.mold_image(image, EvalConfig()), 0)\n # Run object detection\n results = model.detect([image], verbose=0)\n r = results[0]\n # Compute AP\n AP, precisions, recalls, overlaps =\\\n utils.compute_ap(gt_bbox, gt_class_id, gt_mask,\n r[\"rois\"], r[\"class_ids\"], r[\"scores\"], r['masks'])\n # Append AP to AP array\n APs.append(AP)\n \n # Append precisions\n for precision in precisions:\n precisions_arr.append(precision)\n \n # Append recalls\n for recall in recalls:\n recalls_arr.append(recall)\n \n # Append overlaps\n for overlap in overlaps:\n overlaps_arr.append(overlap)\n \n # Append class_ids\n for class_id in r[\"class_ids\"]:\n class_ids_arr.append(class_id)\n \n # Append scores \n for score in r[\"scores\"]:\n scores_arr.append(score)\n \nprint(\"mAP: \", np.mean(APs))",
"_____no_output_____"
]
],
[
[
"## Plot precision recall curve",
"_____no_output_____"
]
],
[
[
"visualize.plot_precision_recall(AP, precisions, recalls)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
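"code",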
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
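"code",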
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72b3212bdba809ab915c04b3283ee7d515f82ed | 7,978 | ipynb | Jupyter Notebook | FileOperations.ipynb | DannyML-DSC/Hash-analytics | c9c0e3fd72abae99617190a93120a51dee82ef79 | [
"MIT"
] | null | null | null | FileOperations.ipynb | DannyML-DSC/Hash-analytics | c9c0e3fd72abae99617190a93120a51dee82ef79 | [
"MIT"
] | null | null | null | FileOperations.ipynb | DannyML-DSC/Hash-analytics | c9c0e3fd72abae99617190a93120a51dee82ef79 | [
"MIT"
] | null | null | null | 35.616071 | 320 | 0.538732 | [
[
[
"<a href=\"https://colab.research.google.com/github/DannyML-DSC/Hash-analytics/blob/master/FileOperations.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#authenticatiopn script in gcp\n!apt-get install -y -qq software-properties-common python-software-properties module-init-tools\n\n!apt-get install software-properties-common\n\n!apt-get install -y -qq software-properties-common module-init-tools\n\n!apt-get install -y -qq python-software-properties module-init-tools\n\n!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null\n\n!apt-get update -qq 2>&1 > /dev/null\n!apt-get -y install -qq google-drive-ocamlfuse fuse\nfrom google.colab import auth\nauth.authenticate_user()\nfrom oauth2client.client import GoogleCredentials\ncreds = GoogleCredentials.get_application_default()\nimport getpass\n!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL\nvcode = getpass.getpass()\n!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}\n\n\n\n",
"E: Package 'python-software-properties' has no installation candidate\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nsoftware-properties-common is already the newest version (0.96.24.32.12).\nThe following package was automatically installed and is no longer required:\n libnvidia-common-430\nUse 'apt autoremove' to remove it.\n0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded.\nE: Package 'python-software-properties' has no installation candidate\nSelecting previously unselected package google-drive-ocamlfuse.\n(Reading database ... 134485 files and directories currently installed.)\nPreparing to unpack .../google-drive-ocamlfuse_0.7.17-0ubuntu1~ubuntu18.04.1_amd64.deb ...\nUnpacking google-drive-ocamlfuse (0.7.17-0ubuntu1~ubuntu18.04.1) ...\nSetting up google-drive-ocamlfuse (0.7.17-0ubuntu1~ubuntu18.04.1) ...\nProcessing triggers for man-db (2.8.3-2ubuntu0.1) ...\nPlease, open the following URL in a web browser: https://accounts.google.com/o/oauth2/auth?client_id=32555940559.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&response_type=code&access_type=offline&approval_prompt=force\n··········\nPlease, open the following URL in a web browser: https://accounts.google.com/o/oauth2/auth?client_id=32555940559.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&response_type=code&access_type=offline&approval_prompt=force\nPlease enter the verification code: Access token retrieved correctly.\n"
],
[
"#script for reading data from google drive\n!mkdir -p drive\n!google-drive-ocamlfuse drive",
"_____no_output_____"
],
[
"#important libraries in this exercise\nimport pandas as pd\nimport xlrd\n",
"_____no_output_____"
],
[
"data = pd.ExcelFile(\"drive//app//data.xlsx\") \n\nwbook = xlrd.open_workbook(\"drive//app//data.xlsx\")\n\nfor excel_sheet in wbook.sheets():\n print(excel_sheet.name)",
"Sheet1\nSheet2\nSheet3\nSheet4\nSheet5\nSheet6\nSheet7\nSheet8\nSheet9\nSheet10\n"
],
[
" #reading all the sheets in the excel workbook\n sheet1 = pd.read_excel(data, sheet_name='Sheet1')\n sheet2 = pd.read_excel(data, sheet_name='Sheet2')\n sheet3 = pd.read_excel(data, sheet_name='Sheet3')\n sheet4 = pd.read_excel(data, sheet_name='Sheet4')\n sheet5 = pd.read_excel(data, sheet_name='Sheet5')\n sheet6 = pd.read_excel(data, sheet_name='Sheet6')\n sheet7 = pd.read_excel(data, sheet_name='Sheet7')\n sheet8 = pd.read_excel(data, sheet_name='Sheet8')\n sheet9 = pd.read_excel(data, sheet_name='Sheet9')\n sheet10 = pd.read_excel(data, sheet_name='Sheet10')\n ",
"_____no_output_____"
],
[
"#exporting all read sheets to csv file\nsheet1.to_csv('drive//app//sheet1.csv')\nsheet2.to_csv('drive//app//sheet2.csv')\nsheet3.to_csv('drive//app//sheet3.csv')\nsheet4.to_csv('drive//app//sheet4.csv')\nsheet5.to_csv('drive//app//sheet5.csv')\nsheet6.to_csv('drive//app//sheet6.csv')\nsheet7.to_csv('drive//app//sheet7.csv')\nsheet8.to_csv('drive//app//sheet8.csv')\nsheet9.to_csv('drive//app//sheet9.csv')\nsheet10.to_csv('drive//app//sheet10.csv')",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72b37806697c674e0046ea61e50f063a98d2c88 | 20,198 | ipynb | Jupyter Notebook | versions/MicroPython/tutorials/Quantum-Rogues.ipynb | TigrisCallidus/MicroQiskit | 87523a7069d1e955da796b2e37dff1c7b017d3fe | [
"Apache-2.0"
] | 27 | 2020-04-19T12:47:50.000Z | 2022-03-27T08:19:38.000Z | versions/MicroPython/tutorials/Quantum-Rogues.ipynb | TigrisCallidus/MicroQiskit | 87523a7069d1e955da796b2e37dff1c7b017d3fe | [
"Apache-2.0"
] | 24 | 2020-05-02T07:59:15.000Z | 2022-01-01T20:52:10.000Z | versions/MicroPython/tutorials/Quantum-Rogues.ipynb | TigrisCallidus/MicroQiskit | 87523a7069d1e955da796b2e37dff1c7b017d3fe | [
"Apache-2.0"
] | 25 | 2019-12-17T09:52:12.000Z | 2022-01-01T04:05:42.000Z | 31.559375 | 374 | 0.457471 | [
[
[
"# Making a version of *Quantum Rogues*\n\n[*Quantum Rogues*](https://github.com/qiskit-community/qiskit-camp-europe-19/issues/5) was a game created at the [Qiskit Camp Europe](https://qiskit.camp/europe/) hackathon in September 2019. In this notebook we create a new game, which takes the basic idea of the original *Quantum Rogues* and ports it to the PewPew.",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook",
"_____no_output_____"
]
],
[
[
"In this game, the player controls a flying avatar that can move around a room. I think of it as a bat - a qubat, in fact - but you can interpret it as you wish.\n\nFirst we simply create the room and the bat, and set up the controls. The **O** button makes the bat flap its wings. The ◀︎ and ► buttons move left and right.",
"_____no_output_____"
]
],
[
[
"import pew\n\npew.init()\nscreen = pew.Pix()\n\n# create a border of B=2 pixels\nfor A in range(8):\n for B in [0,7]:\n screen.pixel(A,B,2)\n screen.pixel(B,A,2) \n\n# the player is a B=3 pixel\nX,Y = 4,6\nscreen.pixel(X,Y,3)\n\nwhile True: # loop which checks for user input and responds\n\n # use key presses to determine how the player moves\n dX,dY = 0,1 # default is to fall\n keys = pew.keys()\n if keys!=0:\n if keys&pew.K_O: # fly up with O\n dY = -1\n # just left and right\n if keys&pew.K_LEFT:\n dX = -1\n dY = 0\n if keys&pew.K_RIGHT:\n dX = +1\n dY = 0\n \n # blank player pixel at old pos\n screen.pixel(X,Y,0)\n # change pos\n if Y+dY in range(1,7):\n Y+=dY\n if X+dX in range(1,7):\n X+=dX\n # put player pixel at new pos\n screen.pixel(X,Y,3)\n \n pew.show(screen)\n\n pew.tick(1/6)",
"_____no_output_____"
]
],
[
[
"Each room will have doors, through which it can move to neighbouring rooms. There are four possible doors: one in each direction.\n\nWhether a door is open or closed is generated randomly from a two qubit quantum circuit. The left door depends on the result from one of the qubits when a z measurement is made: `0` for open and `1` for closed. The right door depends similarly on the z measurement outcome of the other qubit. The doors on the floor and ceiling depend on the x measurement outcomes.\n\nThe program below adds in these doors, and ends whenever you enter one.",
"_____no_output_____"
]
],
[
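[
"# Added warm-up sketch (not part of the original tutorial): a single door draw\n# using the same MicroQiskit pattern as the full room program below. A blank\n# circuit always gives '00' in the z basis, so both side doors start open, while\n# the x-basis result (and hence the floor/ceiling doors) is random.\nfrom microqiskit import QuantumCircuit, simulate\nqc = QuantumCircuit(2,2)\nmeas_z = QuantumCircuit(2,2)\nmeas_z.measure(0,0)\nmeas_z.measure(1,1)\nmeas_x = QuantumCircuit(2,2)\nmeas_x.h(0)\nmeas_x.h(1)\nmeas_x.measure(0,0)\nmeas_x.measure(1,1)\nprint('z:', simulate(qc+meas_z, shots=1, get='memory'))\nprint('x:', simulate(qc+meas_x, shots=1, get='memory'))",
"_____no_output_____"
],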
[
"import pew\nfrom microqiskit import *\n\npew.init()\nscreen = pew.Pix()\n\n# set positions of doors\nl = (0,4)\nr = (7,4)\nu = (3,0)\nd = (3,7)\ndoors = [l,r,u,d]\n\n# create a border of B=2 pixels\nfor A in range(8):\n for B in [0,7]:\n screen.pixel(A,B,2)\n screen.pixel(B,A,2) \n\n# the player is a B=3 pixel\nX,Y = 4,6\nscreen.pixel(X,Y,3)\n\n# set up the circuit that decides whether doors are open\nqc = QuantumCircuit(2,2)\n\n# and those for measurement\nmeas_zz = QuantumCircuit(2,2)\nmeas_zz.measure(0,0)\nmeas_zz.measure(1,1)\nmeas_xx = QuantumCircuit(2,2)\nmeas_xx.h(0)\nmeas_xx.h(1)\nmeas_xx.measure(0,0)\nmeas_xx.measure(1,1)\n\n# use the results to set which doors are open\nm_zz = simulate(qc+meas_zz,shots=1,get='memory')[0]\nm_xx = simulate(qc+meas_xx,shots=1,get='memory')[0]\n\nopened = []\nif m_zz[0]=='0':\n opened.append(l)\nif m_zz[1]=='0':\n opened.append(r)\nif m_xx[0]=='0':\n opened.append(u)\nif m_xx[1]=='0':\n opened.append(d)\n \n# set open door pixels to B=0\nfor door in doors:\n screen.pixel(door[0],door[1],2)\nfor door in opened:\n screen.pixel(door[0],door[1],0)\n\nwhile (X,Y) not in opened:\n\n # use key presses to determine how the player moves\n dX,dY = 0,1 # default is to fall\n keys = pew.keys()\n if keys!=0:\n if keys&pew.K_O: # fly up with O\n dY = -1\n # just left and right\n if keys&pew.K_LEFT:\n dX = -1\n dY = 0\n if keys&pew.K_RIGHT:\n dX = +1\n dY = 0\n \n # blank player pixel at old pos\n screen.pixel(X,Y,0)\n # change pos\n if ( Y+dY in range(1,7) and X+dX in range(1,7) ) or ( (X+dX,Y+dY) in opened ):\n X+=dX\n Y+=dY\n # put player pixel at new pos\n screen.pixel(X,Y,3)\n \n pew.show(screen)\n\n pew.tick(1/6)",
"_____no_output_____"
]
],
[
[
"Obviously what *should* happen when you pass through a door is that you end up in another room. This is implemented in the program below. The new room will again has open and closed doors, generated randomly by the circuit.",
"_____no_output_____"
]
],
[
[
"import pew\nfrom microqiskit import *\n\npew.init()\nscreen = pew.Pix()\n\n# set positions of doors\nl = (0,4)\nr = (7,4)\nu = (3,0)\nd = (3,7)\ndoors = [l,r,u,d]\n\n# create a border of B=2 pixels\nfor A in range(8):\n for B in [0,7]:\n screen.pixel(A,B,2)\n screen.pixel(B,A,2) \n\n# the player is a B=3 pixel\nX,Y = 4,6\nscreen.pixel(X,Y,3)\n\n# set up the circuit that decides whether doors are open\nqc = QuantumCircuit(2,2)\n\n# and those for measurement\nmeas_zz = QuantumCircuit(2,2)\nmeas_zz.measure(0,0)\nmeas_zz.measure(1,1)\nmeas_xx = QuantumCircuit(2,2)\nmeas_xx.h(0)\nmeas_xx.h(1)\nmeas_xx.measure(0,0)\nmeas_xx.measure(1,1)\n\nwhile True:\n\n # use the results to set which doors are open\n m_zz = simulate(qc+meas_zz,shots=1,get='memory')[0]\n m_xx = simulate(qc+meas_xx,shots=1,get='memory')[0]\n\n opened = []\n if m_zz[0]=='0':\n opened.append(l)\n if m_zz[1]=='0':\n opened.append(r)\n if m_xx[0]=='0':\n opened.append(u)\n if m_xx[1]=='0':\n opened.append(d)\n \n # set open door pixels to B=0\n for door in doors:\n screen.pixel(door[0],door[1],2)\n for door in opened:\n screen.pixel(door[0],door[1],0)\n\n while (X,Y) not in opened:\n\n # use key presses to determine how the player moves\n dX,dY = 0,1 # default is to fall\n keys = pew.keys()\n if keys!=0:\n if keys&pew.K_O: # fly up with O\n dY = -1\n # just left and right\n if keys&pew.K_LEFT:\n dX = -1\n dY = 0\n if keys&pew.K_RIGHT:\n dX = +1\n dY = 0\n\n # blank player pixel at old pos\n screen.pixel(X,Y,0)\n # change pos\n if ( Y+dY in range(1,7) and X+dX in range(1,7) ) or ( (X+dX,Y+dY) in opened ):\n X+=dX\n Y+=dY\n # put player pixel at new pos\n screen.pixel(X,Y,3)\n\n pew.show(screen)\n\n pew.tick(1/6)\n \n if (X,Y)==u:\n X,Y = 3,6\n if (X,Y)==d:\n X,Y = 3,1\n if (X,Y)==l:\n X,Y = 6,4\n if (X,Y)==r:\n X,Y = 1,4",
"_____no_output_____"
]
],
[
[
"So far, we have always used a blank circuit. This always outputs `00` for z measurements, and so always has the left and right doors open. The other two doors are randomly open or closed.\n\nThe aim of the game will be to climb sufficiently high up the tower. For this, you'll want to make sure that the ceiling door is open as much as possible. This can be done by adding a Hadamard to the circuit, but how can we allow the player to do this?\n\nIn the following, two randomly moving objects are added. If the bat hits them, a Hadamard is applied to the circuit. For the object near the bottom of the room, it is the Hadamard that controls the bottom door that is applied. The object near the top similarly controls the top door.",
"_____no_output_____"
]
],
[
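[
"# Added sketch (not part of the original tutorial): after an h gate the x-basis\n# measurement of that qubit becomes deterministic, which is why touching a\n# Hadamard object below can pin the corresponding door open (a second touch\n# undoes it, since h is its own inverse).\nqc_demo = QuantumCircuit(2,2)\nqc_demo.h(0)\nmeas_x_demo = QuantumCircuit(2,2)\nmeas_x_demo.h(0)\nmeas_x_demo.h(1)\nmeas_x_demo.measure(0,0)\nmeas_x_demo.measure(1,1)\nprint(simulate(qc_demo+meas_x_demo, shots=4, get='memory'))",
"_____no_output_____"
],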
[
"import pew\nfrom microqiskit import *\n\npew.init()\nscreen = pew.Pix()\n\n# set positions of doors\nl = (0,4)\nr = (7,4)\nu = (3,0)\nd = (3,7)\ndoors = [l,r,u,d]\n\n# create a border of B=2 pixels\nfor A in range(8):\n for B in [0,7]:\n screen.pixel(A,B,2)\n screen.pixel(B,A,2) \n\n# the player is a B=3 pixel\nX,Y = 4,6\nscreen.pixel(X,Y,3)\n\n# set positions of the hadamard objects\nH = [[],[]]\nH[0] = [6,6]\nH[1] = [1,1]\n\n# set up the circuit that decides whether doors are open\nqc = QuantumCircuit(2,2)\n\n# and those for measurement\nmeas_zz = QuantumCircuit(2,2)\nmeas_zz.measure(0,0)\nmeas_zz.measure(1,1)\nmeas_xx = QuantumCircuit(2,2)\nmeas_xx.h(0)\nmeas_xx.h(1)\nmeas_xx.measure(0,0)\nmeas_xx.measure(1,1)\n\nqrng = QuantumCircuit(2,2)\n\nwhile True:\n\n # use the results to set which doors are open\n m_zz = simulate(qc+meas_zz,shots=1,get='memory')[0]\n m_xx = simulate(qc+meas_xx,shots=1,get='memory')[0]\n\n opened = []\n if m_zz[0]=='0':\n opened.append(l)\n if m_zz[1]=='0':\n opened.append(r)\n if m_xx[0]=='0':\n opened.append(u)\n if m_xx[1]=='0':\n opened.append(d)\n \n # set open door pixels to B=0\n for door in doors:\n screen.pixel(door[0],door[1],2)\n for door in opened:\n screen.pixel(door[0],door[1],0)\n \n frame = 0\n while (X,Y) not in opened:\n \n # randomly move the positions of H[0] and H[1]\n for j in range(2):\n screen.pixel(H[j][0],H[j][1],0)\n m = simulate(qrng+meas_xx,shots=1,get='memory')[0]\n for j in range(2):\n dH = (m[j]=='0') - (m[j]=='1')\n if H[j][0]+dH in range(1,7):\n H[j][0]+=dH\n frame += 1 # brightness flashes, and so depends on frame\n for j in range(2):\n screen.pixel(H[j][0],H[j][1],1+(frame%2)) \n\n # use key presses to determine how the player moves\n dX,dY = 0,1 # default is to fall\n keys = pew.keys()\n if keys!=0:\n if keys&pew.K_O: # fly up with O\n dY = -1\n # just left and right\n if keys&pew.K_LEFT:\n dX = -1\n dY = 0\n if keys&pew.K_RIGHT:\n dX = +1\n dY = 0\n\n # blank player pixel at old pos\n screen.pixel(X,Y,0)\n # change pos\n if ( Y+dY in range(1,7) and X+dX in range(1,7) ) or ( (X+dX,Y+dY) in opened ):\n X+=dX\n Y+=dY\n # put player pixel at new pos\n screen.pixel(X,Y,3)\n \n # if the player is at the same pos as H[0] or H[1]\n # apply the corresponding Hadamard\n for j in range(2):\n if (X,Y)==(H[j][0],H[j][1]):\n qc.h(j)\n\n pew.show(screen)\n\n pew.tick(1/6)\n \n if (X,Y)==u:\n X,Y = 3,6\n if (X,Y)==d:\n X,Y = 3,1\n if (X,Y)==l:\n X,Y = 6,4\n if (X,Y)==r:\n X,Y = 1,4",
"_____no_output_____"
]
],
[
[
"Now we just need to add in an end condition, so that we can actually complete the game. We'll simply make it so that you win when you are ten floors higher than when you started. This then triggers the end screen, which will be some random pixels generated by the circuit.",
"_____no_output_____"
]
],
[
[
"import pew\nfrom microqiskit import *\n\npew.init()\nscreen = pew.Pix()\n\n# set positions of doors\nl = (0,4)\nr = (7,4)\nu = (3,0)\nd = (3,7)\ndoors = [l,r,u,d]\n\n# create a border of B=2 pixels\nfor A in range(8):\n for B in [0,7]:\n screen.pixel(A,B,2)\n screen.pixel(B,A,2) \n\n# the player is a B=3 pixel\nX,Y = 4,6\nscreen.pixel(X,Y,3)\n\n# set how high the player has gone up the tower\nheight = 0\n\n# set positions of the hadamard objects\nH = [[],[]]\nH[0] = [6,6]\nH[1] = [1,1]\n\n# set up the circuit that decides whether doors are open\nqc = QuantumCircuit(2,2)\n\n# and those for measurement\nmeas_zz = QuantumCircuit(2,2)\nmeas_zz.measure(0,0)\nmeas_zz.measure(1,1)\nmeas_xx = QuantumCircuit(2,2)\nmeas_xx.h(0)\nmeas_xx.h(1)\nmeas_xx.measure(0,0)\nmeas_xx.measure(1,1)\n\nqrng = QuantumCircuit(2,2)\n\nwhile height<10:\n\n # use the results to set which doors are open\n m_zz = simulate(qc+meas_zz,shots=1,get='memory')[0]\n m_xx = simulate(qc+meas_xx,shots=1,get='memory')[0]\n\n opened = []\n if m_zz[0]=='0':\n opened.append(l)\n if m_zz[1]=='0':\n opened.append(r)\n if m_xx[0]=='0':\n opened.append(u)\n if m_xx[1]=='0':\n opened.append(d)\n \n # set open door pixels to B=0\n for door in doors:\n screen.pixel(door[0],door[1],2)\n for door in opened:\n screen.pixel(door[0],door[1],0)\n \n frame = 0\n while (X,Y) not in opened:\n \n # randomly move the positions of H[0] and H[1]\n for j in range(2):\n screen.pixel(H[j][0],H[j][1],0)\n m = simulate(qrng+meas_xx,shots=1,get='memory')[0]\n for j in range(2):\n dH = (m[j]=='0') - (m[j]=='1')\n if H[j][0]+dH in range(1,7):\n H[j][0]+=dH\n frame += 1 # brightness flashes, and so depends on frame\n for j in range(2):\n screen.pixel(H[j][0],H[j][1],1+(frame%2)) \n\n # use key presses to determine how the player moves\n dX,dY = 0,1 # default is to fall\n keys = pew.keys()\n if keys!=0:\n if keys&pew.K_O: # fly up with O\n dY = -1\n # just left and right\n if keys&pew.K_LEFT:\n dX = -1\n dY = 0\n if keys&pew.K_RIGHT:\n dX = +1\n dY = 0\n\n # blank player pixel at old pos\n screen.pixel(X,Y,0)\n # change pos\n if ( Y+dY in range(1,7) and X+dX in range(1,7) ) or ( (X+dX,Y+dY) in opened ):\n X+=dX\n Y+=dY\n # put player pixel at new pos\n screen.pixel(X,Y,3)\n \n # if the player is at the same pos as H[0] or H[1]\n # apply the corresponding Hadamard\n for j in range(2):\n if (X,Y)==(H[j][0],H[j][1]):\n qc.h(j)\n\n pew.show(screen)\n\n pew.tick(1/6)\n \n if (X,Y)==u:\n X,Y = 3,6\n height += 1\n if (X,Y)==d:\n X,Y = 3,1\n height -= 1\n if (X,Y)==l:\n X,Y = 6,4\n if (X,Y)==r:\n X,Y = 1,4\n \nwhile True:\n for x in range(4):\n for y in range(4):\n m_zz = simulate(qc+meas_zz,shots=1,get='memory')[0]\n m_xx = simulate(qc+meas_xx,shots=1,get='memory')[0]\n screen.pixel(x,y,1+2*(m_zz[0]=='0'))\n screen.pixel(x+4,y,1+2*(m_zz[1]=='0'))\n screen.pixel(x,y+4,1+2*(m_xx[0]=='0'))\n screen.pixel(x+4,y+4,1+2*(m_xx[1]=='0'))\n pew.show(screen)\n pew.tick(1/6)",
"_____no_output_____"
]
],
[
[
"And there we have it! A simple game, made using quantum programming.",
"_____no_output_____"
],
[
"**[Click here to return to the index](Start-Here.ipynb)**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
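"code",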
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
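"code",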
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e72b4f9fce9cd6f829f5f723811c48aac9918010 | 28,088 | ipynb | Jupyter Notebook | Course4_Applied_Text_Mining_in_Python/Week-3/Assignment+3.ipynb | Collumbus/Applied_Data_Science-with_Python-Coursera | b567072ff4ec41a44416071fc05d95c7ed285f1d | [
"MIT"
] | null | null | null | Course4_Applied_Text_Mining_in_Python/Week-3/Assignment+3.ipynb | Collumbus/Applied_Data_Science-with_Python-Coursera | b567072ff4ec41a44416071fc05d95c7ed285f1d | [
"MIT"
] | null | null | null | Course4_Applied_Text_Mining_in_Python/Week-3/Assignment+3.ipynb | Collumbus/Applied_Data_Science-with_Python-Coursera | b567072ff4ec41a44416071fc05d95c7ed285f1d | [
"MIT"
] | null | null | null | 29.754237 | 294 | 0.527449 | [
[
[
"---\n\n_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._\n\n---",
"_____no_output_____"
],
[
"# Assignment 3\n\nIn this assignment you will explore text message data and create models to predict if a message is spam or not. ",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\nspam_data = pd.read_csv('spam.csv')\n\nspam_data['target'] = np.where(spam_data['target']=='spam',1,0)\nspam_data.head(10)",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n\n\nX_train, X_test, y_train, y_test = train_test_split(spam_data['text'], \n spam_data['target'], \n random_state=0)",
"_____no_output_____"
]
],
[
[
"### Question 1\nWhat percentage of the documents in `spam_data` are spam?\n\n*This function should return a float, the percent value (i.e. $ratio * 100$).*",
"_____no_output_____"
]
],
[
[
"spam_data['target'].value_counts()[1]/len(spam_data['target']) * 100",
"_____no_output_____"
],
[
"def answer_one():\n \n pspam = spam_data['target'].value_counts()[1]/len(spam_data['target']) * 100\n \n return pspam #Your answer here",
"_____no_output_____"
],
[
"answer_one()",
"_____no_output_____"
]
],
[
[
"### Question 2\n\nFit the training data `X_train` using a Count Vectorizer with default parameters.\n\nWhat is the longest token in the vocabulary?\n\n*This function should return a string.*",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import CountVectorizer\n\ndef answer_two():\n \n vect = CountVectorizer().fit(X_train)\n f_names = vect.get_feature_names()\n \n return max(f_names, key=len)#Your answer here",
"_____no_output_____"
],
[
"answer_two()",
"_____no_output_____"
]
],
[
[
"### Question 3\n\nFit and transform the training data `X_train` using a Count Vectorizer with default parameters.\n\nNext, fit a fit a multinomial Naive Bayes classifier model with smoothing `alpha=0.1`. Find the area under the curve (AUC) score using the transformed test data.\n\n*This function should return the AUC score as a float.*",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import MultinomialNB\nfrom sklearn.metrics import roc_auc_score\n\ndef answer_three():\n # initializing Count Vector\n cv = CountVectorizer().fit(X_train)\n \n # transforming X_train and X_test into cv object type\n X_train_cv = cv.transform(X_train)\n X_test_cv = cv.transform(X_test)\n \n # initializing Naive Bayes classifier and making predections\n nbc = MultinomialNB(alpha=0.1).fit(X_train_cv, y_train)\n predictions = nbc.predict(X_test_cv)\n auc = roc_auc_score(y_test, predictions)\n \n return auc#Your answer here",
"_____no_output_____"
],
[
"answer_three()",
"_____no_output_____"
]
],
[
[
"### Question 4\n\nFit and transform the training data `X_train` using a Tfidf Vectorizer with default parameters.\n\nWhat 20 features have the smallest tf-idf and what 20 have the largest tf-idf?\n\nPut these features in a two series where each series is sorted by tf-idf value and then alphabetically by feature name. The index of the series should be the feature name, and the data should be the tf-idf.\n\nThe series of 20 features with smallest tf-idfs should be sorted smallest tfidf first, the list of 20 features with largest tf-idfs should be sorted largest first. \n\n*This function should return a tuple of two series\n`(smallest tf-idfs series, largest tf-idfs series)`.*",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import TfidfVectorizer\n\ndef answer_four():\n \n # initializing Tfidf Vectorizer\n tfidV = TfidfVectorizer().fit(X_train)\n \n # transforming X_train into Tfidf object type\n X_train_tfidV = tfidV.transform(X_train)\n tfidV.get_params\n # getting features names\n f_names = np.array(tfidV.get_feature_names())\n \n # fetting largest tfidf values across all documents\n sorted_tfid_max = X_train_tfidV.max(0).toarray()[0]\n \n # creating a sorted list of index\n sorted_tfidf_index = sorted_tfid_max.argsort()\n sorted_tfid = sorted_tfid_max[sorted_tfidf_index]\n \n # creating the two series\n smallest_tfid_series = pd.Series(sorted_tfid[:20], index=f_names[sorted_tfidf_index[:20]])\n largest_tfid_series = pd.Series(sorted_tfid[:-21:-1], index=f_names[sorted_tfidf_index[:-21:-1]])\n \n return (smallest_tfid_series, largest_tfid_series)#Your answer here",
"_____no_output_____"
],
[
"answer_four()",
"_____no_output_____"
]
],
[
[
"### Question 5\n\nFit and transform the training data `X_train` using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than **3**.\n\nThen fit a multinomial Naive Bayes classifier model with smoothing `alpha=0.1` and compute the area under the curve (AUC) score using the transformed test data.\n\n*This function should return the AUC score as a float.*",
"_____no_output_____"
]
],
[
[
"def answer_five():\n \n # initializing Tfidf Vectorizer\n tfidV = TfidfVectorizer(min_df=3).fit(X_train)\n \n # transforming X_train and X_test into Tfidf object type\n X_train_tfidV = tfidV.transform(X_train)\n X_test_tfidV = tfidV.transform(X_test)\n \n # initializing Naive Bayes classifier and making predections\n nbc = MultinomialNB(alpha=0.1).fit(X_train_tfidV, y_train) \n predictions = nbc.predict(X_test_tfidV)\n auc = roc_auc_score(y_test, predictions)\n \n return auc#Your answer here",
"_____no_output_____"
],
[
"answer_five()",
"_____no_output_____"
]
],
[
[
"### Question 6\n\nWhat is the average length of documents (number of characters) for not spam and spam documents?\n\n*This function should return a tuple (average length not spam, average length spam).*",
"_____no_output_____"
]
],
[
[
"def answer_six():\n \n spam_data['chars_qtd'] = [len(c) for c in spam_data['text']]\n avg_len_span = np.mean(spam_data[spam_data['target'] == 1]['chars_qtd'])\n avg_len_not_span = np.mean(spam_data[spam_data['target'] == 0]['chars_qtd'])\n \n return (avg_len_not_span, avg_len_span)#Your answer here",
"_____no_output_____"
]
],
[
[
"#another way to do the same\ndef answer_six():\n \n len_spam = [len(x) for x in spam_data.loc[spam_data['target']==1, 'text']]\n len_not_spam = [len(x) for x in spam_data.loc[spam_data['target']==0, 'text']]\n return (np.mean(len_not_spam), np.mean(len_spam))",
"_____no_output_____"
]
],
[
[
"answer_six()",
"_____no_output_____"
]
],
[
[
"<br>\n<br>\nThe following function has been provided to help you combine new features into the training data:",
"_____no_output_____"
]
],
[
[
"def add_feature(X, feature_to_add):\n \"\"\"\n Returns sparse feature matrix with added feature.\n feature_to_add can also be a list of features.\n \"\"\"\n from scipy.sparse import csr_matrix, hstack\n return hstack([X, csr_matrix(feature_to_add).T], 'csr')",
"_____no_output_____"
]
],
[
[
"### Question 7\n\nFit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than **5**.\n\nUsing this document-term matrix and an additional feature, **the length of document (number of characters)**, fit a Support Vector Classification model with regularization `C=10000`. Then compute the area under the curve (AUC) score using the transformed test data.\n\n*This function should return the AUC score as a float.*",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import SVC\n\ndef answer_seven():\n \n X_train_len = [len(x) for x in X_train]\n X_test_len = [len(x) for x in X_test]\n \n vect = TfidfVectorizer(min_df=5).fit(X_train) \n X_train_tfidf = vect.transform(X_train)\n X_test_tfidf = vect.transform(X_test)\n \n new_X_train = add_feature(X_train_tfidf, X_train_len)\n new_X_test = add_feature(X_test_tfidf, X_test_len)\n \n svc = SVC(C=10000).fit(new_X_train, y_train)\n predictions = svc.predict(new_X_test)\n \n auc = roc_auc_score(y_test, predictions)\n \n return auc#Your answer here",
"_____no_output_____"
],
[
"answer_seven()",
"_____no_output_____"
]
],
[
[
"### Question 8\n\nWhat is the average number of digits per document for not spam and spam documents?\n\n*This function should return a tuple (average # digits not spam, average # digits spam).*",
"_____no_output_____"
]
],
[
[
"a = 'dado 32'\nsum(c.isnumeric() for c in a)",
"_____no_output_____"
],
[
"def answer_eight():\n\n avg_d_span = [sum(char.isnumeric() for char in document) for document in spam_data.loc[spam_data['target']==1, 'text']]\n avg_d_not_span = [sum(char.isnumeric() for char in document) for document in spam_data.loc[spam_data['target']==0, 'text']]\n \n return (np.mean(avg_d_not_span), np.mean(avg_d_span))#Your answer here",
"_____no_output_____"
],
[
"answer_eight()",
"_____no_output_____"
]
],
[
[
"### Question 9\n\nFit and transform the training data `X_train` using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than **5** and using **word n-grams from n=1 to n=3** (unigrams, bigrams, and trigrams).\n\nUsing this document-term matrix and the following additional features:\n* the length of document (number of characters)\n* **number of digits per document**\n\nfit a Logistic Regression model with regularization `C=100`. Then compute the area under the curve (AUC) score using the transformed test data.\n\n*This function should return the AUC score as a float.*",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\n\ndef answer_nine():\n \n X_train_len = [len(x) for x in X_train]\n X_test_len = [len(x) for x in X_test]\n \n X_train_dig = [sum(char.isnumeric() for char in x) for x in X_train]\n X_test_dig = [sum(char.isnumeric() for char in x) for x in X_test]\n \n tf = TfidfVectorizer(min_df=5, ngram_range=(1, 3)).fit(X_train) \n X_train_tf = tf.transform(X_train)\n X_test_tf = tf.transform(X_test)\n \n X_train_tf = add_feature(X_train_tf, [X_train_len, X_train_dig])\n X_test_tf = add_feature(X_test_tf, [X_test_len, X_test_dig])\n \n lrc = LogisticRegression(C=100, max_iter=1000).fit(X_train_tf, y_train)\n predictions = lrc.predict(X_test_tf)\n \n auc = roc_auc_score(y_test, predictions)\n \n return auc #Your answer here",
"_____no_output_____"
],
[
"answer_nine()",
"_____no_output_____"
]
],
[
[
"### Question 10\n\nWhat is the average number of non-word characters (anything other than a letter, digit or underscore) per document for not spam and spam documents?\n\n*Hint: Use `\\w` and `\\W` character classes*\n\n*This function should return a tuple (average # non-word characters not spam, average # non-word characters spam).*",
"_____no_output_____"
]
],
[
[
"import re\ndef answer_ten():\n \n non_d_spam = [char for char in spam_data.loc[spam_data['target']==1, 'text'].str.count(r'\\W')]\n non_d_not_spam = [char for char in spam_data.loc[spam_data['target']==0, 'text'].str.count(r'\\W')]\n return (np.mean(non_d_not_spam), np.mean(non_d_spam))#Your answer here",
"_____no_output_____"
],
[
"answer_ten()",
"_____no_output_____"
]
],
[
[
"### Question 11\n\nFit and transform the training data X_train using a Count Vectorizer ignoring terms that have a document frequency strictly lower than **5** and using **character n-grams from n=2 to n=5.**\n\nTo tell Count Vectorizer to use character n-grams pass in `analyzer='char_wb'` which creates character n-grams only from text inside word boundaries. This should make the model more robust to spelling mistakes.\n\nUsing this document-term matrix and the following additional features:\n* the length of document (number of characters)\n* number of digits per document\n* **number of non-word characters (anything other than a letter, digit or underscore.)**\n\nfit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.\n\nAlso **find the 10 smallest and 10 largest coefficients from the model** and return them along with the AUC score in a tuple.\n\nThe list of 10 smallest coefficients should be sorted smallest first, the list of 10 largest coefficients should be sorted largest first.\n\nThe three features that were added to the document term matrix should have the following names should they appear in the list of coefficients:\n['length_of_doc', 'digit_count', 'non_word_char_count']\n\n*This function should return a tuple `(AUC score as a float, smallest coefs list, largest coefs list)`.*",
"_____no_output_____"
]
],
[
[
"def answer_eleven():\n \n \n X_train_len = [len(x) for x in X_train]\n X_test_len = [len(x) for x in X_test]\n \n X_train_dig = [sum(char.isnumeric() for char in x) for x in X_train]\n X_test_dig = [sum(char.isnumeric() for char in x) for x in X_test]\n \n X_train_ndig =X_train.str.count('\\W')\n X_test_ndig = X_test.str.count('\\W')\n \n tf = CountVectorizer(min_df=5, ngram_range=(2, 5), analyzer='char_wb').fit(X_train) \n X_train_tf = tf.transform(X_train)\n X_test_tf = tf.transform(X_test)\n \n X_train_tf = add_feature(X_train_tf, [X_train_len, X_train_dig, X_train_ndig])\n X_test_tf = add_feature(X_test_tf, [X_test_len, X_test_dig, X_test_ndig])\n \n lrc = LogisticRegression(C=100, max_iter=10000).fit(X_train_tf, y_train)\n predictions = lrc.predict(X_test_tf)\n \n auc = roc_auc_score(y_test, predictions)\n \n \n # getting features names\n f_names = np.array(tf.get_feature_names() + ['length_of_doc', 'digit_count', 'non_word_char_count'])\n \n # fetting largest tfidf values across all documents\n sorted_coef_index = lrc.coef_[0].argsort()\n \n # creating the two series\n smallest_coef_list = list(f_names[sorted_coef_index[:10]])\n largest_coef_list = list(f_names[sorted_coef_index[:-11:-1]])\n \n \n \n return (auc, smallest_coef_list, largest_coef_list) #Your answer here",
"_____no_output_____"
],
[
"answer_eleven()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
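"code",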
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e72b590b9154e3dd2ef839a082978705d9e60d6b | 49,788 | ipynb | Jupyter Notebook | pypkg/examples/Untitled.ipynb | TNonet/gL0Learn | cfa94ffd83b294faf94c8c7820f195d6b93c620b | [
"MIT"
] | 1 | 2022-03-07T21:33:13.000Z | 2022-03-07T21:33:13.000Z | pypkg/examples/Untitled.ipynb | TNonet/gL0Learn | cfa94ffd83b294faf94c8c7820f195d6b93c620b | [
"MIT"
] | null | null | null | pypkg/examples/Untitled.ipynb | TNonet/gL0Learn | cfa94ffd83b294faf94c8c7820f195d6b93c620b | [
"MIT"
] | null | null | null | 34.479224 | 1,348 | 0.494818 | [
[
[
"from typing import Optional, Dict, Iterable, List, Tuple\n\nimport hypothesis.strategies\nimport numpy as np\nfrom gl0learn import fit, synthetic\nimport pytest\nfrom gl0learn.metrics import nonzeros\nfrom hypothesis import given, settings, HealthCheck, assume\nfrom hypothesis.strategies import composite, just, booleans, floats, integers",
"_____no_output_____"
],
[
"@composite\ndef random_penalty(draw,\n l0: hypothesis.strategies.SearchStrategy[bool],\n l1: hypothesis.strategies.SearchStrategy[bool],\n l2: hypothesis.strategies.SearchStrategy[bool],\n ) -> List[str]:\n penalties = []\n\n if draw(l0):\n penalties.append('l0')\n\n if draw(l1):\n penalties.append('l1')\n\n if draw(l2):\n penalties.append('l2')\n\n return penalties\n\n\n@composite\ndef random_penalty_values(draw,\n values_strategies: Dict[str, hypothesis.strategies.SearchStrategy[float]],\n penalty_strategies: hypothesis.strategies.SearchStrategy[Iterable[str]]\n ) -> hypothesis.strategies.SearchStrategy[Dict[str, float]]:\n\n penalties = draw(penalty_strategies)\n values = {}\n for penalty in penalties:\n values[penalty] = draw(values_strategies[penalty])\n\n return values",
"_____no_output_____"
],
[
"penalty_strategty = random_penalty(l0=just(True), l1=booleans(), l2=booleans())\n\nrandom_penalty_values",
"_____no_output_____"
],
[
"penalty_strategty.example()",
"_____no_output_____"
],
[
"s = random_penalty_values(penalty_strategies=penalty_strategty,\n values_strategies={\"l0\": floats(0.01, 10), \"l1\": floats(0.01, 10), \"l2\": floats(0.01, 10)})",
"_____no_output_____"
],
[
"s.example()",
"_____no_output_____"
],
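[
"# Added check sketch (not part of the original notebook): with l0=just(True),\n# every drawn penalty dictionary must contain an 'l0' entry.\nfor _ in range(5):\n    assert 'l0' in s.example()",
"_____no_output_____"
],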
[
"def top_n_triu_indicies(x, n):\n p = x.shape[1]\n x = np.copy(x)\n x[np.tril_indices(p, k=0)] = 0\n value = np.sort(x.flatten())[::-1][n - 1]\n\n return np.where(x >= value)\n\ndef make_bisect_func(desired_nnz: int, Y: np.ndarray, verbose: bool = True, seed: int = 0, **kwargs):\n def inner_bisect(l0):\n np.random.seed(seed)\n fit_gl0learn = fit(Y, l0=l0, **kwargs)\n theta_hat = fit_gl0learn.theta\n np.fill_diagonal(theta_hat, 0)\n\n nnz = np.count_nonzero(theta_hat) // 2\n cost = desired_nnz - nnz\n if verbose:\n print(f\"gl0Learn found solution with {nnz} non-zeros with parameters:\")\n print(f\"\\t l0 = {l0})\")\n print(f\"\\t cost = {cost}\")\n return cost\n\n return inner_bisect\n\ndef _sample_data(n: int = 1000, seed: int = 0) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"\n\n\n Example Data!\n\n\n >>>from tabulate import tabulate\n ...import numpy as np\n ...coords = np.array([str(t).replace('(','').replace(')','') for t in zip(*np.nonzero(np.ones([5,5])))]).reshape(5,5)\n ...table = tabulate(coords, tablefmt=\"fancy_grid\")\n ...print(table)\n ╒══════╤══════╤══════╤══════╤══════╕\n │ 0, 0 │ 0, 1 │ 0, 2 │ 0, 3 │ 0, 4 │\n ├──────┼──────┼──────┼──────┼──────┤\n │ 1, 0 │ 1, 1 │ 1, 2 │ 1, 3 │ 1, 4 │\n ├──────┼──────┼──────┼──────┼──────┤\n │ 2, 0 │ 2, 1 │ 2, 2 │ 2, 3 │ 2, 4 │\n ├──────┼──────┼──────┼──────┼──────┤\n │ 3, 0 │ 3, 1 │ 3, 2 │ 3, 3 │ 3, 4 │\n ├──────┼──────┼──────┼──────┼──────┤\n │ 4, 0 │ 4, 1 │ 4, 2 │ 4, 3 │ 4, 4 │\n ╘══════╧══════╧══════╧══════╧══════╛\n\n Suppose:\n Coordinates (0,1) and (1,2) are the initial support\n Coordinates (0,2) and (1,3) are also in the active set\n Coordinates (0,3) and (1,4) are also in the super active set\n\n Supplying `theta_truth` as a upper triangular diagonally dominate matrix, we can set which of `theta_hat` should be learned first.\n\n This allows us to check if fit is behaving as expected!\n \"\"\"\n N = 5\n mu = np.zeros(N)\n\n theta_truth_tril = (1 / 8) * np.asarray([[8, 0, 0, 0, 1],\n [0, 8, 4, 2, 3],\n [0, 0, 8, 6, 5],\n [0, 0, 0, 8, 7],\n [0, 0, 0, 0, 8]])\n\n theta_truth = (theta_truth_tril + theta_truth_tril.T) / 2\n\n rng = np.random.default_rng(seed)\n x = rng.multivariate_normal(mu, cov=np.linalg.inv(theta_truth), size=n)\n\n return theta_truth, x\n\ndef overlap_covariance_matrix(n: int, max_overlaps: int = 1, seed: int = 0, max_iters: int = 1000):\n rng = np.random.RandomState(seed=seed)\n \n row_overlaps = {i: 0 for i in range(n-1)}\n col_overlaps = {i: 0 for i in range(1, n)}\n \n cov = np.eye(n)\n \n for _ in range(max_iters):\n rows = list(row_overlaps.keys())\n \n row_openings = {}\n for row in rows:\n row_openings[row]= sum(1 for k in col_overlaps if k > row)\n \n num_openings = sum(row_openings.values())\n \n if not num_openings:\n break\n \n\n row_probability = [r/num_openings for r in row_openings.values()]\n row = rng.choice(rows, p=row_probability)\n try:\n col = rng.choice(list(c for c in col_overlaps.keys() if c > row))\n except ValueError:\n continue\n cov[row, col] += 1\n \n row_overlaps[row] += 1\n col_overlaps[col] += 1\n \n row_overlaps = {r: o for (r, o) in row_overlaps.items() if o < max_overlaps}\n col_overlaps = {c: o for (c, o) in col_overlaps.items() if o < max_overlaps}\n \n \n return cov\n \n \ndef _sample_data2(n: int = 1000, seed: int = 0) -> Tuple[np.ndarray, np.ndarray]:\n p = 5\n mu = np.zeros(p)\n theta_truth_tril = overlap_covariance_matrix(p, 1)\n\n theta_truth = (theta_truth_tril + theta_truth_tril.T) / 2\n\n rng = np.random.default_rng(seed)\n x = rng.multivariate_normal(mu, 
cov=np.linalg.inv(theta_truth), size=n)\n \n return theta_truth, x\n\n",
"_____no_output_____"
],
[
"def test_cd_example_2(sample_data, n, algorithm, lXs, seed=0):\n theta_truth, x = sample_data\n\n _, _, _, _, Y, _ = synthetic.preprocess(x, assume_centered=False, cholesky=True)\n\n fit_kwargs = dict(**lXs, scale_x=False, max_active_set_size=10, initial_active_set=0.,\n super_active_set=0., algorithm=algorithm)\n\n f = make_bisect_func(n, Y, seed=seed, **fit_kwargs)\n\n from scipy.optimize import bisect\n try:\n opt_l0 = bisect(f, a=0, b=10)\n except ValueError:\n assume(False)\n np.random.seed(seed)\n results = fit(Y, l0=opt_l0, **fit_kwargs)\n\n theta = results.theta\n \n print(f\"===================================> NNZS = {nonzeros(theta[np.tril_indices(5, k=-1)]).sum()}\")\n assume(nonzeros(theta[np.tril_indices(5, k=-1)]).sum() == n)\n\n cd_indices = top_n_triu_indicies(results.theta, n)\n indices = top_n_triu_indicies(theta_truth, n)\n\n if any(theta_truth[cd_indices] == 0):\n # CD algorithm has selected zero items. This can be fine if we ask for more non-zeros than are in theta_truth!\n # Check if indicies is contained in cd_indices\n indices_set = set(zip(*indices))\n cd_indices_set = set(zip(*indices))\n assert cd_indices_set.issuperset(indices_set)\n should_be_zero_indices_set = cd_indices_set - indices_set\n\n for (i, j) in should_be_zero_indices_set:\n assert theta_truth[i, j] == 0\n\n else:\n np.testing.assert_array_equal(cd_indices, indices)",
"_____no_output_____"
],
[
"lXs={}\ntest_cd_example_2(_sample_data2(), n = 1, algorithm = 'CD', lXs=lXs)",
"gL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 4.95906\nfit loop1\ncurrent_iter: 2 cur_objective = 4.90657\nfit loop2\ncurrent_iter: 3 cur_objective = 4.89955\nfit loop3\ncurrent_iter: 4 cur_objective = 4.89858\nfit loop4\ncurrent_iter: 5 cur_objective = 4.89844\nfit loop5\ncurrent_iter: 6 cur_objective = 4.89841\nfit loop6\ncurrent_iter: 7 cur_objective = 4.8984\nfit loop7\ncurrent_iter: 8 cur_objective = 4.8984\ngl0Learn found solution with 10 non-zeros with parameters:\n\t l0 = 0.0)\n\t cost = -9\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.03075\nfit loop1\ncurrent_iter: 2 cur_objective = 7.03075\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 10.0)\n\t cost = 1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.03075\nfit loop1\ncurrent_iter: 2 cur_objective = 7.03075\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 5.0)\n\t cost = 1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.03075\nfit loop1\ncurrent_iter: 2 cur_objective = 7.03075\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 2.5)\n\t cost = 1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.03075\nfit loop1\ncurrent_iter: 2 cur_objective = 7.03075\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 1.25)\n\t cost = 1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 6.80996\nfit loop1\ncurrent_iter: 2 cur_objective = 6.80032\nfit loop2\ncurrent_iter: 3 cur_objective = 6.79856\nfit loop3\ncurrent_iter: 4 cur_objective = 6.79823\nfit loop4\ncurrent_iter: 5 cur_objective = 6.79818\nfit loop5\ncurrent_iter: 6 cur_objective = 6.79817\nfit loop6\ncurrent_iter: 7 cur_objective = 6.79816\ngl0Learn found solution with 3 non-zeros with parameters:\n\t l0 = 0.625)\n\t cost = -2\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.10638\nfit loop1\ncurrent_iter: 2 cur_objective = 7.10156\nfit loop2\ncurrent_iter: 3 cur_objective = 7.10018\nfit loop3\ncurrent_iter: 4 cur_objective = 7.09979\nfit loop4\ncurrent_iter: 5 cur_objective = 7.09968\nfit loop5\ncurrent_iter: 6 cur_objective = 7.09965\nfit loop6\ncurrent_iter: 7 cur_objective = 7.09964\nfit loop7\ncurrent_iter: 8 cur_objective = 7.09964\ngl0Learn found solution with 1 non-zeros with parameters:\n\t l0 = 0.9375)\n\t cost = 0\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.10638\nfit loop1\ncurrent_iter: 2 cur_objective = 7.10156\nfit loop2\ncurrent_iter: 3 cur_objective = 7.10018\nfit loop3\ncurrent_iter: 4 cur_objective = 7.09979\nfit loop4\ncurrent_iter: 5 cur_objective = 7.09968\nfit loop5\ncurrent_iter: 6 cur_objective = 7.09965\nfit loop6\ncurrent_iter: 7 cur_objective = 7.09964\nfit loop7\ncurrent_iter: 8 cur_objective = 7.09964\n===================================> NNZS = 1\n"
],
[
"n = 1\nopt_l0 = 0\nalgorithm = \"CDPSI\"\nseed = 0\n\ntheta_truth, x = _sample_data2()\n_, _, _, _, Y, _ = synthetic.preprocess(x, assume_centered=False, cholesky=True)\n\nfit_kwargs = dict(**lXs, scale_x=False, max_active_set_size=1, initial_active_set=np.inf,\n super_active_set=0., algorithm=algorithm)\n\nnp.random.seed(seed)\nresults = fit(Y, l0=opt_l0, **fit_kwargs)\n\ntheta = results.theta\n\nprint(f\"===================================> NNZS = {nonzeros(theta[np.tril_indices(5, k=-1)]).sum()}\")\n#assume(nonzeros(theta[np.tril_indices(5, k=-1)]).sum() == n)\n\ncd_indices = top_n_triu_indicies(results.theta, n)\nindices = top_n_triu_indicies(theta_truth, n)",
"gL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfitpsi called \nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 7.03075\nfit loop1\ncurrent_iter: 2 cur_objective = 7.03075\nthis->params.max_active_set_size = 1\nthis->active_set.size() = 0\nn_to_keep = 1\nfit loop2\ncurrent_iter: 3 cur_objective = 6.50749\nfit loop3\ncurrent_iter: 4 cur_objective = 6.48649\nfit loop4\ncurrent_iter: 5 cur_objective = 6.48387\nfit loop5\ncurrent_iter: 6 cur_objective = 6.4835\nfit loop6\ncurrent_iter: 7 cur_objective = 6.48345\nfit loop7\ncurrent_iter: 8 cur_objective = 6.48344\nfit loop8\ncurrent_iter: 9 cur_objective = 6.48344\nthis->params.max_active_set_size = 1\nthis->active_set.size() = 1\nn_to_keep = 0\nPre psi cost: 6.48344 \nPSI iter: 0 \nPSI iter: 0 Swapping row: 0\npsi_row_fit row = 0 \nselected super_active_set start = {0, 1} \nselected super_active_set end = {1, 2} \nzero_indices = 1\n 3\n 4\n \nnon_zero_indices = 2\n \nNon Zero Index Loop: (0, 2) \nNo swap for (0, 2) \nPSI iter: 0 Swapping row: 1\npsi_row_fit row = 1 \nselected super_active_set start = {1, 2} \nselected super_active_set end = {2, 3} \nPSI iter: 0 Swapping row: 2\npsi_row_fit row = 2 \nselected super_active_set start = {2, 3} \nselected super_active_set end = {3, 4} \nPSI iter: 0 Swapping row: 3\npsi_row_fit row = 3 \nselected super_active_set start = {3, 4} \nselected super_active_set end = {244901324817184, 7} \n===================================> NNZS = 1\n"
],
[
"results.theta",
"_____no_output_____"
],
[
"theta_truth, x = _sample_data2()\ntheta_truth",
"_____no_output_____"
],
[
"def overlap_covariance_matrix(n: int, max_overlaps: int = 1, seed: int = 0, max_iters: int = 1000, decay=.99):\n rng = np.random.RandomState(seed=seed)\n \n overlaps = {i: 0 for i in range(n)} \n cov = np.eye(n)\n \n v = 1\n \n while len(overlaps) >= 2:\n rows = list(overlaps.keys())\n\n row, col = rng.choice(rows, size=2, replace=False)\n \n overlaps[row] += 1\n overlaps[col] += 1\n \n cov[row, col] += v\n v *= decay\n \n overlaps = {r: o for (r, o) in overlaps.items() if o < max_overlaps} \n \n return cov\n \n ",
"_____no_output_____"
],
[
"p = 5\nmu = np.zeros(p)\ntheta_truth_tril = overlap_covariance_matrix(p, 2, decay=.8)\n\ntheta_truth = (theta_truth_tril + theta_truth_tril.T) / 2\n\nrng = np.random.default_rng(seed)\nx = rng.multivariate_normal(mu, cov=np.linalg.inv(theta_truth), size=n)\n",
"_____no_output_____"
],
[
"theta_truth",
"_____no_output_____"
],
[
"\n@composite\ndef overlap_covariance_matrix(draw,\n n: hypothesis.strategies.SearchStrategy[int] = integers(3, 10),\n seed: hypothesis.strategies.SearchStrategy[int] = integers(0, 2**32 - 1),\n max_overlaps: int = 1,\n decay=.99):\n n = draw(n)\n seed = draw(seed)\n\n overlaps = {i: 0 for i in range(n)}\n cov = np.eye(n)\n\n v = 1\n\n rng = np.random.RandomState(seed=seed)\n while len(overlaps) >= 2:\n rows = list(overlaps.keys())\n\n row, col = rng.choice(rows, size=2, replace=False)\n\n overlaps[row] += 1\n overlaps[col] += 1\n\n cov[row, col] += v\n v *= decay\n\n overlaps = {r: o for (r, o) in overlaps.items() if o < max_overlaps}\n \n cov = (cov + cov.T)/2\n\n return cov",
"_____no_output_____"
],
[
"g = overlap_covariance_matrix(max_overlaps=1, decay=.8)",
"_____no_output_____"
],
[
"theta_truth = g.example()\ntheta_truth",
"_____no_output_____"
],
[
"def sample_from_cov(cov: np.ndarray, n: int = 1000, seed: int = 9) -> np.ndarray:\n p, p2 = cov.shape\n assert p == p2\n\n mu = np.zeros(p)\n rng = np.random.default_rng(seed)\n x = rng.multivariate_normal(mu, cov=np.linalg.inv(cov), size=n)\n\n return x",
"_____no_output_____"
],
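[
"# A quick sanity check (an added illustration, not part of the original\n# analysis): with many samples, the empirical precision matrix\n# inv(cov(x)) should roughly recover the theta_truth generated above.\nx_check = sample_from_cov(theta_truth, n=200000)\nemp_precision = np.linalg.inv(np.cov(x_check, rowvar=False))\nnp.round(emp_precision, decimals=1)",
"_____no_output_____"
],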
[
"x = sample_from_cov(theta_truth)\n_, _, _, _, Y, _ = synthetic.preprocess(x, assume_centered=False, cholesky=True)\nresults = fit(x, l0=0, scale_x=True, max_active_set_size=1, initial_active_set=np.inf, super_active_set=0.)\n\ntheta_truth_copy = np.copy(theta_truth)\nnp.fill_diagonal(theta_truth_copy, 0)\ni, j = np.unravel_index(np.argmax(theta_truth_copy), theta_truth.shape)",
"gL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 6.11474\nfit loop1\ncurrent_iter: 2 cur_objective = 6.11474\nthis->params.max_active_set_size = 1\nthis->active_set.size() = 0\nn_to_keep = 1\nfit loop2\ncurrent_iter: 3 cur_objective = 5.53552\nfit loop3\ncurrent_iter: 4 cur_objective = 5.50832\nfit loop4\ncurrent_iter: 5 cur_objective = 5.50442\nfit loop5\ncurrent_iter: 6 cur_objective = 5.50379\nfit loop6\ncurrent_iter: 7 cur_objective = 5.50368\nfit loop7\ncurrent_iter: 8 cur_objective = 5.50366\nfit loop8\ncurrent_iter: 9 cur_objective = 5.50366\nthis->params.max_active_set_size = 1\nthis->active_set.size() = 1\nn_to_keep = 0\n"
],
[
"results.theta[i, j] > np.mean(theta_truth_copy)",
"_____no_output_____"
],
[
"results.theta",
"_____no_output_____"
],
[
"i, j",
"_____no_output_____"
],
[
"theta_truth",
"_____no_output_____"
],
[
"p = 10\nmodule = hypothesis.strategies._internal.core.RandomSeeder(0)\nnnz = 4\nalgorithm = 'CD'\nlXs = {}",
"_____no_output_____"
],
[
"theta_truth",
"_____no_output_____"
],
[
"def overlap_covariance_matrix(n: int,\n seed: int = 0,\n max_overlaps: int = 1,\n decay=.99):\n\n overlaps = {i: 0 for i in range(n)}\n cov = np.eye(n)\n\n v = 1\n\n rng = np.random.RandomState(seed=seed)\n while len(overlaps) >= 2:\n rows = list(overlaps.keys())\n\n row, col = rng.choice(rows, size=2, replace=False)\n\n overlaps[row] += 1\n overlaps[col] += 1\n\n cov[row, col] += v\n v *= decay\n\n overlaps = {r: o for (r, o) in overlaps.items() if o < max_overlaps}\n\n cov = (cov + cov.T)/2\n\n return cov",
"_____no_output_____"
],
[
"1 - np.exp(5 - 6)",
"_____no_output_____"
],
[
"overlap_covariance_matrix(5, max_overlaps=1, decay=.99)",
"_____no_output_____"
],
[
"np.linalg.eigvalsh(overlap_covariance_matrix(10, max_overlaps=1, decay=.99))",
"_____no_output_____"
],
[
"theta_truth = overlap_covariance_matrix(n=p, decay=.8)\nx = sample_from_cov(theta_truth)\n\n_, _, _, _, Y, _ = synthetic.preprocess(x, assume_centered=False, cholesky=True)\n\nfit_kwargs = dict(**lXs, scale_x=False, max_active_set_size=p*(p-1)//2, initial_active_set=0.,\n super_active_set=0., algorithm=algorithm)\n\nf = make_bisect_func(nnz, Y, **fit_kwargs)\n\nfrom scipy.optimize import bisect\ntry:\n opt_l0 = bisect(f, a=0, b=10)\nexcept ValueError:\n assume(False)\n\nnp.random.seed(0)\nresults = fit(Y, l0=opt_l0, **fit_kwargs)\n\ntheta = results.theta",
"gL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 9.97932\nfit loop1\ncurrent_iter: 2 cur_objective = 9.97091\nfit loop2\ncurrent_iter: 3 cur_objective = 9.97034\nfit loop3\ncurrent_iter: 4 cur_objective = 9.97028\nfit loop4\ncurrent_iter: 5 cur_objective = 9.97028\ngl0Learn found solution with 45 non-zeros with parameters:\n\t l0 = 0.0)\n\t cost = -41\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 11.3992\nfit loop1\ncurrent_iter: 2 cur_objective = 11.3992\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 10.0)\n\t cost = 4\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 11.3992\nfit loop1\ncurrent_iter: 2 cur_objective = 11.3992\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 5.0)\n\t cost = 4\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 11.3992\nfit loop1\ncurrent_iter: 2 cur_objective = 11.3992\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 2.5)\n\t cost = 4\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 11.3992\nfit loop1\ncurrent_iter: 2 cur_objective = 11.3992\ngl0Learn found solution with 0 non-zeros with parameters:\n\t l0 = 1.25)\n\t cost = 4\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 11.3935\nfit loop1\ncurrent_iter: 2 cur_objective = 11.3925\nfit loop2\ncurrent_iter: 3 cur_objective = 11.3924\nfit loop3\ncurrent_iter: 4 cur_objective = 11.3923\nfit loop4\ncurrent_iter: 5 cur_objective = 11.3923\ngl0Learn found solution with 1 non-zeros with parameters:\n\t l0 = 0.625)\n\t cost = 3\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 11.0938\nfit loop1\ncurrent_iter: 2 cur_objective = 11.0927\nfit loop2\ncurrent_iter: 3 cur_objective = 11.0926\nfit loop3\ncurrent_iter: 4 cur_objective = 11.0925\nfit loop4\ncurrent_iter: 5 cur_objective = 11.0925\ngl0Learn found solution with 2 non-zeros with parameters:\n\t l0 = 0.3125)\n\t cost = 2\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 10.7175\nfit loop1\ncurrent_iter: 2 cur_objective = 10.7163\nfit loop2\ncurrent_iter: 3 cur_objective = 10.7162\nfit loop3\ncurrent_iter: 4 cur_objective = 10.7161\nfit loop4\ncurrent_iter: 5 cur_objective = 10.7161\ngl0Learn found solution with 3 non-zeros with parameters:\n\t l0 = 0.15625)\n\t cost = 1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 10.4546\nfit loop1\ncurrent_iter: 2 cur_objective = 10.4531\nfit loop2\ncurrent_iter: 3 cur_objective = 10.453\nfit loop3\ncurrent_iter: 4 cur_objective = 10.4529\nfit loop4\ncurrent_iter: 5 cur_objective = 10.4529\ngl0Learn found solution with 5 non-zeros with parameters:\n\t l0 = 0.078125)\n\t cost = -1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 10.6003\nfit loop1\ncurrent_iter: 2 cur_objective = 10.5992\nfit loop2\ncurrent_iter: 3 cur_objective = 10.599\nfit loop3\ncurrent_iter: 4 cur_objective = 10.5989\nfit loop4\ncurrent_iter: 5 cur_objective = 10.5989\ngl0Learn found solution with 3 non-zeros with parameters:\n\t l0 = 0.1171875)\n\t cost = 1\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 10.538\nfit loop1\ncurrent_iter: 2 cur_objective = 10.5366\nfit 
loop2\ncurrent_iter: 3 cur_objective = 10.5364\nfit loop3\ncurrent_iter: 4 cur_objective = 10.5364\nfit loop4\ncurrent_iter: 5 cur_objective = 10.5364\ngl0Learn found solution with 4 non-zeros with parameters:\n\t l0 = 0.09765625)\n\t cost = 0\ngL0LearnFit 1\ngL0LearnFit 2\ngL0LearnFit 2\nfit 1\nfit loop0\ncurrent_iter: 1 cur_objective = 10.538\nfit loop1\ncurrent_iter: 2 cur_objective = 10.5366\nfit loop2\ncurrent_iter: 3 cur_objective = 10.5364\nfit loop3\ncurrent_iter: 4 cur_objective = 10.5364\nfit loop4\ncurrent_iter: 5 cur_objective = 10.5364\n"
],
[
"np.round(theta, decimals=1)",
"_____no_output_____"
],
[
"np.round(theta_truth, decimals=1)",
"_____no_output_____"
],
[
"p = 5\nseed = 1\n\ntheta_truth = overlap_covariance_matrix(n=p, seed=seed, decay=.8)\nx = sample_from_cov(n=30*p**2, cov=theta_truth)\n\n_, _, _, _, Y, _ = synthetic.preprocess(x, assume_centered=False, cholesky=True)\n\npossible_active_set = np.where(np.abs(np.triu(theta_truth, k=1)) > 0)\n\nfull_super_active_set = np.asarray(possible_active_set).T\nidx = np.random.randint(full_super_active_set.shape[0], size=1)\ninitial_super_active_set = full_super_active_set[idx, :]\n\nresults = fit(Y, **lXs, initial_active_set=np.inf, super_active_set=initial_super_active_set,\n max_active_set_size=p**2)\n\ncd_indices = top_n_triu_indicies(results.theta, 1)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72b5d1283e5e0b4ec9783f312a403b186cbb017 | 100,601 | ipynb | Jupyter Notebook | LSTM.ipynb | zahraDehghanian97/SCINet | 7b50a90a67f028886765e6e81fa5522dc783bbb6 | [
"Apache-2.0"
] | null | null | null | LSTM.ipynb | zahraDehghanian97/SCINet | 7b50a90a67f028886765e6e81fa5522dc783bbb6 | [
"Apache-2.0"
] | null | null | null | LSTM.ipynb | zahraDehghanian97/SCINet | 7b50a90a67f028886765e6e81fa5522dc783bbb6 | [
"Apache-2.0"
] | null | null | null | 105.121212 | 40,694 | 0.741494 | [
[
[
"<a href=\"https://colab.research.google.com/github/zahraDehghanian97/SCINet/blob/master/LSTM.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# **Data Preprocess**",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv('/content/MCIRD_aaic2021_train.csv', header=0)\ndf.head()\n",
"_____no_output_____"
],
[
"df1 = df[['subscriber_ecid', 'data_usage_volume']]\ndf1.head()",
"_____no_output_____"
]
],
[
[
"خیلی از مقادیر دیتاست -0.0139766307813943 این هستن و میشه به راحتی اینارو جایگزین کرد",
"_____no_output_____"
]
],
[
[
"unique_sub_id = set(df1['subscriber_ecid'].values)",
"_____no_output_____"
],
[
"len(unique_sub_id)",
"_____no_output_____"
]
],
[
[
"use just sample with 69 element",
"_____no_output_____"
]
],
[
[
"import numpy as np\ndata_list = []\nfor sub_id in unique_sub_id:\n # print(sub_id)\n temp = df1[df1['subscriber_ecid']==sub_id].values\n if temp.shape[0] < 69:\n print(temp.shape)\n print(sub_id)\n else:\n data_list.append(temp)",
"(60, 2)\n-QXHomYaJxYXi\n(61, 2)\n0o-xDa8uTNBGu\n(1, 2)\n37v4v4PPObMC_\n(18, 2)\n1EN04BS-9nKgc\n(1, 2)\n32ez6CX89v6KZ\n(68, 2)\n-DgEYYT0gqMqr\n(67, 2)\n-XU6p4P-782mp\n(2, 2)\n28gWxNYMU_2dg\n(53, 2)\n0T7ixhiDdZ8TL\n(22, 2)\n-gjfIaG2oxwzj\n"
]
],
[
[
"data plot",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfor j in range(len(data_list)) :\n one_sample = data_list[j]\n y = one_sample[:,1]\n x = []\n\n for i in range(len(one_sample[:,1])):\n x.append(i)\n plt.plot(x,y)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"# **LSTM Model**",
"_____no_output_____"
]
],
[
[
"X = []\ny = []\nmem_step = 14\n\nfor item in data_list:\n for i in range(mem_step, 69):\n X.append(item[i-mem_step:i, 1:2])\n # print(item[i-mem_step:i, 1:2])\n y.append(item[i, 1:2])\n\nX, y = np.array(X), np.array(y)\nX, y = X.astype('float32'), y.astype('float32')",
"_____no_output_____"
],
[
" from sklearn.model_selection import train_test_split\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"import tensorflow as tf\nimport pandas as pd\nimport numpy as np\nfrom tensorflow import keras\nimport matplotlib.pyplot as plt\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nfrom keras.layers import Dropout\n\n# check GPU availability\nprint(\"GPU is available :)\" if tf.config.list_physical_devices(\"GPU\") else \"Not available :(\")\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"GPU is available :)\n"
],
[
"from tensorflow.keras.backend import clear_session \nclear_session()\nkeras_reg = Sequential()\n\nkeras_reg.add(LSTM(units = 64, return_sequences = True, input_shape = (X.shape[1], 1)))\nkeras_reg.add(Dropout(0.4))\n\nkeras_reg.add(LSTM(units = 100, return_sequences = True))\nkeras_reg.add(Dropout(0.4))\n\nkeras_reg.add(LSTM(units = 100, return_sequences = True))\nkeras_reg.add(Dropout(0.2))\n\nkeras_reg.add(LSTM(units = 16))\nkeras_reg.add(Dropout(0.2))\n\n# keras_reg.add(Dense(units=32, activation='relu' ))\nkeras_reg.add(Dense(units=16, activation='relu' ))\nkeras_reg.add(Dense(units = 1))\nkeras_reg.compile(optimizer = 'adam',\n loss = 'mean_squared_error',\n metrics='mse')\n# keras_reg.compile(optimizer = 'rmsprop', loss = 'mean_squared_error')\nhistory = keras_reg.fit(X_train, y_train,validation_split=0.2,shuffle=False,epochs = 30,verbose = 1)",
"Epoch 1/30\n99/99 [==============================] - 8s 28ms/step - loss: 14.3707 - mse: 14.3707 - val_loss: 5.4612 - val_mse: 5.4612\nEpoch 2/30\n99/99 [==============================] - 1s 11ms/step - loss: 13.0289 - mse: 13.0289 - val_loss: 5.0439 - val_mse: 5.0439\nEpoch 3/30\n99/99 [==============================] - 1s 11ms/step - loss: 12.5689 - mse: 12.5689 - val_loss: 4.7255 - val_mse: 4.7255\nEpoch 4/30\n99/99 [==============================] - 1s 11ms/step - loss: 12.4230 - mse: 12.4230 - val_loss: 4.5919 - val_mse: 4.5919\nEpoch 5/30\n99/99 [==============================] - 1s 11ms/step - loss: 12.0877 - mse: 12.0877 - val_loss: 4.2216 - val_mse: 4.2216\nEpoch 6/30\n99/99 [==============================] - 1s 11ms/step - loss: 12.0400 - mse: 12.0400 - val_loss: 4.1053 - val_mse: 4.1053\nEpoch 7/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.7191 - mse: 11.7191 - val_loss: 3.9907 - val_mse: 3.9907\nEpoch 8/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.6767 - mse: 11.6767 - val_loss: 3.8909 - val_mse: 3.8909\nEpoch 9/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.7431 - mse: 11.7431 - val_loss: 3.9236 - val_mse: 3.9236\nEpoch 10/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.5723 - mse: 11.5723 - val_loss: 4.0515 - val_mse: 4.0515\nEpoch 11/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.5108 - mse: 11.5108 - val_loss: 4.0628 - val_mse: 4.0628\nEpoch 12/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.4352 - mse: 11.4352 - val_loss: 3.9515 - val_mse: 3.9515\nEpoch 13/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.5064 - mse: 11.5064 - val_loss: 3.9903 - val_mse: 3.9903\nEpoch 14/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.2429 - mse: 11.2429 - val_loss: 3.9862 - val_mse: 3.9862\nEpoch 15/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.3599 - mse: 11.3599 - val_loss: 4.1523 - val_mse: 4.1523\nEpoch 16/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.2702 - mse: 11.2702 - val_loss: 4.1504 - val_mse: 4.1504\nEpoch 17/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.4893 - mse: 11.4893 - val_loss: 4.2371 - val_mse: 4.2371\nEpoch 18/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.2904 - mse: 11.2904 - val_loss: 4.2310 - val_mse: 4.2310\nEpoch 19/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.9230 - mse: 10.9230 - val_loss: 4.1002 - val_mse: 4.1002\nEpoch 20/30\n99/99 [==============================] - 1s 11ms/step - loss: 11.3282 - mse: 11.3282 - val_loss: 4.6562 - val_mse: 4.6562\nEpoch 21/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.9069 - mse: 10.9069 - val_loss: 4.2636 - val_mse: 4.2636\nEpoch 22/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.9787 - mse: 10.9787 - val_loss: 4.1366 - val_mse: 4.1366\nEpoch 23/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.8652 - mse: 10.8652 - val_loss: 4.1003 - val_mse: 4.1003\nEpoch 24/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.9224 - mse: 10.9224 - val_loss: 4.1032 - val_mse: 4.1032\nEpoch 25/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.9260 - mse: 10.9260 - val_loss: 4.3015 - val_mse: 4.3015\nEpoch 26/30\n99/99 [==============================] - 1s 11ms/step - loss: 10.4223 - mse: 10.4223 - val_loss: 4.1306 - val_mse: 4.1306\nEpoch 27/30\n99/99 
[==============================] - 1s 13ms/step - loss: 10.7209 - mse: 10.7209 - val_loss: 4.1024 - val_mse: 4.1024\nEpoch 28/30\n99/99 [==============================] - 2s 20ms/step - loss: 10.7294 - mse: 10.7294 - val_loss: 4.4491 - val_mse: 4.4491\nEpoch 29/30\n99/99 [==============================] - 2s 19ms/step - loss: 11.0960 - mse: 11.0960 - val_loss: 4.2069 - val_mse: 4.2069\nEpoch 30/30\n99/99 [==============================] - 2s 18ms/step - loss: 10.3782 - mse: 10.3782 - val_loss: 4.3662 - val_mse: 4.3662\n"
],
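[
"# A hypothetical one-step forecast demo (an added sketch, not part of the\n# original experiment): feed the last mem_step observations of the first\n# subscriber through the trained model to predict the next usage value.\nsample_window = data_list[0][-mem_step:, 1:2].astype('float32')\nnext_usage = keras_reg.predict(sample_window[np.newaxis, ...])\nprint(next_usage)",
"_____no_output_____"
],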
[
"from math import sqrt\n\ndef plot_loss(history):\n fig = plt.figure(figsize=(7,7))\n plt.plot(history.history['mse'], label='mse')\n plt.plot(history.history['val_mse'], label='val_mse')\n plt.xlabel('Epoch')\n plt.ylabel('Error')\n plt.legend()\n plt.grid(True)\n\nplot_loss(history)\n\npredicted_stock_price = keras_reg.predict(X_test)\nfrom sklearn.metrics import mean_squared_error\nprint(sqrt(mean_squared_error(y_test, predicted_stock_price)))",
"2.784862872023127\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e72b66d48d160d64e784b7d0559413ac02c14fbe | 14,441 | ipynb | Jupyter Notebook | course/named_entity_recognition/00_ner_with_spacy.ipynb | augustodn/NPL_with_Transformers | f80066af459fa3cba8a8719eaf4209c57118635b | [
"MIT"
] | 1 | 2022-03-11T12:11:48.000Z | 2022-03-11T12:11:48.000Z | course/named_entity_recognition/00_ner_with_spacy.ipynb | augustodn/NLP_with_Transformers | f80066af459fa3cba8a8719eaf4209c57118635b | [
"MIT"
] | null | null | null | course/named_entity_recognition/00_ner_with_spacy.ipynb | augustodn/NLP_with_Transformers | f80066af459fa3cba8a8719eaf4209c57118635b | [
"MIT"
] | null | null | null | 49.455479 | 415 | 0.625788 | [
[
[
"# Named Entity Recognition (NER) With SpaCy\n\nWe will be performing NER on threads from the **Investing** subreddit, but first let's test SpaCy for named entity recognition (NER) using an example from */r/investing*.",
"_____no_output_____"
]
],
[
[
"import spacy\nfrom spacy import displacy",
"_____no_output_____"
],
[
"!python -m spacy download en_core_web_sm",
"Collecting en-core-web-sm==3.0.0\n Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl (13.7 MB)\n\u001b[K |████████████████████████████████| 13.7 MB 65 kB/s eta 0:00:011 |████████████████▍ | 7.0 MB 112 kB/s eta 0:01:00\n\u001b[?25hRequirement already satisfied: spacy<3.1.0,>=3.0.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from en-core-web-sm==3.0.0) (3.0.6)\nRequirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.0.6)\nRequirement already satisfied: packaging>=20.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (20.4)\nRequirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (0.7.6)\nRequirement already satisfied: pydantic<1.8.0,>=1.7.1 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (1.7.4)\nRequirement already satisfied: catalogue<2.1.0,>=2.0.3 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.0.6)\nRequirement already satisfied: srsly<3.0.0,>=2.4.1 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.4.2)\nRequirement already satisfied: spacy-legacy<3.1.0,>=3.0.4 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (3.0.8)\nRequirement already satisfied: pathy>=0.3.5 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (0.6.1)\nRequirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (1.0.6)\nRequirement already satisfied: jinja2 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.11.2)\nRequirement already satisfied: typer<0.4.0,>=0.3.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (0.3.2)\nRequirement already satisfied: setuptools in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (50.3.1.post20201107)\nRequirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.24.0)\nRequirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (4.50.2)\nRequirement already satisfied: wasabi<1.1.0,>=0.8.1 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (0.9.0)\nRequirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (3.0.6)\nRequirement already satisfied: numpy>=1.15.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from 
spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (1.19.2)\nRequirement already satisfied: thinc<8.1.0,>=8.0.3 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (8.0.13)\nRequirement already satisfied: six in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from packaging>=20.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (1.15.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from packaging>=20.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.4.7)\nRequirement already satisfied: smart-open<6.0.0,>=5.0.0 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from pathy>=0.3.5->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (5.2.1)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from jinja2->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (1.1.1)\nRequirement already satisfied: click<7.2.0,>=7.1.1 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from typer<0.4.0,>=0.3.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (7.1.2)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from requests<3.0.0,>=2.13.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2020.6.20)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from requests<3.0.0,>=2.13.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from requests<3.0.0,>=2.13.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/Caskroom/miniconda/base/envs/ml/lib/python3.8/site-packages (from requests<3.0.0,>=2.13.0->spacy<3.1.0,>=3.0.0->en-core-web-sm==3.0.0) (1.25.11)\nInstalling collected packages: en-core-web-sm\nSuccessfully installed en-core-web-sm-3.0.0\n\u001b[38;5;2m✔ Download and installation successful\u001b[0m\nYou can now load the package via spacy.load('en_core_web_sm')\n"
],
[
"nlp = spacy.load('en_core_web_sm')",
"_____no_output_____"
],
[
"txt = (\"Given the recent downturn in stocks especially in tech which is likely to persist as yields keep going up, \"\n \"I thought it would be prudent to share the risks of investing in ARK ETFs, written up very nicely by \"\n \"[The Bear Cave](https://thebearcave.substack.com/p/special-edition-will-ark-invest-blow). The risks comes \"\n \"primarily from ARK's illiquid and very large holdings in small cap companies. ARK is forced to sell its \"\n \"holdings whenever its liquid ETF gets hit with outflows as is especially the case in market downturns. \"\n \"This could force very painful liquidations at unfavorable prices and the ensuing crash goes into a \"\n \"positive feedback loop leading into a death spiral enticing even more outflows and predatory shorts.\")",
"_____no_output_____"
],
[
"doc = nlp(txt)",
"_____no_output_____"
],
[
"displacy.render(doc, style='ent')\n# displacy.serve(doc, style='ent') if not running in a notebook",
"_____no_output_____"
]
],
[
[
"Immediately we're able to produce not perfect, but pretty good NER. We are using the [`en_core_web_sm`](https://spacy.io/models/en) model - `en` referring to English and `sm` small.\n\nThe model is accurately identifying ARK as an organization. It does also classify ETF (exchange traded fund) as an organization, which is not the case (an ETF is a grouping of securities on the markets), but it's easy to see why this is being classified as one. The other tag we can see is `WORK_OF_ART`, it isn't inherently clear what exactly this means, so we can get more information using `spacy.explain`:",
"_____no_output_____"
]
],
[
[
"spacy.explain('WORK_OF_ART')",
"_____no_output_____"
]
],
[
[
"And we can see that this description fits well to the tagged item, which refers to an article (although not quite a book).\n\nWe have a visual output from our tagged text, but this won't be particularly useful programatically. What we need is a way to extract the relevant tags (the organizations) from our text. To do that we can use `doc.ents` which will return a list of all identified entities.\n\nEach item in this entity list contains two attributes that we are interested in, `label_` and `text`:",
"_____no_output_____"
]
],
[
[
"for entity in doc.ents:\n print(f\"{entity.label_}: {entity.text}\")",
"GPE: ARK\nORG: ARK\nORG: ARK\n"
]
],
[
[
"We're almost there. Now, we need to filter out any entities that are not `ORG` entities, and append those remaining `ORG`s to an organization list:",
"_____no_output_____"
]
],
[
[
"# initialize our list\norg_list = []\n\nfor entity in doc.ents:\n # if label_ is ORG, we append text, otherwise ignore\n if entity.label_ == 'ORG':\n org_list.append(entity.text)\n\norg_list",
"_____no_output_____"
],
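[
"# An added sketch (not part of the original lesson): the same ORG\n# extraction wrapped in a reusable function, so any text can be processed\n# the same way before deduplication.\ndef get_orgs(text):\n    doc = nlp(text)\n    return [entity.text for entity in doc.ents if entity.label_ == 'ORG']\n\nget_orgs(txt)",
"_____no_output_____"
],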
[
"# we don't need to see 'ARK' three times, so we use set() to remove duplicates, and then convert back to list\norg_list = list(set(org_list))\n\norg_list",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e72b6919398f50cb33bfdfab67103d8403e3d352 | 241,995 | ipynb | Jupyter Notebook | notebooks/1.1-cl-quicklook_at_processed_data.ipynb | utplanets/deepmars | ba306aa9b25b654636b61cf952af2791b7ed0e56 | [
"MIT"
] | 2 | 2021-08-08T03:06:58.000Z | 2021-11-25T04:06:00.000Z | notebooks/1.1-cl-quicklook_at_processed_data.ipynb | utplanets/deepmars | ba306aa9b25b654636b61cf952af2791b7ed0e56 | [
"MIT"
] | null | null | null | notebooks/1.1-cl-quicklook_at_processed_data.ipynb | utplanets/deepmars | ba306aa9b25b654636b61cf952af2791b7ed0e56 | [
"MIT"
] | 2 | 2020-11-23T09:38:26.000Z | 2021-02-26T01:14:28.000Z | 1,136.126761 | 134,820 | 0.952904 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport h5py\nimport os\nimport dotenv\nproject_dir = os.path.join(os.getcwd(), os.pardir)\ndotenv_path = os.path.join(project_dir, '.env')\nfound = dotenv.load_dotenv(dotenv_path)",
"_____no_output_____"
],
[
"def plot_loc(loc, index,imgs, craters,fmt=\"05d\"):\n name='img_{}'.format(index)\n fig = plt.figure(figsize=[20, 20])\n [ax1, ax2, ax3] = fig.subplots(1,3)\n ax1.imshow(imgs['input_images'][loc][...], origin='upper', cmap='Greys_r')#, vmin=120, vmax=200)\n ax2.imshow(imgs['target_masks'][loc][...], origin='upper', cmap='Greys_r')\n im = np.dstack([imgs['input_images'][loc][...],imgs['input_images'][loc][...],imgs['input_images'][loc][...]])\n im[...,0][imgs['target_masks'][loc][...]>0.9]=0.\n im[...,1][imgs['target_masks'][loc][...]>0.9]=0.\n im[...,2][imgs['target_masks'][loc][...]>0.9]=255.\n ax3.imshow(im, origin='upper')\n plt.show()\n print(\"Long Lat Bounds\")\n print(\"\\t{}\".format(imgs['longlat_bounds'].attrs['definition']))\n print(\"\\t{}\".format(imgs['longlat_bounds'][name][...]))\n print(\"\\t{}\".format(imgs['pix_bounds'].attrs['definition']))\n print(\"\\t{}\".format(imgs['pix_bounds'][name][...]))\n \n print(\"\\tFound {} craters\".format(craters[name].size))\n return imgs['pix_bounds'][name][...]\n",
"_____no_output_____"
],
[
"processed_data = os.path.join(os.getenv(\"DM_ROOTDIR\"),\"data/processed\")\ngen_imgs = h5py.File(os.path.join(processed_data, 'ran_images_175000.hdf5'), 'r')\n#gen_craters = h5py.File(os.path.join(processed_data, 'train_craters.hdf5'), 'r')\ngen_craters = pd.HDFStore(processed_data + '/ran_craters_175000.hdf5', 'r')\n",
"_____no_output_____"
]
],
[
[
"Print out the header for the HDF file",
"_____no_output_____"
]
],
[
[
"print(\"Images files\")\nfor k in gen_imgs.keys():\n print(\"{} len({})\".format(k,len(gen_imgs[k])))\n for k2,v2 in gen_imgs[k].attrs.items():\n print(\"\\t {}={}\".format(k2,v2))",
"Images files\ncll_xy len(95)\n\t definition=(x, y) pixel coordinates of the central long / lat.\ninput_images len(100)\n\t definition=Input image dataset.\nlonglat_bounds len(95)\n\t definition=(long min, long max, lat min, lat max) of the cropped image.\npix_bounds len(95)\n\t definition=Pixel bounds of the Global DEM region that was cropped for the image.\npix_distortion_coefficient len(95)\n\t definition=Distortion coefficient due to projection transformation.\ntarget_masks len(100)\n\t definition=Target mask dataset.\n"
]
],
[
[
"For a chosen location (image index) in the file, plot the DEM image, crater image, and overlay craters on the DEM",
"_____no_output_____"
]
],
[
[
"loc=48\nll = plot_loc(loc,175000+loc,gen_imgs,gen_craters)",
"_____no_output_____"
]
],
[
[
"Load the DEM raw data and find the corresponding location bsed on pixel bounds listed in the image",
"_____no_output_____"
]
],
[
[
"import os\nimport tifffile\nmola = tifffile.imread(os.getenv(\"DM_MarsDEM\"))\n\n\nx = np.linspace(-180,180,1+mola.shape[1]//256)\ny = np.linspace(90,-90,1+mola.shape[0]//256)\nplt.figure(figsize=(12,6))\nplt.imshow(mola[ll[1]:ll[3],ll[0]:ll[2]],cmap=\"Greys_r\",origin='upper')\nplt.show()\n",
"_____no_output_____"
],
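[
"# An added peek (an assumption about the store layout, mirroring the key\n# format used in plot_loc above): inspect the crater table recorded for\n# the same image index before the files are closed.\ngen_craters['img_{}'.format(175000 + loc)].head()",
"_____no_output_____"
],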
[
"gen_imgs.close()\ngen_craters.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e72b697700c77deb5cea37dcc493e4f6ba8186f7 | 16,079 | ipynb | Jupyter Notebook | Untitled.ipynb | archit120/lingatagger | cb3d0e262900dba1fd1ead0a37fad531e37cff9f | [
"Apache-2.0"
] | 1 | 2019-06-29T10:59:22.000Z | 2019-06-29T10:59:22.000Z | Untitled.ipynb | archit120/lingatagger | cb3d0e262900dba1fd1ead0a37fad531e37cff9f | [
"Apache-2.0"
] | null | null | null | Untitled.ipynb | archit120/lingatagger | cb3d0e262900dba1fd1ead0a37fad531e37cff9f | [
"Apache-2.0"
] | null | null | null | 36.05157 | 678 | 0.519 | [
[
[
"from keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.layers import Embedding\nfrom keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D\nfrom keras.layers import Dense, Conv2D, Flatten\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport numpy as np\nimport lingatagger.genderlist as gndrlist\nimport lingatagger.tokenizer as tok\nfrom lingatagger.tagger import *\nimport re\nimport heapq\n",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n"
],
[
"from scipy.sparse import *",
"_____no_output_____"
],
[
"def encodex(text):\n s = list(text)\n a = list(set(list(\"ऀँंःऄअआइईउऊऋऌऍऎएऐऑऒओऔकखगघङचछजझञटठडढणतथदधनऩपफबभमयरऱलळऴवशषसहऺऻ़ऽािीुूृॄॅॆेैॉॊोौ्ॎॏॐ॒॑॓॔ॕॖॗक़ख़ग़ज़ड़ढ़फ़य़ॠॡॢॣ।॥॰ॱॲॳॴॵॶॷॸॹॺॻॼॽॾॿ-\")))\n indices = []\n for i in s:\n indices.append(a.index(i))\n encoded = np.zeros([len(a)], dtype=\"int\")\n #print(len(a)+1)\n k = 0\n for i in indices:\n encoded[i]+=1\n k = k + 1\n return encoded\n\n",
"_____no_output_____"
],
[
"genders = gndrlist.drawlist()\nlst = []\ncorpus = []\na = list(set(list(\"ऀँंःऄअआइईउऊऋऌऍऎएऐऑऒओऔकखगघङचछजझञटठडढणतथदधनऩपफबभमयरऱलळऴवशषसहऺऻ़ऽािीुूृॄॅॆेैॉॊोौ्ॎॏॐ॒॑॓॔ॕॖॗक़ख़ग़ज़ड़ढ़फ़य़ॠॡॢॣ।॥॰ॱॲॳॴॵॶॷॸॹॺॻॼॽॾॿ-\")))\nfor i in genders:\n x = i.split(\"\\t\")\n if type(numericTagger(x[0])[0]) != tuple:\n lst.append(x) ",
"_____no_output_____"
],
[
"y = []\n\nfor i in lst:\n\tcount = 0\n\tfor ch in list(i[0]):\n\t\tif ch not in a:\n\t\t\tcount+=1\n\tif count == 0:\n\t\tcorpus.append(i[0])\n\t\ty.append(encodey(i[1]))\n",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer\n",
"_____no_output_____"
],
[
"vectorizer = CountVectorizer(analyzer='char')\n",
"_____no_output_____"
],
[
"vectorizer.fit(corpus)\n",
"_____no_output_____"
],
[
"x = vectorizer.transform(corpus)\n",
"_____no_output_____"
],
[
"x.shape",
"_____no_output_____"
],
[
"len(y)",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(Dense(80,activation='relu', input_dim=71))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(50,activation='relu'))\nmodel.add(Dropout(0.5))\n\nmodel.add(Dense(3, activation='softmax'))\n\nmodel.summary()\nmodel.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy'])\n",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_9 (Dense) (None, 80) 5760 \n_________________________________________________________________\ndropout_4 (Dropout) (None, 80) 0 \n_________________________________________________________________\ndense_10 (Dense) (None, 50) 4050 \n_________________________________________________________________\ndropout_5 (Dropout) (None, 50) 0 \n_________________________________________________________________\ndense_11 (Dense) (None, 3) 153 \n=================================================================\nTotal params: 9,963\nTrainable params: 9,963\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"X = np.array(x)\nY = np.array(y)\n\n",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"model.fit(x, Y, batch_size=16, epochs=10, validation_split=0.3)\n",
"Train on 14092 samples, validate on 6040 samples\nEpoch 1/10\n14092/14092 [==============================] - 5s 330us/step - loss: 0.4548 - acc: 0.7954 - val_loss: 0.4910 - val_acc: 0.7757\nEpoch 2/10\n14092/14092 [==============================] - 4s 302us/step - loss: 0.4556 - acc: 0.7934 - val_loss: 0.4918 - val_acc: 0.7719\nEpoch 3/10\n14092/14092 [==============================] - 4s 290us/step - loss: 0.4522 - acc: 0.7947 - val_loss: 0.4882 - val_acc: 0.7757\nEpoch 4/10\n14092/14092 [==============================] - 4s 307us/step - loss: 0.4509 - acc: 0.7955 - val_loss: 0.4892 - val_acc: 0.7736\nEpoch 5/10\n14092/14092 [==============================] - 5s 349us/step - loss: 0.4506 - acc: 0.7964 - val_loss: 0.4888 - val_acc: 0.7757\nEpoch 6/10\n14092/14092 [==============================] - 4s 298us/step - loss: 0.4481 - acc: 0.7989 - val_loss: 0.4898 - val_acc: 0.7734\nEpoch 7/10\n14092/14092 [==============================] - 4s 257us/step - loss: 0.4461 - acc: 0.7993 - val_loss: 0.4883 - val_acc: 0.7752\nEpoch 8/10\n14092/14092 [==============================] - 5s 336us/step - loss: 0.4457 - acc: 0.8015 - val_loss: 0.4939 - val_acc: 0.7761\nEpoch 9/10\n14092/14092 [==============================] - 5s 348us/step - loss: 0.4443 - acc: 0.8007 - val_loss: 0.4973 - val_acc: 0.7745\nEpoch 10/10\n14092/14092 [==============================] - 4s 253us/step - loss: 0.4455 - acc: 0.7994 - val_loss: 0.4978 - val_acc: 0.7710\n"
],
[
"sentence = tok.wordtokenize('हाल के वर्षों में सत्ता के गलियारे से सड़क तक एक छात्र संघ के चुनाव को लेकर ऐसा संघर्ष देखने को नहीं मिला. लेकिन यह चुनाव प्रशांत किशोर ने कैसे अपने पक्ष में किया यह काफी रोचक रहा. इसे जानिए 10 प्रमुख पाइंट में-')",
"_____no_output_____"
],
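[
"# An added helper sketch (not in the original notebook): wrap the\n# per-word prediction logic used in the loop below into one reusable\n# function.\ndef predict_gender(word):\n    probs = model.predict(vectorizer.transform([word]))\n    return genderdecode(probs)\n\npredict_gender(\"लड़की\")",
"_____no_output_____"
],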
[
"for token in sentence:\n a = model.predict(vectorizer.transform([token]))\n print((token, genderdecode(a),a))\n",
"('हाल', 'm', array([[0.09274878, 0.7879021 , 0.11934915]], dtype=float32))\n('के', 'm', array([[0.04472945, 0.89106995, 0.06420058]], dtype=float32))\n('वर्षों', 'm', array([[0.02652642, 0.9483737 , 0.02509986]], dtype=float32))\n('में', 'm', array([[0.33366296, 0.4632969 , 0.20304014]], dtype=float32))\n('सत्ता', 'm', array([[0.14043865, 0.63898 , 0.22058131]], dtype=float32))\n('के', 'm', array([[0.04472945, 0.89106995, 0.06420058]], dtype=float32))\n('गलियारे', 'm', array([[0.07028367, 0.8030855 , 0.12663084]], dtype=float32))\n('से', 'm', array([[0.04921352, 0.85207444, 0.09871212]], dtype=float32))\n('सड़क', 'm', array([[0.07057218, 0.6290886 , 0.30033922]], dtype=float32))\n('तक', 'm', array([[0.2347364 , 0.5469405 , 0.21832307]], dtype=float32))\n('एक', 'm', array([[0.04920134, 0.8453986 , 0.10540006]], dtype=float32))\n('छात्र', 'm', array([[0.20489277, 0.6599359 , 0.13517141]], dtype=float32))\n('संघ', 'm', array([[0.0800533, 0.7222943, 0.1976524]], dtype=float32))\n('के', 'm', array([[0.04472945, 0.89106995, 0.06420058]], dtype=float32))\n('चुनाव', 'm', array([[0.05279115, 0.7914204 , 0.15578847]], dtype=float32))\n('को', 'm', array([[0.12123868, 0.7613778 , 0.11738347]], dtype=float32))\n('लेकर', 'm', array([[0.04021881, 0.7261014 , 0.23367977]], dtype=float32))\n('ऐसा', 'm', array([[0.06970991, 0.82558066, 0.10470951]], dtype=float32))\n('संघर्ष', 'm', array([[0.02546834, 0.71497387, 0.2595578 ]], dtype=float32))\n('देखने', 'm', array([[0.00179632, 0.74339354, 0.25481012]], dtype=float32))\n('को', 'm', array([[0.12123868, 0.7613778 , 0.11738347]], dtype=float32))\n('नहीं', 'f', array([[0.55698997, 0.26140156, 0.18160848]], dtype=float32))\n('मिला', 'm', array([[0.1303539 , 0.74635863, 0.12328751]], dtype=float32))\n('लेकिन', 'm', array([[0.04558346, 0.6734428 , 0.2809738 ]], dtype=float32))\n('यह', 'm', array([[0.07293683, 0.75864047, 0.16842273]], dtype=float32))\n('चुनाव', 'm', array([[0.05279115, 0.7914204 , 0.15578847]], dtype=float32))\n('प्रशांत', 'm', array([[0.26689938, 0.62268233, 0.11041827]], dtype=float32))\n('किशोर', 'm', array([[0.19151214, 0.7169829 , 0.09150493]], dtype=float32))\n('ने', 'm', array([[0.04410199, 0.7244729 , 0.23142508]], dtype=float32))\n('कैसे', 'm', array([[0.0296813 , 0.9192418 , 0.05107695]], dtype=float32))\n('अपने', 'm', array([[0.04171145, 0.80654055, 0.15174808]], dtype=float32))\n('पक्ष', 'm', array([[0.05476736, 0.78968775, 0.15554486]], dtype=float32))\n('में', 'm', array([[0.33366296, 0.4632969 , 0.20304014]], dtype=float32))\n('किया', 'm', array([[0.18474372, 0.5778857 , 0.23737057]], dtype=float32))\n('यह', 'm', array([[0.07293683, 0.75864047, 0.16842273]], dtype=float32))\n('काफी', 'm', array([[0.3199562 , 0.4998321 , 0.18021168]], dtype=float32))\n('रोचक', 'm', array([[0.04503868, 0.74411124, 0.21085006]], dtype=float32))\n('रहा', 'm', array([[0.09466751, 0.7701159 , 0.13521646]], dtype=float32))\n('इसे', 'm', array([[0.06346332, 0.8516628 , 0.08487376]], dtype=float32))\n('जानिए', 'm', array([[0.09202606, 0.74382365, 0.16415028]], dtype=float32))\n('10', 'm', array([[0.18533832, 0.6141362 , 0.20052545]], dtype=float32))\n('प्रमुख', 'm', array([[0.03092757, 0.84179574, 0.1272768 ]], dtype=float32))\n('पाइंट', 'm', array([[0.20474376, 0.71616584, 0.07909034]], dtype=float32))\n('में-', 'any', array([[0.299695 , 0.2765087 , 0.42379636]], dtype=float32))\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72b6bf8a0b0632b232ff7e71403b5e9ee6644ee | 3,525 | ipynb | Jupyter Notebook | source/training01.ipynb | hskm07/pybeginner_training100 | 4196dd33781d41fa733a671c73fa45d041d20b0f | [
"MIT"
] | null | null | null | source/training01.ipynb | hskm07/pybeginner_training100 | 4196dd33781d41fa733a671c73fa45d041d20b0f | [
"MIT"
] | null | null | null | source/training01.ipynb | hskm07/pybeginner_training100 | 4196dd33781d41fa733a671c73fa45d041d20b0f | [
"MIT"
] | null | null | null | 16.471963 | 63 | 0.43773 | [
[
[
"# 第1章 値と変数について",
"_____no_output_____"
],
[
"## **問1-1 変数の使い方**",
"_____no_output_____"
],
[
"まずは変数の使い方を学びましょう。以下のコードを実行してみましょう。\n<br>\nスクリプト名:training01.py",
"_____no_output_____"
]
],
[
[
"# 変数 width, hightを設定\nwidth = 20\nhight = 100\n\n# 変数の値を確認\nprint(width)\nprint(hight)",
"20\n100\n"
]
],
[
[
"",
"_____no_output_____"
],
[
"変数widthと変数hightを使って、面積の計算をしてみましょう。",
"_____no_output_____"
]
],
[
[
"# 変数areaに計算結果を代入\narea = width * hight / 2\n\n# 結果の出力\nprint(area)",
"1000.0\n"
]
],
[
[
"***",
"_____no_output_____"
],
[
"## **問1-2 変数の使い方**",
"_____no_output_____"
]
],
[
[
"# 変数の定義\napple = 100\norange = 60\ntotal = apple * 3 + orange * 2\n\n# 結果の出力\nprint(total)",
"420\n"
]
],
[
[
"print()関数は、色々な形式で出力することができる。\n<br>\n***`print(値1,値2,...,seq=\"区切り文字\", end=\"行末文字\")`***",
"_____no_output_____"
]
],
[
[
"# 先ほど定義した変数 hight新たに値を格納\nhight = 100\narea = width * hight / 2\n\n# 出力結果のフォーマットを指定する\nprint(width, hight, sep=\",\", end=\" / 面積:\")\nprint(area)",
"20,100 / 面積:1000.0\n"
]
],
[
[
"***",
"_____no_output_____"
],
[
"### **問1-3 変数の使い方**",
"_____no_output_____"
]
],
[
[
"# 変数の定義\norange = 12000\ntotal = apple*3 + orange*2\n\n# 結果の出力\nprint(apple, orange, sep=\".\", end=\" --合計値-- \") \nprint(total)",
"100.12000 --合計値-- 24300\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
e72b719f833d1bdeb98e15d9e26fa0973c3ac892 | 2,610 | ipynb | Jupyter Notebook | LinkedIn Learning/Tensorflow esencial/ObjetosPython.ipynb | fercanepari/t81_558_deep_learning | 0f943029cf057c810551b1228cd65770e017318e | [
"Apache-2.0"
] | 1 | 2021-06-22T13:20:27.000Z | 2021-06-22T13:20:27.000Z | LinkedIn Learning/Tensorflow esencial/ObjetosPython.ipynb | fercanepari/t81_558_deep_learning | 0f943029cf057c810551b1228cd65770e017318e | [
"Apache-2.0"
] | null | null | null | LinkedIn Learning/Tensorflow esencial/ObjetosPython.ipynb | fercanepari/t81_558_deep_learning | 0f943029cf057c810551b1228cd65770e017318e | [
"Apache-2.0"
] | null | null | null | 24.857143 | 267 | 0.529119 | [
[
[
"# Objetos Python en TensorFlow\n\n## Definición: \n Otra forma de crear tensores es a partir de objetos de Python\n\n\n## Sintaxis: \n \n tf.convert_to_tensor(\n objeto,\n dtype\n )",
"_____no_output_____"
]
],
[
[
"import tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n\nimport numpy as np\n\ntfloat = tf.convert_to_tensor(5.9, dtype=tf.float64)\nprint(\"Valor del tensor \", tfloat)\nsess = tf.Session()\nprint(\"Valor del tensor \", sess.run(tfloat))\n",
"WARNING:tensorflow:From C:\\ProgramData\\Miniconda3\\envs\\torch\\lib\\site-packages\\tensorflow\\python\\compat\\v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\nInstructions for updating:\nnon-resource variables are not supported in the long term\nValor del tensor Tensor(\"Const:0\", shape=(), dtype=float64)\nValor del tensor 5.9\n"
],
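[
"# An added example (a sketch, not part of the original lesson): a nested\n# Python list also converts, producing a 2-D tensor.\nmatrix = [[1, 2], [3, 4]]\ntmatrix = tf.convert_to_tensor(matrix, dtype=tf.float32)\nprint(\"Shape of the tensor \", tmatrix.shape)\nprint(\"Value of the tensor \", sess.run(tmatrix))",
"_____no_output_____"
],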
[
"vvar = np.array([1,2,3,4.4,5])\nprint(\"Valor de vvar \", vvar)\ntvvar = tf.convert_to_tensor(vvar)\nprint(\"Valor de tvar \", tvvar)\nsess = tf.Session()\nprint(\"Valor con contenido tvar \", sess.run(tvvar))\n\n",
"Valor de vvar [1. 2. 3. 4.4 5. ]\nValor de tvar Tensor(\"Const_1:0\", shape=(5,), dtype=float64)\nValor con contenido tvar [1. 2. 3. 4.4 5. ]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
e72b73bac0f1d42882eb4e4f2953e00f1dd52e64 | 17,341 | ipynb | Jupyter Notebook | 15_vector_addition_subtraction_numpy.ipynb | kangwonlee/19ECA-40-lin-alg-1 | 84d91af8619686b75a9f73ba1bbbce506f701419 | [
"BSD-3-Clause"
] | null | null | null | 15_vector_addition_subtraction_numpy.ipynb | kangwonlee/19ECA-40-lin-alg-1 | 84d91af8619686b75a9f73ba1bbbce506f701419 | [
"BSD-3-Clause"
] | null | null | null | 15_vector_addition_subtraction_numpy.ipynb | kangwonlee/19ECA-40-lin-alg-1 | 84d91af8619686b75a9f73ba1bbbce506f701419 | [
"BSD-3-Clause"
] | null | null | null | 22.203585 | 234 | 0.480249 | [
[
[
"# 그래프, 수학 기능 추가\n# Add graph and math features\nimport pylab as py\nimport numpy as np\nimport numpy.linalg as nl\n# 기호 연산 기능 추가\n# Add symbolic operation capability\nimport sympy as sy\n\n",
"_____no_output_____"
]
],
[
[
"# 파이썬에서의 선형대수 : 사이파이 계열의 넘파이<br>Linear Algebra in Python: NumPy of SciPy Stack\n\n",
"_____no_output_____"
],
[
"파이썬 프로그래밍 언어의 기본 기능만으로도 선형 대수 문제를 해결하는 것이 가능은 하나, 보다 효율을 높이기 위해, 1990년대 이후, 여러 개발자들의 공헌으로 [**사이파이** 계열 확장 모듈](https://www.scipy.org/stackspec.html)을 개발하였다.<br>\nWe can solve linear algebra with default features of python. However, to make it more efficient, since 1990's, a group of community developers contributed in developing [**SciPy** stack](https://www.scipy.org/stackspec.html).\n\n",
"_____no_output_____"
],
[
"이 장에서는 그 가운데 주로 넘파이 NumPy 를 사용할 것이다.<br>\nIn this chapter, we would mostly use NumPy.\n\n",
"_____no_output_____"
],
[
"## 벡터 정의 : NumPy<br>Definition of vectors : NumPy\n\n",
"_____no_output_____"
],
[
"`numpy` 배열을 이용하여 다음과 같이 벡터를 정의할 수 있다.<br>\nWe can define the vectors using the arrays of the `numpy` as follows.\n\n",
"_____no_output_____"
]
],
[
[
"# ref : https://www.youtube.com/watch?v=8QihetGj3pg\na = np.array((6, -2))\nb = np.array((-4, 4))\n\n",
"_____no_output_____"
],
[
"a\n\n",
"_____no_output_____"
],
[
"b\n\n",
"_____no_output_____"
]
],
[
[
"위 2차원 벡터를 한번 그려 보자<br>\nLet's plot the 2D vectors above.\n\n",
"_____no_output_____"
]
],
[
[
"def draw_2dvec(x, y, x0=0, y0=0, color='k', name=None):\n py.quiver(x0, y0, x, y, color=color, angles='xy', scale_units='xy', scale=1)\n if name is not None:\n if not name.startswith('$'):\n vec_str = '$\\\\vec{%s}$' % name\n else:\n vec_str = name\n py.text(0.5 * x + x0, 0.5 * y + y0, vec_str)\n\n",
"_____no_output_____"
],
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], name='b')\n\npy.axis('equal')\npy.xlim((-8, 8))\npy.ylim((-8, 8))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"## 벡터 합 : NumPy<br>Sum of two vectors : NumPy\n\n",
"_____no_output_____"
],
[
"두 벡터를 더해 보자.<br>Let's add two vectors.\n\n",
"_____no_output_____"
]
],
[
[
"a_plus_b = a + b\n\n",
"_____no_output_____"
],
[
"a_plus_b\n\n",
"_____no_output_____"
]
],
[
[
"이 벡터의 합을 그려보자.<br>\nLet's draw this sum of vectors.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a_plus_b[0], a_plus_b[1], name='$\\\\vec{a}+\\\\vec{b}$')\n\npy.axis('equal')\npy.xlim((-8, 8))\npy.ylim((-8, 8))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"어떻게 해서 벡터의 합은 이렇게 된 것일까? $\\vec{b}$ 벡터의 시작점을 $\\vec{a}$ 벡터의 끝점으로 옮겨 보자.<br>\nHow come this vector sum came up like this? Let's move the starting point of $\\vec{b}$ to the starting point of $\\vec{a}$.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], a[0], a[1], color=(0.5, 0.5, 0.5), name='b')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a_plus_b[0], a_plus_b[1], name='$\\\\vec{a}+\\\\vec{b}$')\n\npy.axis('equal')\npy.xlim((-8, 8))\npy.ylim((-8, 8))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"여기서 $\\vec{a}$, $\\vec{b}$ 그리고 $\\vec{a} + \\vec{b}$ 가 삼각형을 이룬다는 것을 알 수 있다.<br>\nHere, you can see that $\\vec{a}$, $\\vec{b}$, and $\\vec{a} + \\vec{b}$ form a triangle.\n\n",
"_____no_output_____"
],
[
"$\\vec{b}$의 시작점을 $\\vec{a}$의 끝점으로 옮긴 결과, 회색 벡터의 끝점이 $\\vec{a} + \\vec{b}$ 의 끝점과 같다.<br>\nAs the result of moving the start point of $\\vec{b}$ to the end point of $\\vec{a}$, the end points of the gray vector and $\\vec{a} + \\vec{b}$ are identicial.\n\n",
"_____no_output_____"
],
[
"### 교환법칙 : NumPy<br>Commutative Law : NumPy\n\n",
"_____no_output_____"
],
[
"벡터의 합의 순서를 바꾸어 보자.<br>Let's change the order of addition.\n\n",
"_____no_output_____"
]
],
[
[
"b_plus_a = b + a\n\n",
"_____no_output_____"
],
[
"b_plus_a\n\n",
"_____no_output_____"
]
],
[
[
"이는 $\\vec{a}+\\vec{b}$와 같다.<br>This is the same as $\\vec{a}+\\vec{b}$.\n\n",
"_____no_output_____"
],
[
"이번에도 시각화 해 보자.<br>Let's visualize again.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a[0], a[1], b[0], b[1], color=(0.75, 0.75, 0.75), name='a')\ndraw_2dvec(b_plus_a[0], b_plus_a[1], name='$\\\\vec{b}+\\\\vec{a}$')\n\npy.axis('equal')\npy.xlim((-8, 8))\npy.ylim((-8, 8))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"비슷하게, $\\vec{a}$, $\\vec{b}$ 그리고 $\\vec{b} + \\vec{a}$ 가 삼각형을 이룬다는 것을 알 수 있다.<br>\nSimilarly, you can see that $\\vec{a}$, $\\vec{b}$, and $\\vec{b} + \\vec{a}$ form a triangle.\n\n",
"_____no_output_____"
],
[
"이번에는 $\\vec{a}$의 시작점을 $\\vec{b}$의 끝점으로 옮겨 보았다. 덧셈의 순서와는 상관 없이, 회색 벡터의 끝점이 $\\vec{b} + \\vec{a}$ 의 끝점과 일치하는 것을 확인할 수 있다.<br>\nThis time, we moved the start point of $\\vec{a}$ to the end point of $\\vec{b}$. We can confirm that regardless of the order of addition, the end points of the gray vector and $\\vec{a} + \\vec{b}$ are identicial.\n\n",
"_____no_output_____"
],
[
"두 방식을 모두 표시해 보자.<br>Let's indicate both ways.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a[0], a[1], b[0], b[1], color=(0.75, 0.75, 0.75), name='a')\ndraw_2dvec(b[0], b[1], a[0], a[1], color=(0.5, 0.5, 0.5), name='b')\ndraw_2dvec(b_plus_a[0], b_plus_a[1], name='$\\\\vec{a}+\\\\vec{b}$')\n\npy.axis('equal')\npy.xlim((-8, 8))\npy.ylim((-8, 8))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"벡터 합은 $\\vec{a}$와 $\\vec{b}$가 이루는 평행 사변형의 한 대각선임을 알 수 있다.<br>\nWe can see that the vector sum is one of diagonals of the parallogram of $\\vec{a}$'s and $\\vec{b}$'s.\n\n",
"_____no_output_____"
],
[
"## 스칼라와 벡터의 곱 : NumPy<br>Product of a Scalar and a Vector : NumPy\n\n",
"_____no_output_____"
],
[
"벡터에 어떤 스칼라 값을 곱해 보자.<br>Let's multiply a scalar value to a vector.\n\n",
"_____no_output_____"
]
],
[
[
"x = np.array((2, 1))\nalpha = 3\nalpha_x = alpha * x\n\n",
"_____no_output_____"
],
[
"alpha_x\n\n",
"_____no_output_____"
]
],
[
[
"그림으로 표시해 보자.<br>Let's draw.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(alpha_x[0], alpha_x[1], name='$\\\\alpha\\\\vec{x}$', color=(0.5, 0.5, 0.5))\ndraw_2dvec(x[0], x[1], name='x')\n\npy.axis('equal')\npy.xlim((-6, 6))\npy.ylim((-6, 6))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"방향은 바뀌지 않고 크기만 달라지는 것을 알 수 있다.<br>\nThe direction does not change but the magnitude changes.\n\n",
"_____no_output_____"
],
[
"스칼라 값이 음인 경우는 어떠할까?<br>What if scalar value is negative?\n\n",
"_____no_output_____"
]
],
[
[
"x = np.array((2, 1))\nbeta = -1\nbeta_x = beta * x\n\n",
"_____no_output_____"
],
[
"beta_x\n\n",
"_____no_output_____"
],
[
"draw_2dvec(alpha_x[0], alpha_x[1], name='$\\\\alpha\\\\vec{x}$', color=(0.5, 0.5, 0.5))\ndraw_2dvec(beta_x[0], beta_x[1], name='$\\\\beta\\\\vec{x}$', color=py.ones((1, 3)) * 0.25)\ndraw_2dvec(x[0], x[1], name='x')\n\npy.axis('equal')\npy.xlim((-6, 6))\npy.ylim((-6, 6))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"음의 스칼라를 곱하면 방향이 반대로 된다는 것을 알 수 있다.<br>\nIn case of the negative scalar, the direction becomes the opposite.\n\n",
"_____no_output_____"
],
[
"## 벡터의 차 : NumPy<br>Difference of two vectors : NumPy\n\n",
"_____no_output_____"
],
[
"어떤 벡터 $\\vec{b}$를 다른 벡터 $\\vec{a}$에서 빼는 셈에 대해 생각해 보자.<br>\nLet's think about subtracting a vector $\\vec{b}$ from another vector $\\vec{a}$.\n\n",
"_____no_output_____"
]
],
[
[
"a_minus_b = a - b\n\n",
"_____no_output_____"
],
[
"a_minus_b\n\n",
"_____no_output_____"
]
],
[
[
"그림으로 표시보자.<br>\nLet's visualize.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a_minus_b[0], a_minus_b[1], name='$\\\\vec{a}-\\\\vec{b}$')\n\npy.axis('equal')\npy.xlim((-10, 10))\npy.ylim((-10, 10))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
[
[
"이번에는 어떻게 해서 벡터의 차가 이렇게 된 것인지 알아 보자. $\\vec{b}$ 벡터에 -1을 곱해서 시작점을 $\\vec{a}$ 벡터의 끝점으로 옮겨 보자.<br>\nLet's figure out the vector subtraction. Let's multiply by -1 to $\\vec{b}$ and move the starting point to the end point of $\\vec{a}$.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(-b[0], -b[1], a[0], a[1], color=(0.5, 0.5, 0.5), name='$-\\\\vec{b}$')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a_minus_b[0], a_minus_b[1], name='$\\\\vec{a}-\\\\vec{b}$')\n\npy.axis('equal')\npy.xlim((-10, 10))\npy.ylim((-10, 10))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
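[
[
"뺄셈이 부호를 바꾼 벡터의 덧셈과 같은지 수치로 확인해 보자. (위의 `a`, `b`를 가정한 예시이다.)<br>Let's check numerically that subtraction equals adding the sign-changed vector. (A sketch, assuming `a` and `b` from above.)\n\n",
"_____no_output_____"
]
],
[
[
"# Hedged check: assumes a and b are defined as above.\nimport numpy as np\nnp.allclose(a - b, a + (-1) * b) # expected: True\n\n",
"_____no_output_____"
]
],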
[
[
"벡터 뺄셈은 부호를 바꾸어 더하는 것과 같다.<br>Subtracting a vector is equivalent to changing the sign of the vector and adding it.\n\n",
"_____no_output_____"
],
[
" 이번에는 $\\vec{a}-\\vec{b}$ 의 시작점을 $\\vec{b}$의 끝점으로 옮겨 보자.<br>\n This time, let's move the start point of $\\vec{a}-\\vec{b}$ to the end point of $\\vec{b}$.\n\n",
"_____no_output_____"
]
],
[
[
"draw_2dvec(a[0], a[1], name='a')\ndraw_2dvec(b[0], b[1], name='b')\ndraw_2dvec(a[0], a[1], b[0], b[1], color=(0.75, 0.75, 0.75), name='a')\ndraw_2dvec(b[0], b[1], a[0], a[1], color=(0.5, 0.5, 0.5), name='b')\ndraw_2dvec(a_minus_b[0], a_minus_b[1], b[0], b[1], name='$\\\\vec{a}-\\\\vec{b}$')\n\npy.axis('equal')\npy.xlim((-8, 8))\npy.ylim((-8, 8))\npy.grid(True)\n\n",
"_____no_output_____"
]
],
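[
[
"$\\vec{b}$에 $\\vec{a}-\\vec{b}$를 더하면 $\\vec{a}$가 되는지 확인해 보자. (위의 `a`, `b`, `a_minus_b`를 가정한 예시이다.)<br>Let's check that adding $\\vec{a}-\\vec{b}$ to $\\vec{b}$ recovers $\\vec{a}$. (A sketch, assuming `a`, `b`, and `a_minus_b` from above.)\n\n",
"_____no_output_____"
]
],
[
[
"# Hedged check: assumes a, b, and a_minus_b are defined as above.\nimport numpy as np\nnp.allclose(b + a_minus_b, a) # expected: True\n\n",
"_____no_output_____"
]
],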
[
[
"이는 다음을 뜻한다.<br>\nThis means the following.\n\n",
"_____no_output_____"
],
[
"$$\n\\vec{b}+\\left(\\vec{a}-\\vec{b}\\right)=\\vec{a}\n$$\n\n",
"_____no_output_____"
],
[
"또한, $\\vec{a}-\\vec{b}$ 도 $\\vec{a}$와 $\\vec{b}$가 이루는 평행 사변형의 다른 대각선임을 알 수 있다.<br>\nWe can also see that the vector subtraction is the other diagonal of the parallogram of $\\vec{a}$'s and $\\vec{b}$'s.\n\n",
"_____no_output_____"
],
[
"## 연습문제<br>Exercise\n\n",
"_____no_output_____"
],
[
"* 임의의 두 2차원 벡터를 numpy array 로 정의하시오<br>Define two 2-dimensional vectors as numpy array's\n\n",
"_____no_output_____"
],
[
"* 위에서 보인 예와 같이 두 벡터를 그려 보시오<br>Plot these two vectors as above\n\n",
"_____no_output_____"
],
[
"* 두 벡터의 합과 두 벡터를 함께 그려 보시오<br>Plot the sum of two vectors with the two vectors\n\n",
"_____no_output_____"
],
[
"* 두 벡터와 두 벡터의 합, 두 벡터의 차를 함께 그려 보시오<br>Plot the two vectors, the sum vector and the difference vector\n\n",
"_____no_output_____"
],
[
"## Final Bell<br>마지막 종\n\n",
"_____no_output_____"
]
],
[
[
"# stackoverfow.com/a/24634221\nimport os\nos.system(\"printf '\\a'\");\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
e72b845cc53c85f3d5cf6cf96f4716b3144749ee | 4,719 | ipynb | Jupyter Notebook | 8. All Data KNN Submission.ipynb | JifuZhao/Landmark_Recognition | d394736b0696198a31cea33784d5fb114e55f593 | [
"MIT"
] | 1 | 2019-08-22T18:49:05.000Z | 2019-08-22T18:49:05.000Z | 8. All Data KNN Submission.ipynb | JifuZhao/Landmark_Recognition | d394736b0696198a31cea33784d5fb114e55f593 | [
"MIT"
] | null | null | null | 8. All Data KNN Submission.ipynb | JifuZhao/Landmark_Recognition | d394736b0696198a31cea33784d5fb114e55f593 | [
"MIT"
] | null | null | null | 23.713568 | 100 | 0.531045 | [
[
[
"import warnings\nwarnings.simplefilter('ignore')\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import NearestNeighbors\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Load Features and Labels",
"_____no_output_____"
]
],
[
[
"# Load the data\ntrain_df = pd.read_csv('./data/all/train.csv')\ntrain_imgs_id = np.load('./result/train_imgs_id.npy')\n\ntest_df = pd.read_csv('./data/all/test.csv')\ntest_imgs_id = np.load('./result/test_imgs_id.npy')\n\nprint('Train:\\t\\t', train_df.shape, train_imgs_id.shape)\nprint('Test:\\t\\t', test_df.shape, test_imgs_id.shape)\nprint('Landmarks:\\t', len(train_df['landmark_id'].unique()))",
"Train:\t\t (1225029, 3) (1192931,)\nTest:\t\t (117703, 2) (108383,)\nLandmarks:\t 14951\n"
],
[
"train_x = np.load('./data/all/train_features.npy')\ntrain_y = np.load('./data/all/train_id.npy')\n\ntest_x = np.load('./data/all/test_features.npy')\n\nprint('Train:\\t', train_x.shape, train_y.shape)\nprint('Test:\\t', test_x.shape)",
"Train:\t (1192931, 2048) (1192931,)\nTest:\t (108383, 2048)\n"
]
],
[
[
"# Implement KNN Model",
"_____no_output_____"
]
],
[
[
"# Implement KNN model\nknn = NearestNeighbors(n_neighbors=1, algorithm='auto', leaf_size=30, \n metric='minkowski', p=2, n_jobs=-1)\nknn.fit(train_x)",
"_____no_output_____"
],
[
"# Search the first neighbors\nneighbor_index = knn.kneighbors(test_x, return_distance=False)",
"_____no_output_____"
],
[
"np.save('./result/knn_all_neighbor_index.npy', neighbor_index)\nprint('KNN Neighbor:\\t\\t', neighbor_index.shape)",
"KNN Neighbor:\t\t (108383, 1)\n"
],
[
"# Get prediction for each query images\nlandmarks = []\nids = []\n\nfor i in range(len(neighbor_index)):\n idx = test_imgs_id[i]\n ids.append(test_df.loc[idx, 'id'])\n landmarks.append(train_y[neighbor_index[i][0]])\n\nprediction_tuple = [str(idx) + ' ' + '1.0' for idx in landmarks]",
"_____no_output_____"
],
[
"# Create submission files\nsample_submission = pd.read_csv('./data/all/sample_submission.csv', usecols=['id'])\n\nsubmission = pd.DataFrame({'id': ids, 'landmarks': prediction_tuple})\nsubmission = pd.merge(sample_submission, submission, how='left', on='id')\nsubmission.to_csv('./result/knn_all_submission.csv', index=False, columns=['id', 'landmarks'])",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
e72b9c30d5d85e55c4c386e8635267fc0a4def8c | 11,809 | ipynb | Jupyter Notebook | workshop/lessons/05_automated_dft/Exercises-Solutions.ipynb | acrutt/workshop | fddf9e43065fed3c6786381cbbb6b7f47a61a996 | [
"BSD-3-Clause"
] | null | null | null | workshop/lessons/05_automated_dft/Exercises-Solutions.ipynb | acrutt/workshop | fddf9e43065fed3c6786381cbbb6b7f47a61a996 | [
"BSD-3-Clause"
] | null | null | null | workshop/lessons/05_automated_dft/Exercises-Solutions.ipynb | acrutt/workshop | fddf9e43065fed3c6786381cbbb6b7f47a61a996 | [
"BSD-3-Clause"
] | null | null | null | 27.851415 | 379 | 0.577018 | [
[
[
"# Automating DFT Exercises\n\n## Exercise 1: Writing Input Files\n\nFor these exercises we'll start off with a aluminum chromium alloy structure and an ethylene carbonate molecule. In these exercises our goals wil be to:\n\n**1.1:** Write `MPStaticSet` VASP input files for an aluminum chromium alloy (`struct`)\n\n**1.2:** Write `OptSet` Q-Chem input files for ethylene carbonate (`mol`)",
"_____no_output_____"
]
],
[
[
"from pymatgen.core import Structure\nstruct = Structure.from_file(filename=\"./example_files/Al16Cr10.cif\")\nprint(struct.composition)",
"Al16 Cr10\n"
],
[
"from pymatgen.core import Molecule\nmol = Molecule.from_file(\"./example_files/ethylene_carbonate.xyz\")\nprint(mol.composition)",
"O3 C3 H4\n"
]
],
[
[
"### Exercise 1.1\n\nLet's try writing the input files for a different type of VASP calculation, MPStatic set. First we'll need to start by importing the `MPStaticSet` object. Then we must initiate the InputSet object with our desired structure. Finally, we can use the `.write_input()` method to write out our VASP input files into the directory we specify.\n\n**Hint 1:** The import statement is similar to MPRelaxSet from Lesson 1 below.\n\n`from pymatgen.io.vasp.sets import MPRelaxSet`\n\n**Hint 2:** Remember to include the `potcar_spec=True` flag when using `.write_input()`",
"_____no_output_____"
]
],
[
[
"from pymatgen.io.vasp.sets import MPStaticSet",
"_____no_output_____"
],
[
"static_set = MPStaticSet(structure=struct)",
"_____no_output_____"
],
[
"static_set.write_input(output_dir=\"./AlCr_MPStaticSet\",potcar_spec=True)",
"/Users/acrutt/miniconda3/envs/cms38/lib/python3.8/site-packages/pymatgen/io/vasp/sets.py:592: BadInputSetWarning: Relaxation of likely metal with ISMEAR < 1 detected. Please see VASP recommendations on ISMEAR for metals.\n warnings.warn(\n"
]
],
[
[
"### Exercise 1.2\n\nNow let's try writing input files for a different external code, Q-Chem. Even though we have not worked with Q-Chem before, the steps are similar to how we approach writing input files for VASP.\n\nFirst we must modify the import statement to find `OptSet` from the Q-Chem IO sets. Then we must initiate the InputSet object with our ethylene carbonate molecule. Finally, we must find the method used for writing out the input file for Q-Chem. Note Q-Chem only requires a single input file so we will need to specify a filename instead of a directory.\n\n**Hint:** Use `shift+tab` and `tab` to explore autocomplete options as you search for new modules or methods.",
"_____no_output_____"
]
],
[
[
"from pymatgen.io.qchem.sets import OptSet",
"_____no_output_____"
],
[
"opt_set = OptSet(molecule=mol)",
"_____no_output_____"
],
[
"opt_set.write(input_file=\"ec_input\") ",
"_____no_output_____"
]
],
[
[
"## Exercise 2: Parsing Output Files\n\nFor these exercises we'll be working with example output files in two directories. We'll pick-up where we left off in Lesson 2 where we have imported VaspDrone from atomate, initiated the drone object, and have used the .assimulate() method to parse the output directory.\n\n**2.1:** Explore the `task_doc` produced by `VaspDrone` to parse the files in `example_VASP_Al16Cr10`\n\n**2.2:** Use `QChemDrone` to parse the files in `example_QChem_ethylene_carbonate`",
"_____no_output_____"
]
],
[
[
"from atomate.vasp.drones import VaspDrone",
"_____no_output_____"
],
[
"drone = VaspDrone()",
"_____no_output_____"
],
[
"task_doc = drone.assimilate(path=\"./example_VASP_Al16Cr10\")",
"_____no_output_____"
]
],
[
[
"### Exercise 2.2\n\nAnswer the following questions by examining the information parsed by the drone and stored in `task_doc`.\n\n Q1. How many sites are in our calculation's structure?\n \n Q2. What is the final energy? Does this match the vasp.xml final energy found during the Vasprun demonstration from Lesson 2 (-157.80974238 eV)?\n \n Q3. What is the final energy per atom? Does this make sense based on dividing your answers from the past two questions?\n\n**Hint:** Recall you can explore the fields in a dictionary using `task_doc.keys()` to navigate what is available in the dictionary.",
"_____no_output_____"
]
],
[
[
"task_doc.keys()",
"_____no_output_____"
],
[
"task_doc[\"nsites\"]",
"_____no_output_____"
],
[
"task_doc[\"output\"].keys()",
"_____no_output_____"
],
[
"task_doc[\"output\"][\"energy\"]",
"_____no_output_____"
],
[
"task_doc[\"output\"][\"energy_per_atom\"]",
"_____no_output_____"
],
[
"-157.80974238 / 26 # everything checks out!",
"_____no_output_____"
]
],
[
[
"### Exercise 2.2\n\nEven though we are parsing the outputs from a different code, the process is similar to the approach we used with VASP. First we must import `QChemDrone` from the Q-Chem module of atomate. Then we must initiate the drone object. Then we can use the `.assimilate()` method from the drone to parse the ethylene molecule directory, `\"./example_QChem_ethylene_carbonate\"`.\n\n**Hint 1:** Recall the code for importing a drone, initializing the drone object, and parsing a directory for VASP are included at the beginning of this exercise. While modifications will be needed, this is a great starting point!\n\n**Hint 2:** Note that `QChemDrone.assimilate()` has additional input parameters. Can you infer what the values should be based on the files in `example_QChem_ethylene_carbonate`?\n\n input_file (str): base name of the input file(s)\n output_file (str): base name of the output file(s)\n multirun (bool): Whether the job to parse includes multiple\n calculations in one input / output pair.",
"_____no_output_____"
]
],
[
[
"from atomate.qchem.drones import QChemDrone",
"_____no_output_____"
],
[
"qc_drone = QChemDrone()",
"_____no_output_____"
],
[
"qc_task_doc = qc_drone.assimilate(path=\"./example_QChem_ethylene_carbonate\",\n input_file=\"mol.qin.gz\",\n output_file=\"mol.qout.gz\",\n multirun=False)",
"2021-07-15 14:16:12,517 INFO atomate.qchem.drones Getting task doc for base dir :./example_QChem_ethylene_carbonate\n2021-07-15 14:16:12,665 INFO atomate.qchem.drones Post-processing dir:./example_QChem_ethylene_carbonate\n"
],
[
"qc_task_doc.keys()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
e72ba396e2d3a0450c1c1d12320a4d4fafc439ee | 11,663 | ipynb | Jupyter Notebook | Notebooks/Joining_geo_files.ipynb | AndOleAnd/Capstone_N_A_P | f619ed31171d83ebdc080776ce06055b580c6705 | [
"MIT"
] | null | null | null | Notebooks/Joining_geo_files.ipynb | AndOleAnd/Capstone_N_A_P | f619ed31171d83ebdc080776ce06055b580c6705 | [
"MIT"
] | 38 | 2020-12-11T19:35:25.000Z | 2021-06-16T08:34:09.000Z | Notebooks/Joining_geo_files.ipynb | AndOleAnd/Capstone_N_A_P | f619ed31171d83ebdc080776ce06055b580c6705 | [
"MIT"
] | null | null | null | 35.23565 | 157 | 0.591443 | [
[
[
"import math\nimport pandas as pd \nimport geopandas as gpd\n\nimport h3 # h3 bins from uber",
"_____no_output_____"
],
[
"def create_crash_df(train_file = '../Inputs/Train.csv'): \n crash_df = pd.read_csv(train_file, parse_dates=['datetime'])\n return crash_df\n\ndef create_temporal_features(df):\n dict_windows = {1: \"00-03\", 2: \"03-06\", 3: \"06-09\", 4: \"09-12\", 5: \"12-15\", 6: \"15-18\", 7: \"18-21\", 8: \"21-24\"}\n dict_months = {1: \"Jan\", 2: \"Feb\", 3: \"Mar\", 4: \"Apr\", 5: \"May\", 6: \"Jun\",\n 7: \"Jul\", 8: \"Aug\", 9: \"Sep\", 10: \"Oct\", 11: \"Nov\", 12: \"Dec\"}\n \n df[\"time_window\"] = df[\"datetime\"].apply(lambda x: math.floor(x.hour / 3) + 1)\n df[\"time_window_str\"] = df[\"time_window\"].apply(lambda x: dict_windows.get(x))\n df[\"day\"] = df[\"datetime\"].apply(lambda x: x.day)\n df[\"month\"] = df[\"datetime\"].apply(lambda x: dict_months.get(x.month))\n df[\"year\"] = df[\"datetime\"].apply(lambda x: x.year)\n df[\"weekday\"] = df[\"datetime\"].apply(lambda x: x.weekday())\n return df\n\ndef assign_hex_bin(df,lat_column=\"latitude\",lon_column=\"longitude\"):\n df[\"h3_zone_5\"] = df.apply(lambda x: h3.geo_to_h3(x[lat_column], x[lon_column], 5),axis=1)\n df[\"h3_zone_6\"] = df.apply(lambda x: h3.geo_to_h3(x[lat_column], x[lon_column], 6),axis=1)\n df[\"h3_zone_7\"] = df.apply(lambda x: h3.geo_to_h3(x[lat_column], x[lon_column], 7),axis=1)\n return df\n\ndef export_df_to_csv(df,path_file='../Inputs/train_h3.csv'):\n df.to_csv(path_file,index=False)\n print(f'file created {path_file}')\n ",
"_____no_output_____"
],
[
"# create command line commands for downlaoding uber movement data with OSM segment info\nmonth_list = [('01','31'),\n ('02','28'),\n ('03','31'),\n ('04','30'),\n ('05','31'),\n ('06','30'),\n ('07','31'),\n ('08','31'),\n ('09','30'),\n ('10','31'),\n ('11','30'),\n ('12','31')]\nfor year in ['2018','2019']:\n for month, end_day in month_list:\n break # remove when you want the commands\n # print([f'mdt speeds-to-geojson nairobi {year}-{month}-01 {year}-{month}-{end_day} --output=Inputs/nairobi_{year}_{month}geojson.geojson'])\n # print([f'mdt speeds-transform historical nairobi {year}-{month}-1 {year}-{month}-{end_day} --output=Inputs/nairobi_{year}_{month}_osm.csv'])",
"['mdt speeds-transform historical nairobi 2018-01-1 2018-01-31 --output=Inputs/nairobi_2018_01_osm.csv']\n['mdt speeds-transform historical nairobi 2018-02-1 2018-02-28 --output=Inputs/nairobi_2018_02_osm.csv']\n['mdt speeds-transform historical nairobi 2018-03-1 2018-03-31 --output=Inputs/nairobi_2018_03_osm.csv']\n['mdt speeds-transform historical nairobi 2018-04-1 2018-04-30 --output=Inputs/nairobi_2018_04_osm.csv']\n['mdt speeds-transform historical nairobi 2018-05-1 2018-05-31 --output=Inputs/nairobi_2018_05_osm.csv']\n['mdt speeds-transform historical nairobi 2018-06-1 2018-06-30 --output=Inputs/nairobi_2018_06_osm.csv']\n['mdt speeds-transform historical nairobi 2018-07-1 2018-07-31 --output=Inputs/nairobi_2018_07_osm.csv']\n['mdt speeds-transform historical nairobi 2018-08-1 2018-08-31 --output=Inputs/nairobi_2018_08_osm.csv']\n['mdt speeds-transform historical nairobi 2018-09-1 2018-09-30 --output=Inputs/nairobi_2018_09_osm.csv']\n['mdt speeds-transform historical nairobi 2018-10-1 2018-10-31 --output=Inputs/nairobi_2018_10_osm.csv']\n['mdt speeds-transform historical nairobi 2018-11-1 2018-11-30 --output=Inputs/nairobi_2018_11_osm.csv']\n['mdt speeds-transform historical nairobi 2018-12-1 2018-12-31 --output=Inputs/nairobi_2018_12_osm.csv']\n['mdt speeds-transform historical nairobi 2019-01-1 2019-01-31 --output=Inputs/nairobi_2019_01_osm.csv']\n['mdt speeds-transform historical nairobi 2019-02-1 2019-02-28 --output=Inputs/nairobi_2019_02_osm.csv']\n['mdt speeds-transform historical nairobi 2019-03-1 2019-03-31 --output=Inputs/nairobi_2019_03_osm.csv']\n['mdt speeds-transform historical nairobi 2019-04-1 2019-04-30 --output=Inputs/nairobi_2019_04_osm.csv']\n['mdt speeds-transform historical nairobi 2019-05-1 2019-05-31 --output=Inputs/nairobi_2019_05_osm.csv']\n['mdt speeds-transform historical nairobi 2019-06-1 2019-06-30 --output=Inputs/nairobi_2019_06_osm.csv']\n['mdt speeds-transform historical nairobi 2019-07-1 2019-07-31 --output=Inputs/nairobi_2019_07_osm.csv']\n['mdt speeds-transform historical nairobi 2019-08-1 2019-08-31 --output=Inputs/nairobi_2019_08_osm.csv']\n['mdt speeds-transform historical nairobi 2019-09-1 2019-09-30 --output=Inputs/nairobi_2019_09_osm.csv']\n['mdt speeds-transform historical nairobi 2019-10-1 2019-10-31 --output=Inputs/nairobi_2019_10_osm.csv']\n['mdt speeds-transform historical nairobi 2019-11-1 2019-11-30 --output=Inputs/nairobi_2019_11_osm.csv']\n['mdt speeds-transform historical nairobi 2019-12-1 2019-12-31 --output=Inputs/nairobi_2019_12_osm.csv']\n"
],
[
"def join_segment_files(path='../Inputs/', road_surveys='Segment_info.csv',segments_geometry='segments_geometry.geojson'):\n ''' \n Load the survey data, Load the segment geometry, Join the two segment dfs.\n return a combined dataframe\n '''\n road_surveys = pd.read_csv(path+road_surveys)\n road_segment_locs = gpd.read_file(path+segments_geometry)\n segments_merged = pd.merge(road_segment_locs, road_surveys, on='segment_id', how='left')\n segments_merged[\"longitude\"] = segments_merged.geometry.centroid.x\n segments_merged[\"latitude\"] = segments_merged.geometry.centroid.y\n segments_merged = assign_hex_bin(segments_merged)\n return segments_merged",
"_____no_output_____"
],
[
"crash_df = create_crash_df(train_file = '../Inputs/Train.csv')\ncrash_df = create_temporal_features(crash_df)\ncrash_df = assign_hex_bin(crash_df)\n#crash_df.head()",
"_____no_output_____"
],
[
"segments_merged = join_segment_files()",
"_____no_output_____"
],
[
"segments_merged.describe()",
"_____no_output_____"
],
[
"# This needs work\nsegments_h3_zone_7= segments_merged.groupby(by='h3_zone_7').max()\nsegments_h3_zone_7['h3_zone_5']= segments_merged.groupby(by='h3_zone_5').latitude.max()\nsegments_h3_zone_7['h3_zone_6']= segments_merged.groupby(by='h3_zone_6').latitude.max()\nsegments_h3_zone_7['latitude']= segments_merged.groupby(by='h3_zone_7').latitude.mean()\nsegments_h3_zone_7['longitude']= segments_merged.groupby(by='h3_zone_7').longitude.mean()\nsegments_h3_zone_7.head()",
"_____no_output_____"
],
[
"path = '../Inputs/'\nroad_surveys='Segment_info.csv'\nsegments_geometry='segments_geometry.geojson'\nroad_segment_locs = gpd.read_file(path+segments_geometry)\nroad_surveys = pd.read_csv(path+road_surveys)",
"_____no_output_____"
],
[
"road_segment_locs.segment_id.nunique()",
"_____no_output_____"
],
[
"road_surveys.segment_id.nunique()",
"_____no_output_____"
],
[
"def join_segment_crash_files(crash_data=crash_df, segments=segments_merged, h3_zone='h3_zone_5'):\n ''' \n Combine the segment data and the crash data by chosen hex.\n return a combined dataframe\n '''\n # Add some groupby function here\n segment_crash_df = pd.merge(crash_data, segments, on=h3_zone, how='left')\n return segment_crash_df",
"_____no_output_____"
],
[
"segment_crash_df = join_segment_crash_files()",
"_____no_output_____"
],
[
"segment_crash_df.head()",
"_____no_output_____"
]
],
[
[
"### The crash data and the segment data needs to be grouped before this join makes sense\n### Also need to deal with the issue of missing segments\n",
"_____no_output_____"
]
],
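[
[
"As a rough illustration of the grouping step (a sketch only; the aggregation keys and statistics here are assumptions, not a final pipeline), we could count crashes per hex bin and time window and attach a per-bin summary of the segment data.",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch, not a final pipeline: the groupby keys and aggregates\n# are assumptions. Assumes crash_df and segments_merged from the cells above.\ncrash_counts = (crash_df\n .groupby(['h3_zone_5', 'time_window'])\n .size()\n .reset_index(name='crash_count'))\nsegment_summary = (segments_merged\n .groupby('h3_zone_5', as_index=False)\n .agg(latitude=('latitude', 'mean'), longitude=('longitude', 'mean')))\ngrouped_join = crash_counts.merge(segment_summary, on='h3_zone_5', how='left')\ngrouped_join.head()\n\n",
"_____no_output_____"
]
],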
[
[
"uber_movement_osm = pd.read_csv('../Inputs/nairobi_2018_01_osm.csv')",
"_____no_output_____"
],
[
"uber_movement_osm.head()",
"_____no_output_____"
],
[
"geojsonfile = gpd.read_file('../Inputs/nairobi_2018_01_speeds.geojson', parse_dates=['utc_timestamp'])",
"_____no_output_____"
],
[
"geojsonfile.osmhighway.unique()",
"_____no_output_____"
],
[
"geojsonfile.speed_mean_kph.nunique()",
"_____no_output_____"
],
[
"geojsonfile.head()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72bab2c26fabd2ee338282de2454b5720a36e54 | 58,812 | ipynb | Jupyter Notebook | Convolution model - Step by Step.ipynb | danieljeswin/DeepLearningSpecialization | 7da2b72afef84781d95bab759236f54cc56d3a3b | [
"MIT"
] | null | null | null | Convolution model - Step by Step.ipynb | danieljeswin/DeepLearningSpecialization | 7da2b72afef84781d95bab759236f54cc56d3a3b | [
"MIT"
] | null | null | null | Convolution model - Step by Step.ipynb | danieljeswin/DeepLearningSpecialization | 7da2b72afef84781d95bab759236f54cc56d3a3b | [
"MIT"
] | null | null | null | 41.859075 | 5,306 | 0.559393 | [
[
[
"# Convolutional Neural Networks: Step by Step\n\nWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. \n\n**Notation**:\n- Superscript $[l]$ denotes an object of the $l^{th}$ layer. \n - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.\n\n\n- Superscript $(i)$ denotes an object from the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example input.\n \n \n- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.\n \n \n- $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. \n- $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. \n\nWe assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!",
"_____no_output_____"
],
[
"## 1 - Packages\n\nLet's first import all the packages that you will need during this assignment. \n- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.\n- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"## 2 - Outline of the Assignment\n\nYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:\n\n- Convolution functions, including:\n - Zero Padding\n - Convolve window \n - Convolution forward\n - Convolution backward (optional)\n- Pooling functions, including:\n - Pooling forward\n - Create mask \n - Distribute value\n - Pooling backward (optional)\n \nThis notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:\n\n<img src=\"images/model.png\" style=\"width:800px;height:300px;\">\n\n**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. ",
"_____no_output_____"
],
[
"## 3 - Convolutional Neural Networks\n\nAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. \n\n<img src=\"images/conv_nn.png\" style=\"width:350px;height:200px;\">\n\nIn this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. ",
"_____no_output_____"
],
[
"### 3.1 - Zero-Padding\n\nZero-padding adds zeros around the border of an image:\n\n<img src=\"images/PAD.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption>\n\nThe main benefits of padding are the following:\n\n- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the \"same\" convolution, in which the height/width is exactly preserved after one layer. \n\n- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image.\n\n**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array \"a\" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:\n```python\na = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))\n```",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: zero_pad\n\ndef zero_pad(X, pad):\n \"\"\"\n Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, \n as illustrated in Figure 1.\n \n Argument:\n X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images\n pad -- integer, amount of padding around each image on vertical and horizontal dimensions\n \n Returns:\n X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line)\n X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), 'constant', constant_values = (0, 0))\n ### END CODE HERE ###\n \n return X_pad",
"_____no_output_____"
],
[
"np.random.seed(1)\nx = np.random.randn(4, 3, 3, 2)\nx_pad = zero_pad(x, 2)\nprint (\"x.shape =\", x.shape)\nprint (\"x_pad.shape =\", x_pad.shape)\nprint (\"x[1,1] =\", x[1,1])\nprint (\"x_pad[1,1] =\", x_pad[1,1])\n\nfig, axarr = plt.subplots(1, 2)\naxarr[0].set_title('x')\naxarr[0].imshow(x[0,:,:,0])\naxarr[1].set_title('x_pad')\naxarr[1].imshow(x_pad[0,:,:,0])",
"x.shape = (4, 3, 3, 2)\nx_pad.shape = (4, 7, 7, 2)\nx[1,1] = [[ 0.90085595 -0.68372786]\n [-0.12289023 -0.93576943]\n [-0.26788808 0.53035547]]\nx_pad[1,1] = [[ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]]\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **x.shape**:\n </td>\n <td>\n (4, 3, 3, 2)\n </td>\n </tr>\n <tr>\n <td>\n **x_pad.shape**:\n </td>\n <td>\n (4, 7, 7, 2)\n </td>\n </tr>\n <tr>\n <td>\n **x[1,1]**:\n </td>\n <td>\n [[ 0.90085595 -0.68372786]\n [-0.12289023 -0.93576943]\n [-0.26788808 0.53035547]]\n </td>\n </tr>\n <tr>\n <td>\n **x_pad[1,1]**:\n </td>\n <td>\n [[ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]]\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 3.2 - Single step of convolution \n\nIn this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: \n\n- Takes an input volume \n- Applies a filter at every position of the input\n- Outputs another volume (usually of different size)\n\n<img src=\"images/Convolution_schematic.gif\" style=\"width:500px;height:300px;\">\n<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>\n\nIn a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. \n\nLater in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. \n\n**Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: conv_single_step\n\ndef conv_single_step(a_slice_prev, W, b):\n \"\"\"\n Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation \n of the previous layer.\n \n Arguments:\n a_slice_prev -- slice of input data of shape (f, f, n_C_prev)\n W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)\n b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)\n \n Returns:\n Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data\n \"\"\"\n\n ### START CODE HERE ### (≈ 2 lines of code)\n # Element-wise product between a_slice and W. Do not add the bias yet.\n s = np.multiply(a_slice_prev, W)\n # Sum over all entries of the volume s.\n Z = np.sum(s)\n # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.\n Z = Z + float(b)\n ### END CODE HERE ###\n\n return Z",
"_____no_output_____"
],
[
"np.random.seed(1)\na_slice_prev = np.random.randn(4, 4, 3)\nW = np.random.randn(4, 4, 3)\nb = np.random.randn(1, 1, 1)\n\nZ = conv_single_step(a_slice_prev, W, b)\nprint(\"Z =\", Z)",
"Z = -6.99908945068\n"
]
],
[
[
"**Expected Output**:\n<table>\n <tr>\n <td>\n **Z**\n </td>\n <td>\n -6.99908945068\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 3.3 - Convolutional Neural Networks - Forward pass\n\nIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: \n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/conv_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n\n**Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. \n\n**Hint**: \n1. To select a 2x2 slice at the upper left corner of a matrix \"a_prev\" (shape (5,5,3)), you would do:\n```python\na_slice_prev = a_prev[0:2,0:2,:]\n```\nThis will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.\n2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below.\n\n<img src=\"images/vert_horiz_kiank.png\" style=\"width:400px;height:300px;\">\n<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. </center></caption>\n\n\n**Reminder**:\nThe formulas relating the output shape of the convolution to the input shape is:\n$$ n_H = \\lfloor \\frac{n_{H_{prev}} - f + 2 \\times pad}{stride} \\rfloor +1 $$\n$$ n_W = \\lfloor \\frac{n_{W_{prev}} - f + 2 \\times pad}{stride} \\rfloor +1 $$\n$$ n_C = \\text{number of filters used in the convolution}$$\n\nFor this exercise, we won't worry about vectorization, and will just implement everything with for-loops.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: conv_forward\n\ndef conv_forward(A_prev, W, b, hparameters):\n \"\"\"\n Implements the forward propagation for a convolution function\n \n Arguments:\n A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)\n b -- Biases, numpy array of shape (1, 1, 1, n_C)\n hparameters -- python dictionary containing \"stride\" and \"pad\"\n \n Returns:\n Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward() function\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve dimensions from A_prev's shape (≈1 line) \n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve dimensions from W's shape (≈1 line)\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)\n n_H = int((n_H_prev - f + 2 * pad) / stride) + 1\n n_W = int((n_W_prev - f + 2 * pad) / stride) + 1\n \n # Initialize the output volume Z with zeros. (≈1 line)\n Z = np.zeros((m, n_H, n_W, n_C))\n \n # Create A_prev_pad by padding A_prev\n A_prev_pad = zero_pad(A_prev, pad)\n \n for i in range(m): # loop over the batch of training examples\n a_prev_pad = A_prev_pad[i, :, :, :] # Select ith training example's padded activation\n for h in range(n_H): # loop over vertical axis of the output volume\n for w in range(n_W): # loop over horizontal axis of the output volume\n for c in range(n_C): # loop over channels (= #filters) of the output volume\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = vert_start + f\n horiz_start = w * stride\n horiz_end = horiz_start + f\n \n # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)\n a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]\n \n # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)\n Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])\n \n ### END CODE HERE ###\n \n # Making sure your output shape is correct\n assert(Z.shape == (m, n_H, n_W, n_C))\n \n # Save information in \"cache\" for the backprop\n cache = (A_prev, W, b, hparameters)\n \n return Z, cache",
"_____no_output_____"
],
[
"np.random.seed(1)\nA_prev = np.random.randn(10,4,4,3)\nW = np.random.randn(2,2,3,8)\nb = np.random.randn(1,1,1,8)\nhparameters = {\"pad\" : 2,\n \"stride\": 2}\n\nZ, cache_conv = conv_forward(A_prev, W, b, hparameters)\nprint(\"Z's mean =\", np.mean(Z))\nprint(\"Z[3,2,1] =\", Z[3,2,1])\nprint(\"cache_conv[0][1][2][3] =\", cache_conv[0][1][2][3])",
"Z's mean = 0.0489952035289\nZ[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437\n 5.18531798 8.75898442]\ncache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n <tr>\n <td>\n **Z's mean**\n </td>\n <td>\n 0.0489952035289\n </td>\n </tr>\n <tr>\n <td>\n **Z[3,2,1]**\n </td>\n <td>\n [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437\n 5.18531798 8.75898442]\n </td>\n </tr>\n <tr>\n <td>\n **cache_conv[0][1][2][3]**\n </td>\n <td>\n [-0.20075807 0.18656139 0.41005165]\n </td>\n </tr>\n\n</table>\n",
"_____no_output_____"
],
[
"Finally, CONV layer should also contain an activation, in which case we would add the following line of code:\n\n```python\n# Convolve the window to get back one output neuron\nZ[i, h, w, c] = ...\n# Apply activation\nA[i, h, w, c] = activation(Z[i, h, w, c])\n```\n\nYou don't need to do it here. \n",
"_____no_output_____"
],
[
"## 4 - Pooling layer \n\nThe pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: \n\n- Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.\n\n- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.\n\n<table>\n<td>\n<img src=\"images/max_pool1.png\" style=\"width:500px;height:300px;\">\n<td>\n\n<td>\n<img src=\"images/a_pool.png\" style=\"width:500px;height:300px;\">\n<td>\n</table>\n\nThese pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. \n\n### 4.1 - Forward Pooling\nNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. \n\n**Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.\n\n**Reminder**:\nAs there's no padding, the formulas binding the output shape of the pooling to the input shape is:\n$$ n_H = \\lfloor \\frac{n_{H_{prev}} - f}{stride} \\rfloor +1 $$\n$$ n_W = \\lfloor \\frac{n_{W_{prev}} - f}{stride} \\rfloor +1 $$\n$$ n_C = n_{C_{prev}}$$",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: pool_forward\n\ndef pool_forward(A_prev, hparameters, mode = \"max\"):\n \"\"\"\n Implements the forward pass of the pooling layer\n \n Arguments:\n A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n hparameters -- python dictionary containing \"f\" and \"stride\"\n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters \n \"\"\"\n \n # Retrieve dimensions from the input shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve hyperparameters from \"hparameters\"\n f = hparameters[\"f\"]\n stride = hparameters[\"stride\"]\n \n # Define the dimensions of the output\n n_H = int(1 + (n_H_prev - f) / stride)\n n_W = int(1 + (n_W_prev - f) / stride)\n n_C = n_C_prev\n \n # Initialize output matrix A\n A = np.zeros((m, n_H, n_W, n_C)) \n \n ### START CODE HERE ###\n for i in range(m): # loop over the training examples\n for h in range(n_H): # loop on the vertical axis of the output volume\n for w in range(n_W): # loop on the horizontal axis of the output volume\n for c in range (n_C): # loop over the channels of the output volume\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = vert_start + f\n horiz_start = w * stride\n horiz_end = horiz_start + f\n \n # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)\n a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]\n \n # Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.\n if mode == \"max\":\n A[i, h, w, c] = np.max(a_prev_slice)\n elif mode == \"average\":\n A[i, h, w, c] = np.mean(a_prev_slice)\n \n ### END CODE HERE ###\n \n # Store the input and hparameters in \"cache\" for pool_backward()\n cache = (A_prev, hparameters)\n \n # Making sure your output shape is correct\n assert(A.shape == (m, n_H, n_W, n_C))\n \n return A, cache",
"_____no_output_____"
],
[
"np.random.seed(1)\nA_prev = np.random.randn(2, 4, 4, 3)\nhparameters = {\"stride\" : 2, \"f\": 3}\n\nA, cache = pool_forward(A_prev, hparameters)\nprint(\"mode = max\")\nprint(\"A =\", A)\nprint()\nA, cache = pool_forward(A_prev, hparameters, mode = \"average\")\nprint(\"mode = average\")\nprint(\"A =\", A)",
"mode = max\nA = [[[[ 1.74481176 0.86540763 1.13376944]]]\n\n\n [[[ 1.13162939 1.51981682 2.18557541]]]]\n\nmode = average\nA = [[[[ 0.02105773 -0.20328806 -0.40389855]]]\n\n\n [[[-0.22154621 0.51716526 0.48155844]]]]\n"
]
],
[
[
"**Expected Output:**\n<table>\n\n <tr>\n <td>\n A =\n </td>\n <td>\n [[[[ 1.74481176 0.86540763 1.13376944]]]\n\n\n [[[ 1.13162939 1.51981682 2.18557541]]]]\n\n </td>\n </tr>\n <tr>\n <td>\n A =\n </td>\n <td>\n [[[[ 0.02105773 -0.20328806 -0.40389855]]]\n\n\n [[[-0.22154621 0.51716526 0.48155844]]]]\n\n </td>\n </tr>\n\n</table>\n",
"_____no_output_____"
],
[
"Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. \n\nThe remainer of this notebook is optional, and will not be graded.\n",
"_____no_output_____"
],
[
"## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)\n\nIn modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. \n\nWhen in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly presented them below.\n\n### 5.1 - Convolutional layer backward pass \n\nLet's start by implementing the backward pass for a CONV layer. \n\n#### 5.1.1 - Computing dA:\nThis is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:\n\n$$ dA += \\sum _{h=0} ^{n_H} \\sum_{w=0} ^{n_W} W_c \\times dZ_{hw} \\tag{1}$$\n\nWhere $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. \n\nIn code, inside the appropriate for-loops, this formula translates into:\n```python\nda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]\n```\n\n#### 5.1.2 - Computing dW:\nThis is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:\n\n$$ dW_c += \\sum _{h=0} ^{n_H} \\sum_{w=0} ^ {n_W} a_{slice} \\times dZ_{hw} \\tag{2}$$\n\nWhere $a_{slice}$ corresponds to the slice which was used to generate the acitivation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. \n\nIn code, inside the appropriate for-loops, this formula translates into:\n```python\ndW[:,:,:,c] += a_slice * dZ[i, h, w, c]\n```\n\n#### 5.1.3 - Computing db:\n\nThis is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:\n\n$$ db = \\sum_h \\sum_w dZ_{hw} \\tag{3}$$\n\nAs you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. \n\nIn code, inside the appropriate for-loops, this formula translates into:\n```python\ndb[:,:,:,c] += dZ[i, h, w, c]\n```\n\n**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above. ",
"_____no_output_____"
]
],
[
[
"def conv_backward(dZ, cache):\n \"\"\"\n Implement the backward propagation for a convolution function\n \n Arguments:\n dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward(), output of conv_forward()\n \n Returns:\n dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),\n numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n dW -- gradient of the cost with respect to the weights of the conv layer (W)\n numpy array of shape (f, f, n_C_prev, n_C)\n db -- gradient of the cost with respect to the biases of the conv layer (b)\n numpy array of shape (1, 1, 1, n_C)\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve information from \"cache\"\n (A_prev, W, b, hparameters) = cache\n \n # Retrieve dimensions from A_prev's shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve dimensions from W's shape\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\"\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Retrieve dimensions from dZ's shape\n (m, n_H, n_W, n_C) = dZ.shape\n \n # Initialize dA_prev, dW, db with the correct shapes\n dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) \n dW = np.zeros((f, f, n_C_prev, n_C))\n db = np.zeros((1, 1, 1, n_C))\n\n # Pad A_prev and dA_prev\n A_prev_pad = zero_pad(A_prev, pad)\n dA_prev_pad = zero_pad(dA_prev, pad)\n \n for i in range(m): # loop over the training examples\n \n # select ith training example from A_prev_pad and dA_prev_pad\n a_prev_pad = A_prev_pad[i, :, :, :]\n da_prev_pad = dA_prev_pad[i, :, :, :]\n \n for h in range(n_H): # loop over vertical axis of the output volume\n for w in range(n_W): # loop over horizontal axis of the output volume\n for c in range(n_C): # loop over the channels of the output volume\n \n # Find the corners of the current \"slice\"\n vert_start = h * stride\n vert_end = vert_start + f\n horiz_start = w * stride\n horiz_end = horiz_start + f\n \n # Use the corners to define the slice from a_prev_pad\n a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]\n\n # Update gradients for the window and the filter's parameters using the code formulas given above\n da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]\n dW[:,:,:,c] += a_slice * dZ[i, h, w, c]\n db[:,:,:,c] += dZ[i, h, w, c]\n \n # Set the ith training example's dA_prev to the unpaded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])\n dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]\n ### END CODE HERE ###\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))\n \n return dA_prev, dW, db",
"_____no_output_____"
],
[
"np.random.seed(1)\ndA, dW, db = conv_backward(Z, cache_conv)\nprint(\"dA_mean =\", np.mean(dA))\nprint(\"dW_mean =\", np.mean(dW))\nprint(\"db_mean =\", np.mean(db))",
"dA_mean = 1.45243777754\ndW_mean = 1.72699145831\ndb_mean = 7.83923256462\n"
]
],
[
[
"** Expected Output: **\n<table>\n <tr>\n <td>\n **dA_mean**\n </td>\n <td>\n 1.45243777754\n </td>\n </tr>\n <tr>\n <td>\n **dW_mean**\n </td>\n <td>\n 1.72699145831\n </td>\n </tr>\n <tr>\n <td>\n **db_mean**\n </td>\n <td>\n 7.83923256462\n </td>\n </tr>\n\n</table>\n",
"_____no_output_____"
],
[
"## 5.2 Pooling layer - backward pass\n\nNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagation the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. \n\n### 5.2.1 Max pooling - backward pass \n\nBefore jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: \n\n$$ X = \\begin{bmatrix}\n1 && 3 \\\\\n4 && 2\n\\end{bmatrix} \\quad \\rightarrow \\quad M =\\begin{bmatrix}\n0 && 0 \\\\\n1 && 0\n\\end{bmatrix}\\tag{4}$$\n\nAs you can see, this function creates a \"mask\" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. \n\n**Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. \nHints:\n- [np.max()]() may be helpful. It computes the maximum of an array.\n- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:\n```\nA[i,j] = True if X[i,j] = x\nA[i,j] = False if X[i,j] != x\n```\n- Here, you don't need to consider cases where there are several maxima in a matrix.",
"_____no_output_____"
]
],
[
[
"def create_mask_from_window(x):\n \"\"\"\n Creates a mask from an input matrix x, to identify the max entry of x.\n \n Arguments:\n x -- Array of shape (f, f)\n \n Returns:\n mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.\n \"\"\"\n \n ### START CODE HERE ### (≈1 line)\n mask = (x == np.max(x))\n ### END CODE HERE ###\n \n return mask",
"_____no_output_____"
],
[
"np.random.seed(1)\nx = np.random.randn(2,3)\nmask = create_mask_from_window(x)\nprint('x = ', x)\nprint(\"mask = \", mask)",
"x = [[ 1.62434536 -0.61175641 -0.52817175]\n [-1.07296862 0.86540763 -2.3015387 ]]\nmask = [[ True False False]\n [False False False]]\n"
]
],
[
[
"**Expected Output:** \n\n<table> \n<tr> \n<td>\n\n**x =**\n</td>\n\n<td>\n\n[[ 1.62434536 -0.61175641 -0.52817175] <br>\n [-1.07296862 0.86540763 -2.3015387 ]]\n\n </td>\n</tr>\n\n<tr> \n<td>\n**mask =**\n</td>\n<td>\n[[ True False False] <br>\n [False False False]]\n</td>\n</tr>\n\n\n</table>",
"_____no_output_____"
],
[
"Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will \"propagate\" the gradient back to this particular input value that had influenced the cost. ",
"_____no_output_____"
],
[
"### 5.2.2 - Average pooling - backward pass \n\nIn max pooling, for each input window, all the \"influence\" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.\n\nFor example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: \n$$ dZ = 1 \\quad \\rightarrow \\quad dZ =\\begin{bmatrix}\n1/4 && 1/4 \\\\\n1/4 && 1/4\n\\end{bmatrix}\\tag{5}$$\n\nThis implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. \n\n**Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)",
"_____no_output_____"
]
],
[
[
"def distribute_value(dz, shape):\n \"\"\"\n Distributes the input value in the matrix of dimension shape\n \n Arguments:\n dz -- input scalar\n shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz\n \n Returns:\n a -- Array of size (n_H, n_W) for which we distributed the value of dz\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve dimensions from shape (≈1 line)\n (n_H, n_W) = shape\n \n # Compute the value to distribute on the matrix (≈1 line)\n average = dz / (n_H * n_W)\n \n # Create a matrix where every entry is the \"average\" value (≈1 line)\n a = np.ones(shape) * average\n ### END CODE HERE ###\n \n return a",
"_____no_output_____"
],
[
"a = distribute_value(2, (2,2))\nprint('distributed value =', a)",
"distributed value = [[ 0.5 0.5]\n [ 0.5 0.5]]\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n<tr> \n<td>\ndistributed_value =\n</td>\n<td>\n[[ 0.5 0.5]\n<br\\> \n[ 0.5 0.5]]\n</td>\n</tr>\n</table>",
"_____no_output_____"
],
[
"### 5.2.3 Putting it together: Pooling backward \n\nYou now have everything you need to compute backward propagation on a pooling layer.\n\n**Exercise**: Implement the `pool_backward` function in both modes (`\"max\"` and `\"average\"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ.",
"_____no_output_____"
]
],
[
[
"def pool_backward(dA, cache, mode = \"max\"):\n \"\"\"\n Implements the backward pass of the pooling layer\n \n Arguments:\n dA -- gradient of cost with respect to the output of the pooling layer, same shape as A\n cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters \n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev\n \"\"\"\n \n ### START CODE HERE ###\n \n # Retrieve information from cache (≈1 line)\n (A_prev, hparameters) = cache\n \n # Retrieve hyperparameters from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n f = hparameters['f']\n \n # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)\n m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape\n m, n_H, n_W, n_C = dA.shape\n \n # Initialize dA_prev with zeros (≈1 line)\n dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))\n \n for i in range(m): # loop over the training examples\n \n # select training example from A_prev (≈1 line)\n a_prev = A_prev[i, :, :, :]\n \n for h in range(n_H): # loop on the vertical axis\n for w in range(n_W): # loop on the horizontal axis\n for c in range(n_C): # loop over the channels (depth)\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = vert_start + f\n horiz_start = w * stride\n horiz_end = horiz_start + f\n \n # Compute the backward propagation in both modes.\n if mode == \"max\":\n \n # Use the corners and \"c\" to define the current slice from a_prev (≈1 line)\n a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]\n # Create the mask from a_prev_slice (≈1 line)\n mask = create_mask_from_window(a_prev_slice)\n # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]\n \n elif mode == \"average\":\n \n # Get the value a from dA (≈1 line)\n da = dA[i, h, w, c]\n # Define the shape of the filter as fxf (≈1 line)\n shape = (f, f)\n # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)\n \n ### END CODE ###\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == A_prev.shape)\n \n return dA_prev",
"_____no_output_____"
],
[
"np.random.seed(1)\nA_prev = np.random.randn(5, 5, 3, 2)\nhparameters = {\"stride\" : 1, \"f\": 2}\nA, cache = pool_forward(A_prev, hparameters)\ndA = np.random.randn(5, 4, 2, 2)\n\ndA_prev = pool_backward(dA, cache, mode = \"max\")\nprint(\"mode = max\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev[1,1] = ', dA_prev[1,1]) \nprint()\ndA_prev = pool_backward(dA, cache, mode = \"average\")\nprint(\"mode = average\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev[1,1] = ', dA_prev[1,1]) ",
"mode = max\nmean of dA = 0.145713902729\ndA_prev[1,1] = [[ 0. 0. ]\n [ 5.05844394 -1.68282702]\n [ 0. 0. ]]\n\nmode = average\nmean of dA = 0.145713902729\ndA_prev[1,1] = [[ 0.08485462 0.2787552 ]\n [ 1.26461098 -0.25749373]\n [ 1.17975636 -0.53624893]]\n"
]
],
[
[
"**Expected Output**: \n\nmode = max:\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\n**dA_prev[1,1] =** \n</td>\n<td>\n[[ 0. 0. ] <br>\n [ 5.05844394 -1.68282702] <br>\n [ 0. 0. ]]\n</td>\n</tr>\n</table>\n\nmode = average\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\n**dA_prev[1,1] =** \n</td>\n<td>\n[[ 0.08485462 0.2787552 ] <br>\n [ 1.26461098 -0.25749373] <br>\n [ 1.17975636 -0.53624893]]\n</td>\n</tr>\n</table>",
"_____no_output_____"
],
[
"### Congratulations !\n\nCongratulation on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e72bac556a3681591e828aa925b8f7c65d11d70a | 301,799 | ipynb | Jupyter Notebook | notebook/modelling/maxcut.ipynb | kreibaum/JuMPTutorials.jl | a2d1744d9d10a013557059e126bfcdf3b5005191 | [
"MIT"
] | 75 | 2020-06-15T13:05:14.000Z | 2022-02-28T12:58:48.000Z | notebook/modelling/maxcut.ipynb | kreibaum/JuMPTutorials.jl | a2d1744d9d10a013557059e126bfcdf3b5005191 | [
"MIT"
] | 34 | 2019-05-27T05:36:48.000Z | 2019-08-22T09:52:29.000Z | notebook/modelling/maxcut.ipynb | kreibaum/JuMPTutorials.jl | a2d1744d9d10a013557059e126bfcdf3b5005191 | [
"MIT"
] | 19 | 2019-10-09T09:32:56.000Z | 2020-06-02T17:41:21.000Z | 202.007363 | 9,920 | 0.596208 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72bb1e0eed8d198f56ab871a4d32eb2e65d5494 | 4,101 | ipynb | Jupyter Notebook | ipynb/Diamond-Princess.ipynb | RobertRosca/oscovida.github.io | d609949076e3f881e38ec674ecbf0887e9a2ec25 | [
"CC-BY-4.0"
] | null | null | null | ipynb/Diamond-Princess.ipynb | RobertRosca/oscovida.github.io | d609949076e3f881e38ec674ecbf0887e9a2ec25 | [
"CC-BY-4.0"
] | null | null | null | ipynb/Diamond-Princess.ipynb | RobertRosca/oscovida.github.io | d609949076e3f881e38ec674ecbf0887e9a2ec25 | [
"CC-BY-4.0"
] | null | null | null | 28.678322 | 170 | 0.512314 | [
[
[
"# Diamond Princess\n\n* Homepage of project: https://oscovida.github.io\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Diamond-Princess.ipynb)",
"_____no_output_____"
]
],
[
[
"import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")",
"_____no_output_____"
],
[
"%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *",
"_____no_output_____"
],
[
"overview(\"Diamond Princess\");",
"_____no_output_____"
],
[
"# load the data\ncases, deaths, region_label = get_country_data(\"Diamond Princess\")\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 500 rows\npd.set_option(\"max_rows\", 500)\n\n# display the table\ntable",
"_____no_output_____"
]
],
[
[
"# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Diamond-Princess.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook",
"_____no_output_____"
],
[
"# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------",
"_____no_output_____"
]
],
[
[
"print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")",
"_____no_output_____"
],
[
"# to force a fresh download of data, run \"clear_cache()\"",
"_____no_output_____"
],
[
"print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
e72bc44bc28b4cec71a4dc9e2bd834117c760384 | 28,989 | ipynb | Jupyter Notebook | Lab_Notebooks/.ipynb_checkpoints/S4_3_THOR-checkpoint.ipynb | Gjuri/2022_ML_Earth_Env_Sci | 2d0b823ad30ca1ef295df6ea76729201b5236269 | [
"MIT"
] | null | null | null | Lab_Notebooks/.ipynb_checkpoints/S4_3_THOR-checkpoint.ipynb | Gjuri/2022_ML_Earth_Env_Sci | 2d0b823ad30ca1ef295df6ea76729201b5236269 | [
"MIT"
] | null | null | null | Lab_Notebooks/.ipynb_checkpoints/S4_3_THOR-checkpoint.ipynb | Gjuri/2022_ML_Earth_Env_Sci | 2d0b823ad30ca1ef295df6ea76729201b5236269 | [
"MIT"
] | null | null | null | 38.600533 | 397 | 0.564283 | [
[
[
"<a href=\"https://colab.research.google.com/github/tbeucler/2022_ML_Earth_Env_Sci/blob/main/Lab_Notebooks/S4_3_THOR.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/ESLP1e1BfUxKu-hchh7wZKcBZiG3bJnNbnt0PDDm3BK-9g?download=1'>\n\n<center> \nPhoto Credits: <a href=\"https://unsplash.com/photos/zCMWw56qseM\">Sea Foam</a> by <a href=\"https://unsplash.com/@unstable_affliction\">Ivan Bandura</a> licensed under the <a href='https://unsplash.com/license'>Unsplash License</a> \n</center>\n\n\n>*A frequently asked question related to this work is “Which mixing processes matter most for climate?” As with many alluringly comprehensive sounding questions, the answer is “it depends.”* <br>\n> $\\qquad$ MacKinnon, Jennifer A., et al. <br>$\\qquad$\"Climate process team on internal wave–driven ocean mixing.\" <br>$\\qquad$ Bulletin of the American Meteorological Society 98.11 (2017): 2429-2454.",
"_____no_output_____"
],
[
"In week 4's final notebook, we will perform clustering to identify regimes in data taken from the realistic numerical ocean model [Estimating the Circulation and Climate of the Ocean](https://www.ecco-group.org/products-ECCO-V4r4.htm). Sonnewald et al. point out that finding robust regimes is intractable with a naïve approach, so we will be using using reduced dimensionality data. \n\nIt is worth pointing out, however, that the reduction was done with an equation instead of one of the algorithms we discussed this week. If you're interested in the full details, you can check out [Sonnewald et al. (2019)](https://doi.org/10.1029/2018EA000519)",
"_____no_output_____"
],
[
"# Setup",
"_____no_output_____"
],
[
"First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.",
"_____no_output_____"
]
],
[
[
"# Python ≥3.5 is required\nimport sys\nassert sys.version_info >= (3, 5)\n\n# Scikit-Learn ≥0.20 is required\nimport sklearn\nassert sklearn.__version__ >= \"0.20\"\n\n# Common imports\nimport numpy as np\nimport os\nimport xarray as xr\nimport pooch\n\n# to make this notebook's output stable across runs\nrnd_seed = 42\nrnd_gen = np.random.default_rng(rnd_seed)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nmpl.rc('axes', labelsize=14)\nmpl.rc('xtick', labelsize=12)\nmpl.rc('ytick', labelsize=12)\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"dim_reduction\"\nIMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\nos.makedirs(IMAGES_PATH, exist_ok=True)\n\ndef save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format=fig_extension, dpi=resolution)",
"_____no_output_____"
]
],
[
[
"Here we're going to import the [StandardScaler](https://duckduckgo.com/sklearn.preprocessing.standardscaler) function from scikit's preprocessing tools, import the [scikit clustering library](https://duckduckgo.com/sklearn.clustering), and set up the colormap that we will use when plotting.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\nimport sklearn.cluster as cluster\n\nfrom matplotlib.colors import LinearSegmentedColormap, ListedColormap\ncolors = ['royalblue', 'cyan','yellow', 'orange', 'magenta', 'red']\nmycmap = ListedColormap(colors)",
"_____no_output_____"
]
],
[
[
"# Data Preprocessing",
"_____no_output_____"
],
[
"The first thing we need to do is retrieve the list of files we'll be working on. We'll rely on pooch to access the files hosted on the cloud.",
"_____no_output_____"
]
],
[
[
"# Retrieve the files from the cloud using Pooch.\ndata_url = 'https://unils-my.sharepoint.com/:u:/g/personal/tom_beucler_unil_ch/EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q?download=1'\nhash = '3f41661c7a087fa7d7af1d2a8baf95c065468f8a415b8514baedda2f5bc18bb5'\n\nfiles = pooch.retrieve(data_url, known_hash=hash, processor=pooch.Unzip())\n[print(filename) for filename in files];",
"_____no_output_____"
]
],
[
[
"And now that we have a set of files to load, let's set up a dictionary with the variable names as keys and the data in numpy array format as the values.",
"_____no_output_____"
]
],
[
[
"# Let's read in the variable names from the filepaths\nvar_names = []\n[var_names.append(path.split('/')[-1][:-4]) for path in files]\n\n# And build a dictionary of the data variables keyed to the filenames\ndata_dict = {}\nfor idx, val in enumerate(var_names):\n data_dict[val] = np.load(files[idx]).T\n\n#We'll print the name of the variable loaded and the associated shape \n[print(f'Varname: {item[0]:<15} Shape: {item[1].shape}') for item in data_dict.items()];",
"_____no_output_____"
]
],
[
[
"We now have a dictionary that uses the filename as the key! Feel free to explore the data (e.g., loading the keys, checking the shape of the arrays, plotting)",
"_____no_output_____"
]
],
[
[
"#Feel free to explore the data dictionary",
"_____no_output_____"
]
],
[
[
"We're eventually going to have an array of cluster classes that we're going to use to label dynamic regimes in the ocean. Let's make an array full of NaN (not-a-number) values that has the same shape as our other variables and store it in the data dictionary. ",
"_____no_output_____"
]
],
[
[
"data_dict['clusters'] = np.full_like(data_dict['BPT'],np.nan)",
"_____no_output_____"
]
],
[
[
"### Reformatting as Xarray",
"_____no_output_____"
],
[
"In the original paper, this data was loaded as numpy arrays. However, we'll take this opportunity to demonstrate the same procedure while relying on xarray. First, let's instantiate a blank dataset.<br><br>\n\n###**Q1) Make a blank xarray dataset.**<br>\n*Hint: Look at the xarray [documentation](https://duckduckgo.com/?q=xarray+dataset)*",
"_____no_output_____"
]
],
[
[
"# Make your blank dataset here! Instantiate the class without passing any parameters.",
"_____no_output_____"
]
],
[
[
"<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EZv_qqVz_h1Hio6Nq11ckScBb01bGb9jtNKzdqAg1TPrKQ?download=1'>\n<center> Image taken from the xarray <a href='https://xarray.pydata.org/en/stable/user-guide/data-structures.html#:~:text=Dataset-,xarray.,from%20the%20netCDF%20file%20format.'> <i>Data Structure documentation</i> </a> </center>\n\nIn order to build the dataset, we're going to need a set of coordinate vectors that help us map out our data! For our data, we have two axes corresponding to longitude ($\\lambda$) and latitude ($\\phi$). \n\nWe don't know much about how many lat/lon points we have, so let's explore one of the variables to make sense of the data the shape of one of the numpy arrays.\n\n###**Q2) Visualize the data using a plot and printing the shape of the data to the console output.**",
"_____no_output_____"
]
],
[
[
"#Complete the code\n# Let's print out an image of the Bottom Pressure Torques (BPT)\nplt.imshow( ________ , origin='lower')\n\n# It will also be useful to store and print out the shape of the data\ndata_shape = _________.shape\nprint(data_shape)",
"_____no_output_____"
]
],
[
[
"Now that we know how the resolution of our data, we can prepare a set of axis arrays. We will use these to organize the data we will feed into the dataset.\n\n###**Q3) Prepare the latitude and longitude arrays to be used as axes for our dataset**\n\n*Hint 1: You can build ordered numpy arrays using, e.g., [numpy.linspace](https://numpy.org/doc/stable/reference/generated/numpy.linspace.html) and [numpy.arange](https://numpy.org/doc/stable/reference/generated/numpy.arange.html)*\n\n*Hint 2: You can rely on the data_shape variable we loaded previously to know how many points you need along each axis*",
"_____no_output_____"
]
],
[
[
"#Complete the code\n# Let's prepare the lat and lon axes for our data.\nlat = \nlon = ",
"_____no_output_____"
]
],
[
[
"Now that we have the axes we need, we can build xarray [*data arrays*](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) for each data variable. Since we'll be doing it several times, let's go ahead and defined a function that does this for us!\n\n###**Q4) Define a function that takes in: 1) an array name, 2) a numpy array, 3) a lat vector, and 4) a lon vector. The function should return a dataArray with lat-lon as the coordinate dimensions**",
"_____no_output_____"
]
],
[
[
"#Complete the code\ndef np_to_xr(array_name, array, lat, lon):\n #building the xarrray\n da = xr.DataArray(data = ______, # Data to be stored\n \n #set the name of dimensions for the dataArray \n dims = ['lat', '___'],\n \n #Set the dictionary pointing the name dimensions to np arrays \n coords = {'lat':___,\n '___':___},\n \n name=______)\n return ______",
"_____no_output_____"
]
],
[
[
"We're now ready to build our data array! Let's iterate through the items and merge our blank dataset with the data arrays we create.\n\n###**Q5) Build the dataset from the data dictionary**\n\n*Hint: We'll be using the xarray merge command to put everything together.*",
"_____no_output_____"
]
],
[
[
"# The code in the notebook assumes you named your dataset ds. Change it to \n# whatever you used!\n\n# Complete the code\nfor key, item in data_dict.items():\n # Let's make use of our np_to_xr function to get the data as a dataArray\n da = np_to_xr(key, item, lat, lon)\n\n # Merge the dataSet with the dataArray here!\n ds = xr.merge( [____ , ____ ] )",
"_____no_output_____"
]
],
[
[
"Congratulations! You should now have a nicely set up xarray dataset. This let's you access a ton of nice features, e.g.:\n> Data plotting by calling, e.g., `ds.BPT.plot.imshow(cmap='ocean')`\n> \n> Find statistical measures of all variables at once! (e.g.: `ds.std()`, `ds.mean()`)",
"_____no_output_____"
]
],
[
[
"# Play around with the dataset here if you'd like :)",
"_____no_output_____"
]
],
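[
[
"For instance, here is what two of the conveniences mentioned above look like (a minimal sketch; it assumes the merged dataset is named `ds`, as the loop above does):",
"_____no_output_____"
]
],
[
[
"# Plot one of the fields straight from the dataset\nds.BPT.plot.imshow(cmap='ocean')\n\n# Summary statistics for all variables at once\nprint(ds.mean())\nprint(ds.std())",
"_____no_output_____"
]
],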
[
[
"Now we want to find clusters of data considering each grid point as a datapoint with 5 dimensional data. However, we went through a lot of work to get the data nicely associated with a lat and lon - do we really want to undo that?\n\nLuckily, xarray develops foresaw the need to group dimensions together. Let's create a 'flat' version of our dataset using the [`stack`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.stack.html) method. Let's make a flattened version of our dataset.\n\n###**Q6) Store a flattened version of our dataset**\n\n*Hint 1: You'll need to pass a dictionary with the 'new' stacked dimension name as the key and the 'flattened' dimensions as the values.*\n\n*Hint 2: xarrays have a ['.values' attribute](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.values.html) that return their data as a numpy array.*",
"_____no_output_____"
]
],
[
[
"# Complete the code\n# Let's store the stacked version of our dataset\nstacked = ds.stack( { ____ :[ ___ , ___ ] } )\n\n# And verify the shape of our data\nprint(stacked.to_array()._____._____)",
"_____no_output_____"
]
],
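[
[
"If the stacking mechanics feel unclear, here is a toy round trip on synthetic data (a hedged sketch; the names `toy`, `x`, `y`, and `flat` are illustrative and unrelated to our dataset):",
"_____no_output_____"
]
],
[
[
"# A tiny 2x3 example to illustrate stack/unstack\ntoy = xr.Dataset({'v': (('x', 'y'), np.arange(6).reshape(2, 3))},\n                 coords={'x': [0, 1], 'y': [10, 20, 30]})\n\nflat_toy = toy.stack({'flat': ['x', 'y']})  # 2x3 grid -> 6 points\nprint(flat_toy.v.shape)  # (6,)\n\nround_trip = flat_toy.unstack('flat')  # back to the 2x3 grid\nprint(round_trip.v.shape)  # (2, 3)",
"_____no_output_____"
]
],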
[
[
"So far we've ignored an important point - we're supposed to have 5 variables, not 6! As you may have guessed, noiseMask helps us throw away data we dont want (e.g., from land mass or bad pixels). \n\nWe're now going to clean up the stacked dataset using the noise mask. Relax and read through the code, since there won't be a question in this part :) ",
"_____no_output_____"
]
],
[
[
"# Let's redefine stacked as all the points where noiseMask = 1, since noisemask\n# is binary data.\n\nprint(f'Dataset shape before processing: {stacked.to_array().values.shape}')\n\nprint(\"Let's do some data cleaning!\")\nprint(f'Points before cleaning: {len(stacked.BPT)}')\nstacked = stacked.where(stacked.noiseMask==1, drop=True)\nprint(f'Points after cleaning: {len(stacked.BPT)}')\n\n# We also no longer need the noiseMask variable, so we can just drop it.\n\nprint('And drop the noisemask variable...')\nprint(f'Before dropping: {stacked.to_array().values.shape}')\nstacked = stacked.drop('noiseMask')\nprint(f'Dataset shape after processing: {stacked.to_array().values.shape}')",
"_____no_output_____"
]
],
[
[
"We now have several thousand points which we want to divide into clusters using the kmeans clustering algorithm (you can check out the documentation for scikit's implementation of kmeans [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html)).\n\nYou'll note that the algorithm expects the input data `X` to be fed as `(n_samples, n_features)`. This is the opposite of what we have! Let's go ahead and make a copy to a numpy array has the axes in the right order.\n\nYou'll need xarray's [`.to_array()`](https://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_array.html) method and [`.values`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.values.html) parameter, as well as numpy's [`.moveaxis`](https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html) method.\n\n###**Q7) Load the datapoints into a numpy array following the convention where the 0th axis corresponds to the samples and the 1st axis corresponds to the features.** ",
"_____no_output_____"
]
],
[
[
"# Complete the code\ninput_data = np._____(stacked._____()._____, # data to reshape\n 'number', # source axis as integer, \n 'number') # destination axis as integer\n\n# Does the input data look the way it's supposed to? Print the shape.\nprint(________)",
"_____no_output_____"
]
],
[
[
"In previous classes we discussed the importance of the scaling the data before implementing our algorithms. Now that our data is all but ready to be fed into an algorithm, let's make sure that it's been scaled.\n\n###**Q8) Scale the input data**\n\n*Hint 1: Import the [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) class from scikit and instantiate it*\n\n*Hint 2: Update the input array to the one returned by the [`.fit_transform(X)`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler.fit_transform) method*",
"_____no_output_____"
]
],
[
[
"#Write your scaling code here",
"_____no_output_____"
]
],
[
[
"Now we're finally ready to train our algorithm! Let's load up the kmeans model and find clusters in our data.\n\n###**Q9) Instantiate the kmeans clustering algorithm, and then fit it using 50 clusters, trying out 10 different initial centroids.**\n\n*Hint 1: `sklearn.cluster` was imported as `cluser` during the notebook setup! [Here is the scikit `KMeans` documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html).*\n\n*Hint 2: Use the `fit_predict` method to organize the data into clusters*\n\n*Warning! : Fitting the data may take some time (under a minute during the testing of the notebook)",
"_____no_output_____"
]
],
[
[
"# Complete the code\nkmeans = cluster._____(________ =50, # Number of clusters\n ________ =42, # setting a random state\n ________ =10, # Number of initial centroid states to try\n verbose = 1) # Verbosity so we know things are working\n\ncluster_labels = kmeans.______(____) # Feed in out scaled input data!",
"_____no_output_____"
]
],
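[
[
"Optionally, if you are curious how sensitive the clustering is to the number of clusters, here is a quick inertia scan on a small subsample (a hedged sketch; it assumes `input_data` is the scaled array from Q8 and reuses `rnd_gen` from the setup cell):",
"_____no_output_____"
]
],
[
[
"# Optional: scan a few cluster counts on a 5000-point subsample (for speed)\nsub = input_data[rnd_gen.choice(len(input_data), size=5000, replace=False)]\nks = [10, 25, 50, 75]\ninertias = []\nfor k in ks:\n    km = cluster.KMeans(n_clusters=k, random_state=42, n_init=3).fit(sub)\n    inertias.append(km.inertia_)\nplt.plot(ks, inertias, marker='o')\nplt.xlabel('n_clusters')\nplt.ylabel('inertia')",
"_____no_output_____"
]
],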
[
[
"We now have a set of cluster labels that group the data into 50 similar groups. Let's store it in our stacked dataset!",
"_____no_output_____"
]
],
[
[
"# Let's run this line\nstacked['clusters'].values = cluster_labels",
"_____no_output_____"
]
],
[
[
"We now have a set of labels, but they're stored in a flattened array. Since we'd like to see the data as a map, we still have some work to do. Let's go back to a 2D representation of our values.\n\n###**Q10) Turn the flattened xarray back into a set of 2D fields**\n*Hint 1: xarrays have an [`.unstack` method](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.unstack.html) that you will find to be very useful for this.",
"_____no_output_____"
]
],
[
[
"# Complete the code:\nprocessed_ds = ds.____()",
"_____no_output_____"
]
],
[
[
"Now we have an unstacked dataset, and can now easily plot out the clusters we found!\n\n###**Q11) Plot the 'cluster' variable using the buil-in xarray function**\n*Hint: `.plot()` [link text](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.plot.html) let's you access the xarray implementations of [`pcolormesh`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.pcolormesh.html) and [`imshow`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html).*",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
[
[
"Compare your results to those from the paper:\n<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EdLh6Ds0yVlFivyfIOXlV74B_G35dVz87GFagzylIG-gZA?download=1'>",
"_____no_output_____"
],
[
"We now want to find the 5 most common regimes, and group the rest. This isn't straightforward, so we've gone ahead and prepared the code for you. Run through it and try to understand what the code is doing!",
"_____no_output_____"
]
],
[
[
"# Make field filled with -1 vals so unprocessed points are easily retrieved.\n# Noise masked applied automatically by using previously found labels as base.\nprocessed_ds['final_clusters'] = (processed_ds.clusters * 0) - 1\n\n# Find the 5 most common cluster labels\ntop_clusters = processed_ds.groupby('clusters').count().sortby('BPT').tail(5).clusters.values\n\n#Build the set of indices for the cluster data, used for rewriting cluster labels\nfor idx, label in enumerate(top_clusters):\n #Find the indices where the label is found\n indices = (processed_ds.clusters == label)\n\n processed_ds['final_clusters'].values[indices] = 4-idx\n\n# Set the remaining unlabeled regions to category 5 \"non-linear\"\nprocessed_ds['final_clusters'].values[processed_ds.final_clusters==-1] = 5\n\n# Plot the figure\nprocessed_ds.final_clusters.plot.imshow(cmap=mycmap, figsize=(18,8));",
"_____no_output_____"
],
[
"# Feel free to use this space ",
"_____no_output_____"
]
],
[
[
"Compare it to the regimes found in the paper:\n<img src='https://unils-my.sharepoint.com/:i:/g/personal/tom_beucler_unil_ch/EehuR9cUfaJImrw4DCAzDPoBiGuG7R3Ys6453Umi1cN_OQ?download=1'>\n\n",
"_____no_output_____"
],
[
"The authors then went on to train neural networks ***to infer in-depth dynamics from data that is largely readily available from for example CMIP6 models, using NN methods to infer the source of predictive skill*** and ***to apply the trained Ensemble MLP to a climate model in order to assess circulation changes under global heating***. \n\nFor our purposes, however, we will say goodbye to *THOR* at this point 😃",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e72bc7df13b5f0c2fe1a214a524bda9a933fc614 | 85,218 | ipynb | Jupyter Notebook | week1_05_BERT_and_LDA/week05_BERT_Fine_Tunning.ipynb | GendalfSeriyy/ml-mipt | 647a482baba57d8b920392ed534c8179194dfea7 | [
"MIT"
] | 6 | 2021-11-17T18:34:34.000Z | 2022-01-18T18:29:07.000Z | week1_05_BERT_and_LDA/week05_BERT_Fine_Tunning.ipynb | GendalfSeriyy/ml-mipt | 647a482baba57d8b920392ed534c8179194dfea7 | [
"MIT"
] | 15 | 2021-09-12T15:06:13.000Z | 2022-03-31T19:02:08.000Z | week1_05_BERT_and_LDA/week05_BERT_Fine_Tunning.ipynb | GendalfSeriyy/ml-mipt | 647a482baba57d8b920392ed534c8179194dfea7 | [
"MIT"
] | 2 | 2020-09-30T21:22:47.000Z | 2021-01-05T14:44:01.000Z | 48.919633 | 16,000 | 0.595285 | [
[
[
"## week05: BERT fine tunning\n*Based on [BERT Fine-Tuning Sentence Classification notebook on Colab](https://colab.research.google.com/drive/1ywsvwO6thOVOrfagjjfuxEf6xVRxbUNO#scrollTo=6J-FYdx6nFE_), refined by [Anastasia Ianina](https://www.linkedin.com/in/anastasia-ianina/)*",
"_____no_output_____"
],
[
"We will use BERT implementation from `pytorch-transformers` library, which contains almost all recent architectures.",
"_____no_output_____"
]
],
[
[
"# ! pip install pytorch-transformers\n# ! wget -O negative.csv 'https://www.dropbox.com/s/qwp22e0t3d3n2xa/negative.csv?dl=0'\n# ! wget -O positive.csv 'https://www.dropbox.com/s/t6nxxuplzsyica6/positive.csv?dl=0'",
"_____no_output_____"
],
[
"import torch\nfrom torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler\nfrom keras.preprocessing.sequence import pad_sequences\nfrom sklearn.model_selection import train_test_split\nfrom pytorch_transformers import BertTokenizer, BertConfig\nfrom pytorch_transformers import AdamW, BertForSequenceClassification\nfrom tqdm import tqdm, trange\nimport pandas as pd\nimport io\nimport numpy as np\nfrom sklearn.metrics import accuracy_score\nimport matplotlib.pyplot as plt",
"Using TensorFlow backend.\n"
]
],
[
[
"Если у вас есть GPU, будем использовать ее для обучения. Тем не менее, этот ноутбук можно выполнить и с помощью только CPU. Правда, это будет значительно дольше.",
"_____no_output_____"
]
],
[
[
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\nif device == torch.device('cpu'):\n print('Using cpu')\nelse:\n n_gpu = torch.cuda.device_count()\n print('Using {} GPUs'.format(torch.cuda.get_device_name(0)))",
"Using cpu\n"
]
],
[
[
"## Загрузка данных\n",
"_____no_output_____"
],
[
"Мы выбрали не очень известный, необычный датасет с разметкой сентимента русскоязычных твитов (подробнее про него в [статье](http://www.swsys.ru/index.php?page=article&id=3962&lang=)). В корпусе, который мы использовали 114,911 положительных и 111,923 отрицательных записей. Загрузить его можно [тут](https://study.mokoron.com/).",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\npos_texts = pd.read_csv('positive.csv', encoding='utf8', sep=';', header=None)\nneg_texts = pd.read_csv('negative.csv', encoding='utf8', sep=';', header=None)",
"_____no_output_____"
],
[
"pos_texts.sample(5)",
"_____no_output_____"
]
],
[
[
"Обратите внимание на специальные токены [CLS] и [SEP], которые мы добавляем в началои конец предложения.",
"_____no_output_____"
]
],
[
[
"sentences = np.concatenate([pos_texts[3].values, neg_texts[3].values])\n\nsentences = [\"[CLS] \" + sentence + \" [SEP]\" for sentence in sentences]\nlabels = [[1] for _ in range(pos_texts.shape[0])] + [[0] for _ in range(neg_texts.shape[0])]\n",
"_____no_output_____"
],
[
"assert len(sentences) == len(labels) == pos_texts.shape[0] + neg_texts.shape[0]",
"_____no_output_____"
],
[
"print(sentences[1000])",
"[CLS] Дим, ты помогаешь мне, я тебе, все взаимно, все правильно) [SEP]\n"
],
[
"from sklearn.model_selection import train_test_split\n\ntrain_sentences, test_sentences, train_gt, test_gt = train_test_split(sentences, labels, test_size=0.3)",
"_____no_output_____"
],
[
"print(len(train_gt), len(test_gt))",
"158783 68051\n"
]
],
[
[
"## Inputs",
"_____no_output_____"
],
[
"Теперь импортируем токенизатор для BERT'а, который превратит наши тексты в набор токенов, соответствующих тем, что встречаются в словаре предобученной модели.",
"_____no_output_____"
]
],
[
[
"from pytorch_transformers import BertTokenizer, BertConfig\n\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)\n\ntokenized_texts = [tokenizer.tokenize(sent) for sent in train_sentences]\nprint (tokenized_texts[0])",
"100%|██████████| 231508/231508 [00:00<00:00, 598241.24B/s]\n"
]
],
[
[
"BERT'у нужно предоставить специальный формат входных данных.\n\n\n- **input ids**: последовательность чисел, отождествляющих каждый токен с его номером в словаре.\n- **labels**: вектор из нулей и единиц. В нашем случае нули обозначают негативную эмоциональную окраску, единицы - положительную.\n- **segment mask**: (необязательно) последовательность нулей и единиц, которая показывает, состоит ли входной текст из одного или двух предложений. Для случая одного предложения получится вектор из одних нулей. Для двух: <length_of_sent_1> нулей и <length_of_sent_2> единиц.\n- **attention mask**: (необязательно) последовательность нулей и единиц, где единицы обозначают токены предложения, нули - паддинг.",
"_____no_output_____"
],
[
"Паддинг нужен для того, чтобы BERT мог работать с предложениями разной длины. Выбираем максимально возможную длину предложения (в нашем случае пусть это будет 100). \n\nТеперь более длинные предложения будем обрезать до 100 токенов, а для более коротких использовать паддинг. Возьмем готовую функцию `pad_sequences` из библиотеки `keras`.\n\n",
"_____no_output_____"
]
],
[
[
"input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]\ninput_ids = pad_sequences(\n input_ids,\n maxlen=100,\n dtype=\"long\",\n truncating=\"post\",\n padding=\"post\"\n)\nattention_masks = [[float(i>0) for i in seq] for seq in input_ids]",
"_____no_output_____"
]
],
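[
[
"To see what this produces for a single tweet, we can inspect the first example (a quick sketch using the arrays we just built):",
"_____no_output_____"
]
],
[
[
"print(train_sentences[0])\nprint(input_ids[0][:20])        # first 20 token ids; 0 marks padding\nprint(attention_masks[0][:20])  # 1.0 for real tokens, 0.0 for padding",
"_____no_output_____"
]
],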
[
[
"Делим данные на `train` и `val`:",
"_____no_output_____"
]
],
[
[
"train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(\n input_ids, train_gt, \n random_state=42,\n test_size=0.1\n)\n\ntrain_masks, validation_masks, _, _ = train_test_split(\n attention_masks,\n input_ids,\n random_state=42,\n test_size=0.1\n)",
"_____no_output_____"
]
],
[
[
"Преобразуем данные в `pytorch` тензоры:",
"_____no_output_____"
]
],
[
[
"train_inputs = torch.tensor(train_inputs)\ntrain_labels = torch.tensor(train_labels)\ntrain_masks = torch.tensor(train_masks)",
"_____no_output_____"
],
[
"validation_inputs = torch.tensor(validation_inputs)\nvalidation_labels = torch.tensor(validation_labels)\nvalidation_masks = torch.tensor(validation_masks)",
"_____no_output_____"
],
[
"train_labels",
"_____no_output_____"
]
],
[
[
"Воспользуемся классом `DataLoader`. Это поможет нам использовать эффективнее память во время тренировки модели, так как нам не нужно будет загружать в память весь датасет. Данные по батчам будем разбивать произвольно с помощью RandomSampler. Также обратите внимание на размер батча: если во время тренировки возникнет `Memory Error`, размер батча необходимо уменьшить.",
"_____no_output_____"
]
],
[
[
"train_data = TensorDataset(train_inputs, train_masks, train_labels)\ntrain_dataloader = DataLoader(\n train_data,\n sampler=RandomSampler(train_data),\n batch_size=32\n)",
"_____no_output_____"
],
[
"validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)\nvalidation_dataloader = DataLoader(\n validation_data,\n sampler=SequentialSampler(validation_data),\n batch_size=32\n)",
"_____no_output_____"
]
],
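[
[
"As a quick optional sanity check, we can pull a single batch from the loader and look at the tensor shapes (a sketch using the loaders defined above; the variable names are illustrative):",
"_____no_output_____"
]
],
[
[
"# One batch: ids and masks are (batch_size, max_len), labels are (batch_size, 1)\nb_ids, b_mask, b_labs = next(iter(train_dataloader))\nprint(b_ids.shape, b_mask.shape, b_labs.shape)",
"_____no_output_____"
]
],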
[
[
"## Обучение модели",
"_____no_output_____"
],
[
"Теперь когда данные подготовлены, надо написать пайплайн обучения модели.\n\nДля начала мы хотим изменить предобученный BERT так, чтобы он выдавал метки для классификации текстов, а затем файнтюнить его на наших данных. Мы возьмем готовую модификацию BERTа для классификации из pytorch-transformers. Она интуитивно понятно называется `BertForSequenceClassification`. Это обычный BERT с добавленным линейным слоем для классификации.",
"_____no_output_____"
],
[
"Загружаем [BertForSequenceClassification](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L1129):",
"_____no_output_____"
]
],
[
[
"from pytorch_transformers import AdamW, BertForSequenceClassification",
"_____no_output_____"
]
],
[
[
"Аналогичные модели есть и для других задач. Все они построены на основе одной и той же архитектуры и различаются только верхними слоями.",
"_____no_output_____"
]
],
[
[
"from pytorch_transformers import BertForQuestionAnswering, BertForTokenClassification",
"_____no_output_____"
]
],
[
[
"Теперь подробнее рассмотрим процесс файн-тюнинга. Как мы помним, первый токен в каждом предложении - это `[CLS]`. В отличие от скрытого состояния, относящего к обычному слову (не метке `[CLS]`), скрытое состояние относящееся к этой метке должно содержать в себе аггрегированное представление всего предложения, которое дальше будет использоваться для классификации. Таким образом, когда мы скормили предложение в процессе обучения сети, выходом будет вектор со скрытым состоянием, относящийся к метке `[CLS]`. Дополнительный полносвязный слой, который мы добавили, имеет размер `[hidden_state, количество_классов]`, в нашем случае количество классов равно двум. То есть нав выходе мы получим два числа, представляющих классы \"положительная эмоциональная окраска\" и \"отрицательная эмоциональная окраска\".\n\nПроцесс дообучения достаточно дешев. По факту мы тренируем наш верхний слой и немного меняем веса во всех остальных слоях в процессе, чтобы подстроиться под нашу задачу.\n\nИногда некоторые слои специально \"замораживают\" или применяют разные стратегии работы с learning rate, в общем, делают все, чтобы сохранить \"хорошие\" веса в нижних слоях и ускорить дообучение. В целом, замораживание слоев BERTа обычно не сильно сказывается на итоговом качестве, однако надо помнить о тех случаях, когда данные, использованные для предобучения и дообучения очень разные (разные домены или стиль: академическая и разговорная лексика). В таких случаях лучше тренировать все слои сети, не замораживая ничего.",
"_____no_output_____"
],
[
"Загружаем BERT. `bert-base-uncased` - это версия \"base\" (в оригинальной статье рассказывается про две модели: \"base\" vs \"large\"), где есть только буквы в нижнем регистре (\"uncased\").",
"_____no_output_____"
]
],
[
[
"model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2)\nmodel.to(device)",
"_____no_output_____"
]
],
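[
[
"Since layer freezing was mentioned above: here is one possible sketch of it (we do *not* use it in this notebook, where all layers are fine-tuned). The helper switches off gradients for everything except the classification head:",
"_____no_output_____"
]
],
[
[
"def freeze_bert_body(m):\n    # Illustration only: freeze all weights except the classification head\n    for name, param in m.named_parameters():\n        if not name.startswith('classifier'):\n            param.requires_grad = False\n\n# Example usage (deliberately not applied to our model here):\n# freeze_bert_body(model)",
"_____no_output_____"
]
],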
[
[
"Теперь обсудим гиперпараметры для обучения нашей модели. Авторы статьи советуют выбирать `learning rate` `5e-5`, `3e-5`, `2e-5`, а количество эпох не делать слишком большим, 2-4 вполне достаточно. Мы пойдем еще дальше и попробуем дообучить нашу модель всего за одну эпоху.",
"_____no_output_____"
]
],
[
[
"param_optimizer = list(model.named_parameters())\nno_decay = ['bias', 'gamma', 'beta']\noptimizer_grouped_parameters = [\n {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],\n 'weight_decay_rate': 0.01},\n {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],\n 'weight_decay_rate': 0.0}\n]\n\noptimizer = AdamW(optimizer_grouped_parameters, lr=2e-5)\n\n",
"_____no_output_____"
],
[
"from IPython.display import clear_output\n\n# Будем сохранять loss во время обучения\n# и рисовать график в режиме реального времени\ntrain_loss_set = []\ntrain_loss = 0\n\n\n# Обучение\n# Переводим модель в training mode\nmodel.train()\n\n\nfor step, batch in enumerate(train_dataloader):\n # добавляем батч для вычисления на GPU\n batch = tuple(t.to(device) for t in batch)\n # Распаковываем данные из dataloader\n b_input_ids, b_input_mask, b_labels = batch\n \n # если не сделать .zero_grad(), градиенты будут накапливаться\n optimizer.zero_grad()\n \n # Forward pass\n loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)\n\n train_loss_set.append(loss[0].item()) \n \n # Backward pass\n loss[0].backward()\n \n # Обновляем параметры и делаем шаг используя посчитанные градиенты\n optimizer.step()\n\n # Обновляем loss\n train_loss += loss[0].item()\n \n # Рисуем график\n clear_output(True)\n plt.plot(train_loss_set)\n plt.title(\"Training loss\")\n plt.xlabel(\"Batch\")\n plt.ylabel(\"Loss\")\n plt.show()\n \nprint(\"Loss на обучающей выборке: {0:.5f}\".format(train_loss / len(train_dataloader)))\n\n\n# Валидация\n# Переводим модель в evaluation mode\nmodel.eval()\n\nvalid_preds, valid_labels = [], []\n\nfor batch in validation_dataloader: \n # добавляем батч для вычисления на GPU\n batch = tuple(t.to(device) for t in batch)\n \n # Распаковываем данные из dataloader\n b_input_ids, b_input_mask, b_labels = batch\n \n # При использовании .no_grad() модель не будет считать и хранить градиенты.\n # Это ускорит процесс предсказания меток для валидационных данных.\n with torch.no_grad():\n logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)\n\n # Перемещаем logits и метки классов на CPU для дальнейшей работы\n logits = logits[0].detach().cpu().numpy()\n label_ids = b_labels.to('cpu').numpy()\n \n batch_preds = np.argmax(logits, axis=1)\n batch_labels = np.concatenate(label_ids) \n valid_preds.extend(batch_preds)\n valid_labels.extend(batch_labels)\n\nprint(\"Процент правильных предсказаний на валидационной выборке: {0:.2f}%\".format(\n accuracy_score(valid_labels, valid_preds) * 100\n))",
"_____no_output_____"
],
[
"print(\"Процент правильных предсказаний на валидационной выборке: {0:.2f}%\".format(\n accuracy_score(valid_labels, valid_preds) * 100\n))",
"_____no_output_____"
]
],
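[
[
"If you want to keep the fine-tuned weights for later, one simple option is `torch.save` (a sketch; the file name is arbitrary):",
"_____no_output_____"
]
],
[
[
"# Save the fine-tuned weights (file name is illustrative)\ntorch.save(model.state_dict(), 'bert_sentiment.pt')\n\n# Later: model.load_state_dict(torch.load('bert_sentiment.pt'))",
"_____no_output_____"
]
],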
[
[
"# Оценка качества на отложенной выборке",
"_____no_output_____"
],
[
"Качество на валидационной выборке оказалось очень хорошим. Не переобучилась ли наша модель?",
"_____no_output_____"
],
[
"Делаем точно такую же предобработку для тестовых данных, как и в начале ноутбука делали для обучающих данных:",
"_____no_output_____"
]
],
[
[
"tokenized_texts = [tokenizer.tokenize(sent) for sent in test_sentences]\ninput_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]\n\ninput_ids = pad_sequences(\n input_ids,\n maxlen=100,\n dtype=\"long\",\n truncating=\"post\",\n padding=\"post\"\n)",
"_____no_output_____"
]
],
[
[
"Создаем attention маски и приводим данные в необходимый формат:",
"_____no_output_____"
]
],
[
[
"attention_masks = [[float(i>0) for i in seq] for seq in input_ids]\n\nprediction_inputs = torch.tensor(input_ids)\nprediction_masks = torch.tensor(attention_masks)\nprediction_labels = torch.tensor(test_gt)\n\nprediction_data = TensorDataset(\n prediction_inputs,\n prediction_masks,\n prediction_labels\n)\n\nprediction_dataloader = DataLoader(\n prediction_data, \n sampler=SequentialSampler(prediction_data),\n batch_size=32\n)",
"_____no_output_____"
],
[
"model.eval()\ntest_preds, test_labels = [], []\n\nfor batch in prediction_dataloader:\n # добавляем батч для вычисления на GPU\n batch = tuple(t.to(device) for t in batch)\n \n # Распаковываем данные из dataloader\n b_input_ids, b_input_mask, b_labels = batch\n \n # При использовании .no_grad() модель не будет считать и хранить градиенты.\n # Это ускорит процесс предсказания меток для тестовых данных.\n with torch.no_grad():\n logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)\n\n # Перемещаем logits и метки классов на CPU для дальнейшей работы\n logits = logits[0].detach().cpu().numpy()\n label_ids = b_labels.to('cpu').numpy()\n\n # Сохраняем предсказанные классы и ground truth\n batch_preds = np.argmax(logits, axis=1)\n batch_labels = np.concatenate(label_ids) \n test_preds.extend(batch_preds)\n test_labels.extend(batch_labels)",
"_____no_output_____"
],
[
"acc_score = accuracy_score(test_labels, test_preds)\nprint('Процент правильных предсказаний на отложенной выборке составил: {0:.2f}%'.format(\n acc_score*100\n))",
"Процент правильных предсказаний на отложенной выборке составил: 98.12%\n"
],
[
"print('Неправильных предсказаний: {0}/{1}'.format(\n sum(test_labels != test_preds),\n len(test_labels)\n))",
"Неправильных предсказаний: 1282/68051\n"
]
],
[
[
"### Оценка качества работы без fine-tuning",
"_____no_output_____"
]
],
[
[
"model_wo_finetuning = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2)\nmodel_wo_finetuning.cuda()",
"_____no_output_____"
],
[
"model_wo_finetuning.eval()\npreds_wo_finetuning, labels_wo_finetuning = [], []\n\nfor batch in prediction_dataloader:\n batch = tuple(t.to(device) for t in batch)\n b_input_ids, b_input_mask, b_labels = batch\n with torch.no_grad():\n logits = model_wo_finetuning(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)\n\n logits = logits[0].detach().cpu().numpy()\n label_ids = b_labels.to('cpu').numpy()\n\n batch_preds = np.argmax(logits, axis=1)\n batch_labels = np.concatenate(label_ids) \n preds_wo_finetuning.extend(batch_preds)\n labels_wo_finetuning.extend(batch_labels)",
"_____no_output_____"
],
[
"acc_score_wo_finetuning = accuracy_score(labels_wo_finetuning, preds_wo_finetuning)\nprint('Процент правильных предсказаний на отложенной выборке составил: {0:.2f}%'.format(\n acc_score_wo_finetuning*100\n))",
"Процент правильных предсказаний на отложенной выборке составил: 48.57%\n"
]
],
[
[
"Сравним точность и полноту предсказаний:",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import recall_score, precision_score\n\nprint('1 эпоха: точность (precision) {0:.2f}%, полнота (recall) {1:.2f}%'.format(\n precision_score(test_labels, test_preds) * 100,\n recall_score(test_labels, test_preds) * 100\n))\n \nprint('Без дообучения: точность (precision) {0:.2f}%, полнота (recall) {1:.2f}%'.format(\n precision_score(labels_wo_finetuning, preds_wo_finetuning) * 100,\n recall_score(labels_wo_finetuning, preds_wo_finetuning) * 100,\n))",
"1 эпоха: точность (precision) 99.93%, полнота (recall) 96.34%\nБез дообучения: точность (precision) 37.68%, полнота (recall) 2.91%\n"
]
],
[
[
"Итак, мы показали, что предобученный BERT может быстро (всего за одну эпоху) давать хорошее качество при решении задачи анализа эмоциональной окраски текстов. Обратите внимание, что мы не тюнили параметры и использовали сравнительно небольшой размеченный корпус, чтобы получить accuracy больше 98\\%. Тем не менее, если не делать дообучения под конкретную задачу вовсе, получить хорошее качество не удается.\n\nКроме того, мы познакомились с библиотекой `pytorch-transformers`, которая позволяет использовать готовые обертки над моделями, специально созданными для решения той или иной задачи. Использовать BERT при решении повседневных NLP задач совсем нетрудно: не нужно даже вручную скачивать веса модели, библиотека все сделает за вас. Отбросив необходимость чуть-чуть предобработать тексты, сложность применения предобученного BERT'а оказывается не сильно больше, чем импортировать и применить лог.регрессию из `sklearn`.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e72be03041d21c8e93ca5d0a03a1acba7502949c | 20,106 | ipynb | Jupyter Notebook | DNN threshold test.ipynb | ITSEG-MQ/Box-to-drive | 7933a0891d2de9a28adb373a37c9538e83a1f18b | [
"MIT"
] | null | null | null | DNN threshold test.ipynb | ITSEG-MQ/Box-to-drive | 7933a0891d2de9a28adb373a37c9538e83a1f18b | [
"MIT"
] | null | null | null | DNN threshold test.ipynb | ITSEG-MQ/Box-to-drive | 7933a0891d2de9a28adb373a37c9538e83a1f18b | [
"MIT"
] | null | null | null | 36.824176 | 225 | 0.532876 | [
[
[
"# Multi-Sensor Test\n## Angle Threshold - DNN",
"_____no_output_____"
],
[
"### Experiment Aims:\nTest the influence of the threshold of different steer angles on the classification accuracy, and choose a suitable threshold of the steer angle.",
"_____no_output_____"
],
[
"### Experiment Design:\nFor efficiency purposes, this experiment uses DNN to determine whether the current angle_threshold is reasonable. \nIf the accuracy rate of DNN is low, we believe that the current angle_threshold cannot separate left and right well. If the accuracy is high, we can regard the current angle_threshold is reasonable. \nThis experiment uses the KFold method of 4 Folds to reduce the influence of randomness in the separation of the training set and the test set on the results.\n\nThis experiment tested the effect of angle_threshold of 10, 20, 30...90 on the rationality of splitting the data set.",
"_____no_output_____"
],
[
"### Experiment Content:",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport os",
"_____no_output_____"
],
[
"# Read the data, assign a label to each image data according to the threshold, including go, stop, left, right\n# The default speed_threshold=5, angle_threshold=30\n# Finally we generate bounding_box data: X, corresponding label: y\nimport random\ndef process_data(data, speed_threshold, angle_threshold, sign_threshold, data_size):\n stop, go, left, right = split(data, speed_threshold, angle_threshold, sign_threshold, data_size)\n\n print(\"go, stop, left, right\")\n print(len(go), len(stop), len(left), len(right) )\n\n X = np.array(list(go)+list(stop)+list(left)+list(right))\n y = np.array(list(np.ones(len(go)))+list(np.ones(len(stop))*2)+list(np.ones(len(left))*3)+list(np.ones(len(right))*4))\n \n mask = [i for i in range(len(y))]\n random.shuffle(mask)\n\n X=X[mask]\n y=y[mask]\n\n X = np.reshape(X, (len(y),21*5))\n\n print(\"X:\", X.shape)\n print(\"y:\", y.shape)\n return X, y-1\n\n# Separate all data files into four categories: go, stop, left, and right by threshold.\n# The default speed_threshold=5, angle_threshold=30\ndef split(data, speed_threshold=5, angle_threshold=30, sign_threshold=0.5, data_size=200):\n stop_full = data[data[\"vehicle_speed\"]<=speed_threshold]\n\n go = data[data[\"vehicle_speed\"]>speed_threshold]\n go_full = go[go[\"steering_angle_calculated\"]<=angle_threshold]\n\n steer = go[go[\"steering_angle_calculated\"]>angle_threshold]\n left_full = steer[steer[\"steering_angle_sign\"]<=sign_threshold]\n right_full = steer[steer[\"steering_angle_sign\"]>sign_threshold]\n \n go = get_box(go_full[:data_size])\n stop = get_box(stop_full[:data_size])\n left = get_box(left_full[:data_size])\n right = get_box(right_full[:data_size])\n \n return stop, go, left, right\n\n# Take out the bounding boxs, turning angle, speed and other data of all pictures from the data file.\ndef get_box(fulltsv, padding=0):\n maxBox = 21\n\n header = [col for col in fulltsv]\n header.remove('box')\n \n x_full = []\n \n label_dict = {'Car': 1,\n 'VanSUV': 2,\n 'Pedestrian': 3,\n 'Trailer': 4,\n 'Bus': 5,\n 'Truck': 6,\n 'Bicycle': 7,\n 'MotorBiker': 8,\n 'Motorcycle': 9,\n 'Animal': 10,\n 'UtilityVehicle': 11,\n 'CaravanTransporter': 12,\n 'EmergencyVehicle': 13,\n 'Cyclist': 14}\n \n for index, row in fulltsv.iterrows():\n x = []\n \n boxs = eval(row['box'])\n for box in boxs[:maxBox]: # 生成x, 添加已有的box,box上限数量是maxBox\n x.append(box['2d_bbox'] + [label_dict[box['class']]])\n\n for i in range(maxBox - len(boxs)): # 填补空的box\n x.append([padding,padding,padding,padding,padding])\n \n x_full.append(x)\n return np.array(x_full)",
"_____no_output_____"
],
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F",
"_____no_output_____"
],
[
"class CNN_Net(nn.Module):\n def __init__(self):\n super(CNN_Net, self).__init__()\n\n # First 2D convolutional layer, taking in 1 input channel (image),\n # outputting 32 convolutional features, with a square kernel size of 3\n self.conv1 = nn.Conv2d(1, 32, 3, 1)\n # Second 2D convolutional layer, taking in the 32 input layers,\n # outputting 64 convolutional features, with a square kernel size of 3\n self.conv2 = nn.Conv2d(32, 64, 3, 1)\n\n # Designed to ensure that adjacent pixels are either all 0s or all active\n # with an input probability\n self.dropout1 = nn.Dropout2d(0.25)\n self.dropout2 = nn.Dropout2d(0.5)\n\n # First fully connected layer\n self.fc1 = nn.Linear(9216, 128)\n # Second fully connected layer that outputs our 10 labels\n self.fc2 = nn.Linear(128, 4)\n \n def forward(self, x):\n # Pass data through conv1\n x = self.conv1(x)\n # Use the rectified-linear activation function over x\n x = F.relu(x)\n\n x = self.conv2(x)\n x = F.relu(x)\n\n # Run max pooling over x\n x = F.max_pool2d(x, 2)\n # Pass data through dropout1\n x = self.dropout1(x)\n # Flatten x with start_dim=1\n x = torch.flatten(x, 1)\n # Pass data through fc1\n x = self.fc1(x)\n x = F.relu(x)\n x = self.dropout2(x)\n x = self.fc2(x)\n\n # Apply softmax to x\n output = F.log_softmax(x, dim=1)\n return output\n\nclass DNN_Net(nn.Module):\n def __init__(self):\n super(DNN_Net, self).__init__()\n \n self.fc1 = nn.Linear(105, 128)\n self.dropout1 = nn.Dropout(0.25) \n \n self.fc2 = nn.Linear(128, 64)\n self.dropout2 = nn.Dropout(0.25)\n \n self.fc3 = nn.Linear(64, 36)\n self.dropout3 = nn.Dropout(0.25)\n \n self.fc4 = nn.Linear(36, 4) \n\n \n def forward(self, x):\n x = self.fc1(x)\n x = F.relu(x)\n x = self.dropout1(x)\n x = self.fc2(x)\n x = F.relu(x)\n x = self.dropout2(x)\n x = self.fc3(x)\n x = F.relu(x)\n x = self.dropout3(x)\n x = self.fc4(x)\n \n output = F.log_softmax(x, dim=1)\n return output\nmy_nn = DNN_Net()\nprint(my_nn)",
"DNN_Net(\n (fc1): Linear(in_features=105, out_features=128, bias=True)\n (dropout1): Dropout(p=0.25)\n (fc2): Linear(in_features=128, out_features=64, bias=True)\n (dropout2): Dropout(p=0.25)\n (fc3): Linear(in_features=64, out_features=36, bias=True)\n (dropout3): Dropout(p=0.25)\n (fc4): Linear(in_features=36, out_features=4, bias=True)\n)\n"
],
[
"def train(dataloader, model, loss_fn, optimizer, flag):\n size = len(dataloader.dataset)\n for batch, (X, y) in enumerate(dataloader):\n X, y = X.to(device), y.to(device, dtype=torch.long)\n\n # Compute prediction error\n pred = model(X)\n loss = loss_fn(pred, y)\n\n # Backpropagation\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n if flag:\n loss, current = loss.item(), batch * len(X)\n print(f\"loss: {loss:>7f} [{current:>5d}/{size:>5d}]\")\n \ndef test(dataloader, model, loss_fn, flag):\n size = len(dataloader.dataset)\n model.eval()\n test_loss, correct = 0, 0\n with torch.no_grad():\n for X, y in dataloader:\n X, y = X.to(device), y.to(device, dtype=torch.long)\n pred = model(X)\n test_loss += loss_fn(pred, y).item()\n correct += (pred.argmax(1) == y).type(torch.float).sum().item()\n test_loss /= size\n correct /= size\n if flag:\n print(f\"Test Error: \\n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \\n\")\n\ndef fit(model, train_dataloader, test_dataloader, epochs=100):\n model = model.to(device)\n loss_fn = nn.CrossEntropyLoss()\n optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)\n\n epochs = 100\n for t in range(epochs):\n train(train_dataloader, model, loss_fn, optimizer, False)\n test(test_dataloader, model, loss_fn, False)\n return model",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import KFold\n\nfrom torch.utils.data import DataLoader, TensorDataset\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n# device = \"cpu\"\nprint(\"Using {} device\".format(device))\n\n# Use Kfold Train and Test in this function\ndef process(X, y, n_splits=4):\n cm_result = np.zeros((4,4))\n kf = KFold(n_splits=n_splits, shuffle=True, random_state=False)\n for train_index, test_index in kf.split(X):\n # print(\"TRAIN:\", train_index, \"TEST:\", test_index)\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n \n X_train = torch.from_numpy(X_train)\n y_train = torch.from_numpy(y_train)\n X_test = torch.from_numpy(X_test)\n y_test = torch.from_numpy(y_test) \n\n training_data = TensorDataset(X_train, y_train)\n testing_data = TensorDataset(X_test , y_test)\n train_dataloader = DataLoader(training_data, batch_size=64)\n test_dataloader = DataLoader(testing_data, batch_size=64)\n \n model = DNN_Net()\n model = fit(model, train_dataloader, test_dataloader)\n \n pred = model(X_test.to(device))\n# y_pred = pred.argmax(1).detach().numpy()\n y_pred = pred.argmax(1).detach().cpu().numpy()\n y_test = y_test.numpy()\n\n cm = confusion_matrix(y_test, y_pred)\n cm_rate = cm/cm.sum(axis=1)\n cm_result += cm_rate\n \n correct = sum(y_pred==y_test)/len(y_test)\n print(f\"Test Accuracy: {(100*correct):>0.1f}%\")\n return cm_result/n_splits",
"Using cuda device\n"
]
],
[
[
"This experiment tried the influence of angle_threshold of 10, 20, 30...90 on the classification results.\n\nThe classification accuracy is expressed in the form of a confusion matrix. \nThe diagonal line from top left to bottom right corresponds to the accuracy of the four categories. The four types are go, stop, left, and right. \nThe data in each row represents the probability that the data which is actually belonged to the row is classified and classified into the corresponding column by the classifier.",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv(\"full_info.tsv\", sep =\"\\t\")\n\nfor i in range(1,10):\n speed_threshold = 5\n angle_threshold = i*10\n sign_threshold = 0.5\n \n print(\"angle_threshold\", angle_threshold)\n \n data_size = 200\n X, y = process_data(data, speed_threshold, angle_threshold, sign_threshold, data_size)\n X = X.astype(np.float32) \n y = y.astype(np.long) \n \n n_splits = 4\n print(process(X, y, 4))\n print()",
"angle_threshold 10\ngo, stop, left, right\n200 200 200 200\nX: (800, 105)\ny: (800,)\nTest Accuracy: 41.0%\n[[0.50025253 0.12082591 0.24501594 0.13461275]\n [0.17687015 0.46573084 0.2025837 0.16768295]\n [0.29662568 0.19569222 0.29508399 0.2279755 ]\n [0.27784365 0.19255051 0.24583091 0.28598122]]\n\nangle_threshold 20\ngo, stop, left, right\n200 200 200 200\nX: (800, 105)\ny: (800,)\nTest Accuracy: 34.5%\n[[0.46941176 0.16427771 0.20454182 0.16370532]\n [0.22948802 0.38384093 0.20494998 0.17736936]\n [0.34633987 0.16151963 0.28045218 0.20821476]\n [0.2579085 0.20104049 0.27526411 0.26763943]]\n\nangle_threshold 30\ngo, stop, left, right\n200 200 200 200\nX: (800, 105)\ny: (800,)\nTest Accuracy: 43.5%\n[[0.50309591 0.13892365 0.17974507 0.18768577]\n [0.09660827 0.54964539 0.1900932 0.18453901]\n [0.22746129 0.19743429 0.33220669 0.2676773 ]\n [0.25700691 0.14465999 0.26567708 0.34664612]]\n\nangle_threshold 40\ngo, stop, left, right\n200 200 200 200\nX: (800, 105)\ny: (800,)\nTest Accuracy: 38.0%\n[[0.51174987 0.13867739 0.14659646 0.21750259]\n [0.17992751 0.41972135 0.17201618 0.24267612]\n [0.21671751 0.23108142 0.23874389 0.32298692]\n [0.24562721 0.1671186 0.16498779 0.42049275]]\n\nangle_threshold 50\ngo, stop, left, right\n200 200 200 200\nX: (800, 105)\ny: (800,)\nTest Accuracy: 46.0%\n[[0.48977667 0.12451119 0.15991019 0.22568311]\n [0.18180165 0.51397234 0.14431391 0.16151077]\n [0.2389659 0.22555869 0.28095238 0.2609949 ]\n [0.20472554 0.16961541 0.21097013 0.41900794]]\n\nangle_threshold 60\ngo, stop, left, right\n200 200 200 200\nX: (800, 105)\ny: (800,)\nTest Accuracy: 38.5%\n[[0.53958394 0.19058201 0.19372583 0.10136111]\n [0.15747154 0.54021164 0.15082907 0.16021189]\n [0.232728 0.22026455 0.36617686 0.20140571]\n [0.27340746 0.25931217 0.21638639 0.268845 ]]\n\nangle_threshold 70\ngo, stop, left, right\n200 200 184 175\nX: (759, 105)\ny: (759,)\nTest Accuracy: 40.2%\n[[0.53213436 0.12211439 0.21714956 0.17726946]\n [0.15367711 0.46283385 0.25507886 0.18476464]\n [0.23507343 0.15182971 0.42786727 0.17450313]\n [0.23668921 0.1285779 0.31215884 0.27552553]]\n\nangle_threshold 80\ngo, stop, left, right\n200 200 162 151\nX: (713, 105)\ny: (713,)\nTest Accuracy: 39.9%\n[[0.49546958 0.17628368 0.26288763 0.14642002]\n [0.2082672 0.52895008 0.2415061 0.11353736]\n [0.15228175 0.23732093 0.39065382 0.16215541]\n [0.20406746 0.16947943 0.26120577 0.23562249]]\n\nangle_threshold 90\ngo, stop, left, right\n200 200 148 138\nX: (686, 105)\ny: (686,)\nTest Accuracy: 48.5%\n[[0.67093204 0.12470907 0.21635579 0.07562859]\n [0.2694421 0.53437831 0.1568127 0.12585022]\n [0.26216026 0.11710279 0.33700919 0.17208173]\n [0.30153852 0.10790839 0.23651758 0.16164137]]\n\n"
]
],
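[
[
"# Added illustration (not part of the original experiment): the diagonal of the\n# row-normalized confusion matrix returned by process() holds the per-class accuracies.\n# `cm` below is a hypothetical 4x4 result in the same format as printed above.\ncm = np.array([[0.49, 0.12, 0.16, 0.23],\n               [0.18, 0.51, 0.14, 0.16],\n               [0.24, 0.23, 0.28, 0.26],\n               [0.20, 0.17, 0.21, 0.42]])\nfor name, acc in zip(['go', 'stop', 'left', 'right'], np.diag(cm)):\n    print(f'{name}: {acc:.2f}')",
"_____no_output_____"
]
],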
[
[
"### Experiment Analysis:\nThe data in the third row and the third column and the data in the fourth row and fourth column of the confusion matrix correspond to the classification accuracy of the left class and the right class, respectively. \nAccording to the results of different angle_threshold, we can believe that when angle_threshold ∈ [20,70], the classification accuracy of the two types does not change a lot. \nWhen angle_threshold<20, the accuracy of the classification may deteriorate due to insufficient data discrimination.\nWhen angle_threshold>70, it may be that the total amount of data becomes smaller, and the final classification accuracy becomes worse.\n\nFor angle_threshold ∈ [20,70], we can find that the accuracy rate is highest when angle_threshold=50, which is more appropriate angle_threshold",
"_____no_output_____"
],
[
"### Expeiment Conclusion:\n50 should be a more appropriate angle_threshold. However, it does not improve the final accuracy rate much, and the accureate rate is still lower than 60%.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
e72be0aba9db152103c43e4e4b61ffe11e3e5c60 | 8,228 | ipynb | Jupyter Notebook | .ipynb_checkpoints/gitt_functions-checkpoint.ipynb | vwhu/gitt_toolkit | 77cb9c6f9248a2817fe5de112f12f0dde2df5d5b | [
"MIT"
] | null | null | null | .ipynb_checkpoints/gitt_functions-checkpoint.ipynb | vwhu/gitt_toolkit | 77cb9c6f9248a2817fe5de112f12f0dde2df5d5b | [
"MIT"
] | null | null | null | .ipynb_checkpoints/gitt_functions-checkpoint.ipynb | vwhu/gitt_toolkit | 77cb9c6f9248a2817fe5de112f12f0dde2df5d5b | [
"MIT"
] | null | null | null | 33.044177 | 111 | 0.448469 | [
[
[
"import pandas as pd\nimport numpy as np\nimport scipy\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\n\ndef polynomial_filter(x, y, window,step_size, polyorder):\n n = window//2\n x_data = []\n y_data = []\n y_model = []\n dy_model = []\n d2y_model = []\n rmse = []\n coeff_matrix = np.zeros(\n (\n int((polyorder + 1)), len(range(n,int(len(x)),step_size))\n )\n )\n idx = 0\n \n for i in range(n,int(len(x)),step_size):\n x_range = x[(i-n):(i+n)]\n y_range = y[(i-n):(i+n)]\n coeffs = np.polyfit(x_range, y_range, polyorder)\n ffit = np.poly1d(coeffs)\n fderiv = ffit.deriv()\n fderiv2 = fderiv.deriv()\n ym = np.polyval(ffit, x_range)\n dym = np.polyval(fderiv, x_range)\n d2ym = np.polyval(fderiv2, x_range)\n coeff_matrix[:,idx] = coeffs\n idx = idx + 1\n \n if i == n:\n for ind in range(0,int(n-step_size),step_size):\n x_data.append(x_range[ind])\n y_data.append(y_range[ind])\n y_model.append(ym[ind])\n dy_model.append(dym[ind])\n d2y_model.append(d2ym[ind])\n rmse.append(np.sqrt(np.mean((ym - y_range)**2)))\n x_data.append(x_range[n])\n y_data.append(y_range[n])\n y_model.append(ym[n])\n dy_model.append(dym[n])\n d2y_model.append(d2ym[n])\n rmse.append(np.sqrt(np.mean((ym-y_range)**2))) \n if int(len(x_range)) - i < n:\n for ind2 in range(n+step_size,int(n-step_size),step_size):\n x_data.append(x_range[ind2])\n y_data.append(y_range[ind2])\n y_model.append(ym[ind2])\n dy_model.append([dym[ind2]])\n d2y_model.append(d2ym[ind2])\n rmse.append(np.sqrt(np.mean((ym-y_range)**2))) \n elif int(len(x_range)) - i < n - step_size:\n break\n\n data = pd.DataFrame() \n data['x_data'] = pd.Series(data = x_data)\n data['y_data'] = pd.Series(data = y_data)\n data['y_model'] = pd.Series(data = y_model)\n data['dydx'] = pd.Series(data = dy_model)\n data['d2ydx2'] = pd.Series(data = d2y_model)\n data['RMSE'] = pd.Series(data = rmse)\n \n rmse_avg = np.sqrt(np.mean((data['RMSE'])**2))\n avg_error = np.mean(data['RMSE'])\n\n data['avg_error'] = pd.Series(data=avg_error)\n data['rmse_avg'] = pd.Series(data=rmse_avg)\n data['window'] = pd.Series(data=window)\n data['stepsize'] = pd.Series(step_size)\n \n return data, coeff_matrix",
"_____no_output_____"
],
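[
"# Added usage sketch (assumption: synthetic data; the window/step/polyorder values\n# are illustrative only). Demonstrates polynomial_filter() on a noisy sine wave.\nx = np.linspace(0, 10, 500)\ny = np.sin(x) + 0.05 * np.random.randn(len(x))\nsmoothed, coeffs = polynomial_filter(x, y, 50, 5, 3)\nsmoothed[['x_data', 'y_model', 'dydx']].head()",
"_____no_output_____"
],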
[
"# Cleaning up Kevin's Data\n\nimport pandas as pd\nimport numpy as np\n\ndef fixing_time_str(df):\n time = np.zeros(len(df['TestTime']))\n\n for i in range(len(df['TestTime'])):\n if '-' not in df['TestTime'][i]:\n hr = float(df['TestTime'][i].split(':')[0]) * 3600\n mins = float(df['TestTime'][i].split(':')[1]) * 60\n sec = float(df['TestTime'][i].split(':')[2])\n time[i] = hr + mins + sec\n if '-' in gitt1['TestTime'][i]:\n #print(gitt1['TestTime'][i].split('-'))\n day = float(df['TestTime'][i].split('-')[0])*86400\n idx2 = df['TestTime'][i].split('-')[1]\n hr = float(idx2.split(':')[0]) * 3600\n mins = float(idx2.split(':')[1]) * 60\n sec = float(idx2.split(':')[2])\n time[i] = day + hr + mins+ sec\n \n df['Time(s)'] = pd.Series(data = time)\n \n return df",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\n\ndef pulse_separations(df, current):\n d = {}\n workingcurrent = np.argwhere(current != 0)\n startidx = int(workingcurrent[0])\n cycle = 1\n\n for i in range(len(workingcurrent)-1):\n if workingcurrent[i + 1] - workingcurrent[i] != 1:\n endidx = int(workingcurrent[i +1] - 1)\n d[cycle] = df[startidx:endidx]\n d[cycle] = d[cycle].reset_index(drop = True)\n\n startidx = int(workingcurrent[i +1])\n cycle = cycle + 1\n \n return d\n ",
"_____no_output_____"
],
[
"import pandas as pd\n\ndef transient_pulses(d):\n d_trans = {}\n dkeys = d.keys()\n\n for key in dkeys:\n mask = d[key]['Current/mA'] != 0\n d_trans[key] = d[key][mask]\n d_trans[key] = d_trans[key].reset_index(drop = True)\n\n d_trans[key]['sqrt_time_diff'] = np.sqrt(d_trans[key]['Time(s)'] - d_trans[key]['Time(s)'][0])\n \n return d_trans",
"_____no_output_____"
],
[
"def find_linear_fit(x, y, window)\n window = 240\n n = window//2\n coefs = []\n mses = []\n intcs = []\n starts = []\n ends = []\n\n for i in range(n,int(len(x)-n)):\n x_range = np.array(x[(i-n):(i+n)]).reshape(-1, 1)\n y_range = np.array(y[(i-n):(i+n)]).reshape(-1, 1)\n\n lin_model = LinearRegression()\n lin_model.fit(x_range, y_range)\n y_pred = lin_model.predict(y_range)\n coef = lin_model.coef_\n intc = lin_model.intercept_\n mse = mean_squared_error(y_range, y_pred)\n startidx = int(i - n)\n endidx = int(i + n)\n\n starts.append(startidx)\n ends.append(endidx)\n coefs.append(coef.flatten()[0])\n intcs.append(intc.flatten()[0])\n mses.append(mse)\n\n data = pd.DataFrame()\n data['start_idx'] = pd.Series(data = starts)\n data['end_idx'] = pd.Series(data = ends)\n data['coefs'] = pd.Series(data = coefs)\n data['intcs'] = pd.Series(data = intcs)\n data['mses'] = pd.Series(data = mses)\n \n return data",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72c09afcd154014c3f1b9a49cd1f6c6610a5a20 | 5,110 | ipynb | Jupyter Notebook | cross/vectorize_both.ipynb | sudarshan85/phd_code | e24c3c9749b54d2d963057128856eef03add8344 | [
"Apache-2.0"
] | null | null | null | cross/vectorize_both.ipynb | sudarshan85/phd_code | e24c3c9749b54d2d963057128856eef03add8344 | [
"Apache-2.0"
] | null | null | null | cross/vectorize_both.ipynb | sudarshan85/phd_code | e24c3c9749b54d2d963057128856eef03add8344 | [
"Apache-2.0"
] | null | null | null | 25.422886 | 116 | 0.577886 | [
[
[
"# Cross Training with MIMIC and MLH",
"_____no_output_____"
],
[
"## Imports & Inits",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import pdb\nimport pandas as pd\nimport pickle\nimport numpy as np\nnp.set_printoptions(precision=4)\n\nfrom tqdm import tqdm_notebook as tqdm\nfrom ast import literal_eval\nfrom pathlib import Path\nfrom scipy import stats\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"darkgrid\")\n%matplotlib inline",
"_____no_output_____"
],
[
"mimic_path = Path('mimic_data')\nmlh_path = Path('mlh_data')\npath = Path('data')\nworkdir = path/'workdir'\nvectordir = workdir/'vectordir'",
"_____no_output_____"
],
[
"mimic_notes_df = pd.read_csv(mimic_path/'notes_all_proc.csv', usecols=['hadm_id', 'note', 'imi_adm_label'])\nmimic_notes_df = mimic_notes_df[mimic_notes_df['imi_adm_label'] != -1].reset_index(drop=True)\n\nmlh_notes_df = pd.read_csv(mlh_path/'notes_all_proc.csv', usecols=['hadm_id', 'note', 'imi_adm_label'])\nmlh_notes_df = mlh_notes_df[mlh_notes_df['imi_adm_label'] != -1].reset_index(drop=True)\n\nmimic_notes_df.shape, mlh_notes_df.shape",
"_____no_output_____"
],
[
"mimic2mlh_vec = TfidfVectorizer(ngram_range=(1,2), max_features=60_000)\n\nx_train_mimic = mimic2mlh_vec.fit_transform(mimic_notes_df['note'])\nx_test_mlh = mimic2mlh_vec.transform(mlh_notes_df['note'])\n\ny_train_mimic = mimic_notes_df['imi_adm_label']\ny_test_mlh = mlh_notes_df['imi_adm_label']\n\nwith open(vectordir/'mimic2mlh.pkl', 'wb') as f:\n pickle.dump(mimic2mlh_vec, f)\n pickle.dump(x_train_mimic, f)\n pickle.dump(x_test_mlh, f)\n pickle.dump(y_train_mimic, f)\n pickle.dump(y_test_mlh, f)",
"_____no_output_____"
],
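[
"# Added sketch (assumption): reloading the objects saved above; they must be\n# unpickled in the same order in which they were dumped.\nwith open(vectordir/'mimic2mlh.pkl', 'rb') as f:\n    vec = pickle.load(f)\n    x_tr = pickle.load(f)\n    x_te = pickle.load(f)\n    y_tr = pickle.load(f)\n    y_te = pickle.load(f)\nx_tr.shape, x_te.shape",
"_____no_output_____"
],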
[
"mlh2mimic_vec = TfidfVectorizer(ngram_range=(1,2), max_features=60_000)\n\nx_train_mlh = mlh2mimic_vec.fit_transform(mlh_notes_df['note'])\nx_test_mimic = mlh2mimic_vec.transform(mimic_notes_df['note'])\n\ny_train_mlh = mlh_notes_df['imi_adm_label']\ny_test_mimic = mimic_notes_df['imi_adm_label']\n\nwith open(vectordir/'mlh2mimic.pkl', 'wb') as f:\n pickle.dump(mlh2mimic_vec, f)\n pickle.dump(x_train_mlh, f)\n pickle.dump(x_test_mimic, f)\n pickle.dump(y_train_mlh, f)\n pickle.dump(y_test_mimic, f)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72c1c0dd24b5b84bb44370720a37c41fe011c4b | 13,585 | ipynb | Jupyter Notebook | 1-foundations/python/fsnd01_07_classes_text.ipynb | Ahmed9914/udacity-fsnd | 5c45158a1909126917299fb330bffa2319171104 | [
"MIT"
] | null | null | null | 1-foundations/python/fsnd01_07_classes_text.ipynb | Ahmed9914/udacity-fsnd | 5c45158a1909126917299fb330bffa2319171104 | [
"MIT"
] | null | null | null | 1-foundations/python/fsnd01_07_classes_text.ipynb | Ahmed9914/udacity-fsnd | 5c45158a1909126917299fb330bffa2319171104 | [
"MIT"
] | null | null | null | 42.058824 | 490 | 0.621126 | [
[
[
"# Lesson 07. Classes: send texts\n\n**Udacity Full Stack Web Developer Nanodegree program**\n\nPart 01. Programming fundamentals and the web\n\n[Programming foundations with Python](https://www.udacity.com/course/programming-foundations-with-python--ud036)\n\nBrendon Smith\n\nbr3ndonland",
"_____no_output_____"
],
[
"## 01. Course Map\n\nWe're now moving on to the second part of Lesson 2: using classes.\n\n## 02. Send Text Messages (Story)\n\nWe will design a system for automated sending of text messages. Cool!\n\n## 03. Send Text Messages (Output)\n\nThe output sends automated text messages to his phone.",
"_____no_output_____"
],
[
"## 04. Introducing Twilio\n\n`twilio` is a Python package that sends text messages with Python code. It does not come with the Python standard library.\n\nThe PyPI (Python Package Index) ranking site is down. According to [this reddit](https://www.reddit.com/r/Python/comments/3bo3wc/what_happened_to_pypirankinginfo/), the owner has had trouble maintaining the site. As alternatives, check out PyPI directly, or the [Python 3 Wall of Superpowers](https://python3wos.appspot.com/) (formerly Wall of Shame). ",
"_____no_output_____"
],
[
"## 05. Download Twilio\n\nI first tried installing from the command line through conda (to keep package management centralized within my Anaconda distribution). I got an error:\n\n```bash\nbrendon-smiths-macbook:~ brendonsmith$ conda install twilio\nFetching package metadata .........\n\nPackageNotFoundError: Packages missing in current channels:\n\n - twilio\n\nWe have searched for the packages in the following channels:\n\n - https://repo.continuum.io/pkgs/free/osx-64\n - https://repo.continuum.io/pkgs/free/noarch\n - https://repo.continuum.io/pkgs/r/osx-64\n - https://repo.continuum.io/pkgs/r/noarch\n - https://repo.continuum.io/pkgs/pro/osx-64\n - https://repo.continuum.io/pkgs/pro/noarch\n \n```\n\nTwilio is not in the conda library. Some users have the twilio package in their anaconda cloud accounts, which you can use.\n\n> Troubleshooting\n> \n> Anaconda\n> \n> If you have Anaconda installed on your Mac or PC, it comes with its own version of Python that is siloed from the version of Python that is included with IDLE. You may have installed Twilio to Anaconda's Python instead of IDLE's Python -- to find out whether or not you've done that, we suggest opening Spyder, Anaconda's alternative to IDLE, and trying an `import twilio` there. If that works, then feel free to do your programming for this assignment in Spyder instead of IDLE.\n\nI searched \"twilio python\" and found the [twilio SMS and MMS python quickstart video](https://www.twilio.com/docs/quickstart/python), and the associated [guide](https://www.twilio.com/docs/guides/how-to-send-sms-messages-in-python). It was helpful. \n\n**He had his account_sid, auth_token, and phone number stored in environment variables with `os.environ[]`.** \n\nHe also made a demo message about the Millenium Falcon in Star Wars!\n\n### [install twilio helper library with pip](https://www.twilio.com/docs/libraries/python)\n\nRocked it. Looks like it integrated with my Anaconda Python distribution.\n\n```bash\n\nbrendon-smiths-macbook:~ brendonsmith$ pip install twilio\nCollecting twilio\n Downloading twilio-6.5.2-py2.py3-none-any.whl (749kB)\n 100% |████████████████████████████████| 757kB 807kB/s\nRequirement already satisfied: requests>=2.0.0; python_version >= \"3.0\" in ./anaconda3/lib/python3.6/site-packages (from twilio)\nRequirement already satisfied: six in ./anaconda3/lib/python3.6/site-packages (from twilio)\nCollecting pysocks; python_version >= \"3.0\" (from twilio)\n Downloading PySocks-1.6.7-py3-none-any.whl\nCollecting PyJWT>=1.4.2 (from twilio)\n Downloading PyJWT-1.5.2-py2.py3-none-any.whl\nRequirement already satisfied: pytz in ./anaconda3/lib/python3.6/site-packages (from twilio)\nInstalling collected packages: pysocks, PyJWT, twilio\nSuccessfully installed PyJWT-1.5.2 pysocks-1.6.7 twilio-6.5.2\n\n```\n\n\nTo double check anaconda integration, I opened spyder and ran:\n\n```python\n\nimport twilio\nprint(twilio.__version__)\n\n```\n\n 6.5.2\n\n\n## 06. Quiz: Twilio Download Feedback\n\nDownloaded successfully",
"_____no_output_____"
],
[
"## 07. Setting Up Our Code\n\n## 08. Registering with Twilio\n\nGot phone number (415) 903-8062\n\n## 09. Running Our Code\n\n[Python Helper Library SMS Test](https://www.twilio.com/docs/libraries/python)\n\nThe account_sid is like a username, and the auth_token is like a password.\n\n```python\n\nfrom twilio.rest import Client\n\n# Your Account SID from twilio.com/console\naccount_sid = \"ACbfc39c298c6e430c00c68cc232aa752b\"\n# Your Auth Token from twilio.com/console\nauth_token = \"efc269ed6fa132a68789640ca0fc139d\"\n\nclient = Client(account_sid, auth_token)\n\nmessage = client.messages.create(\n to=\"+18578950514\", # my mobile #\n from_=\"+14159038062\", # twilio #\n body=\"Hello from Python!\")\n\nprint(message.sid)\n\n```\n\n**It worked!**",
"_____no_output_____"
],
[
"## 10. Quiz: Python Keyword `From`\n\n*What does `from` do in Python?* -> accesses part of a Python module.\n\nFrom Python documentation:\n\n> As your program gets longer, you may want to split it into several files for easier maintenance. You may also want to use a handy function that you’ve written in several programs without copying its definition into each program.\n> \n> To support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).\n> \n> Note that in general the practice of importing * from a module or package is frowned upon, since it often causes poorly readable code. However, it is okay to use it to save typing in interactive sessions.\n\nFrom [Stack Overflow](https://stackoverflow.com/questions/710551/import-module-or-from-module-import):\n\n<div class=\"post-text\" itemprop=\"text\">\n<p>The difference between <code>import module</code> and <code>from module import foo</code> is mainly subjective. Pick the one you like best and be consistent in your use of it. Here are some points to help you decide.</p>\n\n<p><code>import module</code></p>\n\n<ul>\n<li><strong>Pros:</strong>\n\n<ul>\n<li>Less maintenance of your <code>import</code> statements. Don't need to add any additional imports to start using another item from the module</li>\n</ul></li>\n<li><strong>Cons:</strong>\n\n<ul>\n<li>Typing <code>module.foo</code> in your code can be tedious and redundant (tedium can be minimized by using <code>import module as mo</code> then typing <code>mo.foo</code>)</li>\n</ul></li>\n</ul>\n\n<p><code>from module import foo</code></p>\n\n<ul>\n<li><strong>Pros:</strong>\n\n<ul>\n<li>Less typing to use <code>foo</code></li>\n<li>More control over which items of a module can be accessed</li>\n</ul></li>\n<li><strong>Cons:</strong>\n\n<ul>\n<li>To use a new item from the module you have to update your <code>import</code> statement</li>\n<li>You lose context about <code>foo</code>. For example, it's less clear what <code>ceil()</code> does compared to <code>math.ceil()</code></li>\n</ul></li>\n</ul>\n\n<p>Either method is acceptable, but <strong>don't</strong> use <code>from module import *</code>. </p>\n\n<p>For any reasonable large set of code, if you <code>import *</code> you will likely be cementing it into the module, unable to be removed. This is because it is difficult to determine what items used in the code are coming from 'module', making it easy to get to the point where you think you don't use the <code>import</code> any more but it's extremely difficult to be sure.</p>\n </div>\n \n\n[tutorialspoint](https://www.tutorialspoint.com/python/python_modules.htm)\n> The `from...import` Statement\n> Python's from statement lets you import specific attributes from a module into the current namespace. The from...import has the following syntax −\n> \n> `from modname import name1[, name2[, ... 
nameN]]`\n> For example, to import the function fibonacci from the module fib, use the following statement −\n> \n> `from fib import fibonacci`\n> This statement does not import the entire module fib into the current namespace; it just introduces the item fibonacci from the module fib into the global symbol table of the importing module.\n> \n> The `from...import *` Statement:\n> It is also possible to import all names from a module into the current namespace by using the following import statement −\n> \n> `from modname import *`\n> This provides an easy way to import all the items from a module into the current namespace; however, this statement should be used sparingly.\n \n## 11. Investigating the Code\n\nResources from Udacity:\n\n<div class=\"ltr\"><div class=\"ureact-markdown--markdown--3IhZa ureact-markdown \"><p>1) <a href=\"http://www.tutorialspoint.com/python/python_modules.htm\" target=\"_blank\">How does Python Keyword <b>from</b> work</a><br>\n2) <a href=\"https://github.com/twilio/twilio-python\" target=\"_blank\">Read the actual <b>Twilio</b> code on Github</a></p>\n</div></div>\n\nalready checked out the GitHub page.\n\n## 12. Where Does Twilio Come From?\n\n`TwilioRestClient` is a class within the `twilio.rest` module.",
"_____no_output_____"
],
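[
"A minimal illustration of the two import styles discussed above (an added example, not from the original lesson):\n\n```python\nimport math\nprint(math.ceil(2.1))  # explicit namespace\n\nfrom math import ceil\nprint(ceil(2.1))  # shorter, but the origin of ceil is less obvious\n```",
"_____no_output_____"
],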
[
"## 13. Connecting Turtle, Twilio and Classes\n\n**Classes can be used as blueprints to build repeated instances, or objects.** The activities performed in each instance are defined inside the class.\n\n<img src =\"img/classes-objects-example.png\" alt=\"Turtle graphics and Twilio as examples of classes and objects\" width=\"75%\">\n\n<img src =\"img/classes-objects.png\" alt=\"the relationship between classes and objects\" width=\"75%\">",
"_____no_output_____"
],
[
"## 14. Send Text Messages Mini-Project\n\nAnswer the following questions on the discussion forum:\n\n1. *What is a class?*\n2. *What is an instance of a class?*\n3. *Thus far we have compared the class to a blueprint. Can you think of another analogy to explain classes?*\n\nMy answers:\n\n1. *What is a class?* -> a class is a collection of functions written in Python code.\n2. *What is an instance of a class?* -> an instance is an specific adaptation of the code functions available in a class.\n3. *Thus far we have compared the class to a blueprint. Can you think of another analogy to explain classes?* -> it's like a template that you can fill in.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e72c2887daba9caa46e5d3c3a5d127885580720a | 287,245 | ipynb | Jupyter Notebook | queue_imbalance/logistic_regression/queue_imbalance-9061.ipynb | vevurka/mt-lob | 70989bcb61f4cfa7884437e1cff2db2454b3ceff | [
"MIT"
] | 2 | 2019-04-17T02:19:22.000Z | 2019-05-23T12:14:59.000Z | queue_imbalance/logistic_regression/queue_imbalance-9061.ipynb | vevurka/mt-lob | 70989bcb61f4cfa7884437e1cff2db2454b3ceff | [
"MIT"
] | 10 | 2020-01-28T22:32:13.000Z | 2021-09-08T00:41:37.000Z | queue_imbalance/logistic_regression/queue_imbalance-9061.ipynb | vevurka/mt-lob | 70989bcb61f4cfa7884437e1cff2db2454b3ceff | [
"MIT"
] | 6 | 2018-12-05T22:17:05.000Z | 2020-09-03T03:00:50.000Z | 470.893443 | 47,364 | 0.925764 | [
[
[
"# Testing of queue imbalance for stock 9091\n\nOrder of this notebook is as follows:\n1. [Data](#Data)\n2. [Data visualization](#Data-visualization)\n3. [Tests](#Tests)\n4. [Conclusions](#Conclusions)\n\nGoal is to implement queue imbalance predictor from [[1]](#Resources).",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport warnings\n\nimport matplotlib.dates as md\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom lob_data_utils import lob\nfrom sklearn.metrics import roc_curve, roc_auc_score\n\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"## Data\n\nMarket is open between 8-16 on every weekday. We decided to use data from only 9-15 for each day.\n\n### Test and train data\n\nFor training data we used data from 2013-09-01 - 2013-11-16:\n\n* 0901\n* 0916\n* 1001\n* 1016\n* 1101\n\nWe took 75% of this data (randomly), the rest is the test data.",
"_____no_output_____"
]
],
[
[
"df, df_test = lob.load_prepared_data('9061', data_dir='../data/prepared/', length=None)\ndf.head()",
"Len of data for 9061 is 17245\nTraining set length for 9061: 13796\nTesting set length for 9061: 3449\n"
]
],
[
[
"## Data visualization",
"_____no_output_____"
]
],
[
[
"df['sum_buy_bid'].plot(label='total size of buy orders', style='--')\ndf['sum_sell_ask'].plot(label='total size of sell orders', style='-')\nplt.title('Summed volumens for ask and bid lists')\nplt.xlabel('Time')\nplt.ylabel('Whole volume')\nplt.legend()",
"_____no_output_____"
],
[
"df[['bid_price', 'ask_price', 'mid_price']].plot(style='.')\nplt.legend()\nplt.title('Prices')\nplt.xlabel('Time')\nplt.ylabel('Price')",
"_____no_output_____"
],
[
"sns.jointplot(x=\"mid_price\", y=\"queue_imbalance\", data=df.loc[:, ['mid_price', 'queue_imbalance']], kind=\"kde\")\nplt.title('Density')\nplt.plot()",
"_____no_output_____"
],
[
"df['mid_price_indicator'].plot('kde')\nplt.legend()\nplt.xlabel('Mid price indicator')\nplt.title('Mid price indicator density')",
"_____no_output_____"
],
[
"df['queue_imbalance'].plot('kde')\nplt.legend()\nplt.xlabel('Queue imbalance')\nplt.title('Queue imbalance density')",
"_____no_output_____"
]
],
[
[
"## Tests\n\nWe use logistic regression to predict `mid_price_indicator`.\n\n### Mean square error \n\nWe calculate residual $r_i$:\n\n$$ r_i = \\hat{y_i} - y_i $$\n\nwhere \n\n$$ \\hat{y}(I) = \\frac{1}{1 + e −(x_0 + Ix_1 )}$$\n\nCalculating mean square residual for all observations in the testing set is also useful to assess the predictive power.\n\nThe predective power of null-model is 25%.",
"_____no_output_____"
]
],
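[
[
"# Added sketch: the null model predicts probability 0.5 for every observation;\n# for 0/1 labels, (y - 0.5)**2 is always 0.25, hence the 25% figure above.\nnull_mse = ((df_test['mid_price_indicator'] - 0.5) ** 2).mean()\nprint('Null-model mean square error:', null_mse)",
"_____no_output_____"
]
],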
[
[
"reg = lob.logistic_regression(df, 0, len(df))\n\nprobabilities = reg.predict_proba(df_test['queue_imbalance'].values.reshape(-1,1))\nprobabilities = [p1 for p0, p1 in probabilities]\nerr = ((df_test['mid_price_indicator'] - probabilities) ** 2).mean()\n\npredictions = reg.predict(df_test['queue_imbalance'].values.reshape(-1, 1))\n\nprint('Mean square error is', err)",
"Mean square error is 0.30136827396201277\n"
]
],
[
[
"#### Logistic regression fit curve",
"_____no_output_____"
]
],
[
[
"plt.plot(df_test['queue_imbalance'].values, \n lob.sigmoid(reg.coef_[0] * df_test['queue_imbalance'].values + reg.intercept_))\nplt.title('Logistic regression fit curve')\nplt.xlabel('Imbalance')\nplt.ylabel('Prediction')",
"_____no_output_____"
]
],
[
[
"#### ROC curve\n\nFor assessing the predectivity power we can calculate ROC score.",
"_____no_output_____"
]
],
[
[
"a, b, c = roc_curve(df_test['mid_price_indicator'], predictions)\nlogit_roc_auc = roc_auc_score(df_test['mid_price_indicator'], predictions)\nplt.plot(a, b, label='predictions (area {})'.format(logit_roc_auc))\nplt.plot([0, 1], [0, 1], color='navy', linestyle='--')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.legend()",
"_____no_output_____"
],
[
"st = 0\nend = len(df)\nplt.plot(df_test.index[st:end], predictions[st:end], 'ro', label='prediction')\nplt.plot(df_test.index[st:end], probabilities[st:end], 'g.', label='probability')\nplt.plot(df_test.index[st:end], df_test['mid_price_indicator'].values[st:end], 'b.', label='mid price')\nplt.xticks(rotation=25)\nplt.legend(loc=1)\nplt.xlabel('Time')\nplt.ylabel('Mid price prediction')",
"_____no_output_____"
]
],
[
[
"## Conclusions\n\nLooking at mid_price_indicator density plot it seems that bid and ask queues are balanced. The same conclusion we can get from queue imbalance density plot - most often the queues are balanced.\n\n\n* predicted probability vs known indicator: 0.248, so it's slightly better than null-model (0.25). \n* area under ROC curve is 0.534, for null-model it's 0.50.\n\nWe didn't remove outliers.",
"_____no_output_____"
],
[
"### Resources\n\n1. [Queue Imbalance as a One-Tick-Ahead Price Predictor in a Limit Order Book](https://arxiv.org/abs/1512.03492) <a class=\"anchor-link\" href=\"#1\">¶</a> ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
e72c2bd4801bc699f615126446ff11820cae99a8 | 946,335 | ipynb | Jupyter Notebook | docs/jwst_optical_budgets.ipynb | kammerje/webbpsf | f2a123ac9107eaeec04fcad43374e4939facc859 | [
"BSD-3-Clause"
] | 1 | 2019-04-09T19:11:26.000Z | 2019-04-09T19:11:26.000Z | docs/jwst_optical_budgets.ipynb | kammerje/webbpsf | f2a123ac9107eaeec04fcad43374e4939facc859 | [
"BSD-3-Clause"
] | 3 | 2022-02-01T10:04:25.000Z | 2022-03-01T10:03:57.000Z | docs/jwst_optical_budgets.ipynb | kammerje/webbpsf | f2a123ac9107eaeec04fcad43374e4939facc859 | [
"BSD-3-Clause"
] | null | null | null | 5,316.488764 | 470,948 | 0.965189 | [
[
[
"%pylab inline\nimport webbpsf",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"# Visualizing the JWST Optical Budget\n\nWebbPSF 1.0 adds a tool to display different components of the optical models used in the PSF calculations. This is based on the formal optical budgets used to track JWST requirements and predicted performance. \n\nThe total WFE is broken down into three major components: \n\n1. _OTE Static_ wavefront error (non-time-dependent terms).\n2. _OTE Dynamic_ wavefront error (time-variable terms such as drifts)\n3. _ISIM and SI_ wavefront error from each instrument. \n\nFor each of those major components, a further division into three subcomponents is shown. Some terms are quite small (by budget and design), and negligible compared to the larger terms. \n\nThis is invoked by the `vizualize_wfe_budget()` method: ",
"_____no_output_____"
]
],
[
[
"nrc = webbpsf.NIRCam()\nnrc.pupilopd = 'OPD_RevW_ote_for_NIRCam_predicted.fits.gz'\nnrc.visualize_wfe_budget()",
"generating optical models\ninferring OTE static WFE terms\n decomposing WFE into controllable and uncontrollable spatial frequencies\n modeling controllable and uncontrollable spatial frequencies\ninferring OTE dynamic WFE terms\nlos jitter 0.006 arcsec, as wfe 67.02470144874732 nm\ninferring ISIM + SI WFE terms\ndisplaying plots\n"
]
],
[
[
"The way to read the above plot is that the total system WFE (at top) is the sum of the 3 OPDs shown in the first column below. And then in each row, the total in the left panel is the sum of the three panels to the right. \n\nNote that in each panel, an annotation at lower left states the RMS WFE for that term. In lower right, two annotations state the maximum amount of that term allowed set by JWST's Requirements (\"Req:\"), and the amount predicted by the optical budget (\"Pred.\"). The way WebbPSF models some of these terms is distinct from the optical budget (which is fundamentally a statistical model), so exact consistency is not expected in all cases. \n\n\n** For more details on the JWST Optical Budgets:** See <a href=\"https://ui.adsabs.harvard.edu/abs/2018SPIE10698E..04L/abstract\">Lightsey et al. 2018</a> and <a href=\"https://ui.adsabs.harvard.edu/abs/2014SPIE.9143E..04L/abstract\">Lightsey et al. 2014</a>, or JWST project document JWST-REF-041994 (\"Guide to the JWST Optical Budget\", P. Lightsey).",
"_____no_output_____"
],
[
"The SI internal WFE will vary based on the selected field point. Here we select a different point in NIRCam; note the changes in the lowest row showing the NIRCam WFE.",
"_____no_output_____"
]
],
[
[
"nrc.detector = 'NRCA3'\nnrc.detector_position = (1024, 0)\n\nnrc.visualize_wfe_budget()",
"generating optical models\ninferring OTE static WFE terms\n decomposing WFE into controllable and uncontrollable spatial frequencies\n modeling controllable and uncontrollable spatial frequencies\ninferring OTE dynamic WFE terms\nlos jitter 0.006 arcsec, as wfe 67.02470144874732 nm\ninferring ISIM + SI WFE terms\ndisplaying plots\n"
]
],
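[
[
"# Added sketch (assumption: illustrative parameters, not from the original notebook).\n# The same configured instrument object can also be used for a PSF calculation\n# with the current OPD and field-point settings.\npsf = nrc.calc_psf(fov_arcsec=2)\nwebbpsf.display_psf(psf)",
"_____no_output_____"
]
],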
[
[
"In these plots, image motion (line of sight jitter) is _depicted_ as a defocus Zernike term, with magnitude calculated to yield a comparable blurring of the PSF. This is _NOT_ a physically correct representation of how line of sight jitter works optically, but it is a useful shorthand to treat jitter in comparable units as the rest of the optical budget. This is consistent with how LOS jitter is treated in the JWST budgets. \n\nWe expect that as we gain experience in flight, the model terms will be updated to reflect achieved performance of the observatory and its systems. ",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e72c2ddf1a8c185be9f627278e93d2b3edc884c8 | 3,397 | ipynb | Jupyter Notebook | lab13/semi_supervised/plot_label_propagation_structure.ipynb | cruxiu/MLStudies | 2b0a9ac7dbede4200080666dfdcba6a2f65f93af | [
"MIT"
] | 1 | 2019-08-22T01:35:16.000Z | 2019-08-22T01:35:16.000Z | lab13/semi_supervised/plot_label_propagation_structure.ipynb | cruxiu/MLStudies | 2b0a9ac7dbede4200080666dfdcba6a2f65f93af | [
"MIT"
] | null | null | null | lab13/semi_supervised/plot_label_propagation_structure.ipynb | cruxiu/MLStudies | 2b0a9ac7dbede4200080666dfdcba6a2f65f93af | [
"MIT"
] | null | null | null | 62.907407 | 2,051 | 0.600824 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Label Propagation learning a complex structure\n\n\nExample of LabelPropagation learning a complex internal structure\nto demonstrate \"manifold learning\". The outer circle should be\nlabeled \"red\" and the inner circle \"blue\". Because both label groups\nlie inside their own distinct shape, we can see that the labels\npropagate correctly around the circle.\n\n",
"_____no_output_____"
]
],
[
[
"print(__doc__)\n\n# Authors: Clay Woolam <[email protected]>\n# Andreas Mueller <[email protected]>\n# License: BSD\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.semi_supervised import label_propagation\nfrom sklearn.datasets import make_circles\n\n# generate ring with inner box\nn_samples = 200\nX, y = make_circles(n_samples=n_samples, shuffle=False)\nouter, inner = 0, 1\nlabels = -np.ones(n_samples)\nlabels[0] = outer\nlabels[-1] = inner\n\n# #############################################################################\n# Learn with LabelSpreading\nlabel_spread = label_propagation.LabelSpreading(kernel='knn', alpha=0.2)\nlabel_spread.fit(X, labels)\n\n# #############################################################################\n# Plot output labels\noutput_labels = label_spread.transduction_\nplt.figure(figsize=(8.5, 4))\nplt.subplot(1, 2, 1)\nplt.scatter(X[labels == outer, 0], X[labels == outer, 1], color='navy',\n marker='s', lw=0, label=\"outer labeled\", s=10)\nplt.scatter(X[labels == inner, 0], X[labels == inner, 1], color='c',\n marker='s', lw=0, label='inner labeled', s=10)\nplt.scatter(X[labels == -1, 0], X[labels == -1, 1], color='darkorange',\n marker='.', label='unlabeled')\nplt.legend(scatterpoints=1, shadow=False, loc='upper right')\nplt.title(\"Raw data (2 classes=outer and inner)\")\n\nplt.subplot(1, 2, 2)\noutput_label_array = np.asarray(output_labels)\nouter_numbers = np.where(output_label_array == outer)[0]\ninner_numbers = np.where(output_label_array == inner)[0]\nplt.scatter(X[outer_numbers, 0], X[outer_numbers, 1], color='navy',\n marker='s', lw=0, s=10, label=\"outer learned\")\nplt.scatter(X[inner_numbers, 0], X[inner_numbers, 1], color='c',\n marker='s', lw=0, s=10, label=\"inner learned\")\nplt.legend(scatterpoints=1, shadow=False, loc='upper right')\nplt.title(\"Labels learned with Label Spreading (KNN)\")\n\nplt.subplots_adjust(left=0.07, bottom=0.07, right=0.93, top=0.92)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72c3578bd52b629af4fa68309eb947eb750b529 | 27,398 | ipynb | Jupyter Notebook | assets/emse6574_assignments/Week_7_Assignment_Michael_Salceda.ipynb | ngau9567/msalceda.github.io | 607a653a7da84372a73106e70eccc62ae85f9e0a | [
"CC-BY-3.0"
] | null | null | null | assets/emse6574_assignments/Week_7_Assignment_Michael_Salceda.ipynb | ngau9567/msalceda.github.io | 607a653a7da84372a73106e70eccc62ae85f9e0a | [
"CC-BY-3.0"
] | null | null | null | assets/emse6574_assignments/Week_7_Assignment_Michael_Salceda.ipynb | ngau9567/msalceda.github.io | 607a653a7da84372a73106e70eccc62ae85f9e0a | [
"CC-BY-3.0"
] | 4 | 2020-12-11T23:50:28.000Z | 2022-01-27T12:31:14.000Z | 37.894882 | 184 | 0.321629 | [
[
[
"# Week 7 Assignment\nFind a dataset and apply a random forest classifier/regressor on it.",
"_____no_output_____"
]
],
[
[
"# Data EDA\nimport numpy as np\nimport pandas as pd\nfrom sklearn import datasets\n\n# Machine Learning\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import classification_report",
"_____no_output_____"
]
],
[
[
"## Data Loading\nLet's use the breast cancer dataset found in scikit-learn.",
"_____no_output_____"
]
],
[
[
"cancer_databunch = datasets.load_breast_cancer()\n\n# Get features\nfeatures = cancer_databunch.data\n\n# Get target labels (as numbers)\nlabels = cancer_databunch.target\n\n# Get column names for DataFrame construction\ncolumns = cancer_databunch.feature_names.tolist() + ['class']\n\n# Get mapping of label number to string name\ntarget_mapping = {idx:target for idx, target in enumerate(cancer_databunch.target_names)}\n\n# Create DataFrame\ncancer = pd.DataFrame(np.concatenate((features, labels.reshape(-1, 1)), axis = 1), columns = columns)\n\n# Replace \"species\" column values with actual names\ncancer['class'] = cancer['class'].apply(lambda x: target_mapping.get(x))\n\n# Show couple rows\ncancer.head()",
"_____no_output_____"
]
],
[
[
"## Train-Test Split\nSince all the features are numeric and scale doesn't really matter for random forests, we can go straight to the train-test split without any major feature engineering steps.",
"_____no_output_____"
]
],
[
[
"# Do a 80-20 split for train and test sets\nX_train, X_test, y_train, y_test = train_test_split(\n cancer.drop(columns = 'class'),\n cancer['class'],\n test_size = 0.2,\n random_state = 1\n)\nprint(f'Training Shape (Features): {X_train.shape}')\nprint(f'Testing Shape (Features): {X_test.shape}')\nprint(f'Training Shape (Labels): {y_train.shape}')\nprint(f'Testing Shape (Labels): {y_test.shape}')\nprint('TRAINING SAMPLE'.center(50, '='))\ndisplay(X_train.head(2))\nprint('TESTING SAMPLE'.center(50, '='))\ndisplay(X_test.head(2))",
"Training Shape (Features): (455, 30)\nTesting Shape (Features): (114, 30)\nTraining Shape (Labels): (455,)\nTesting Shape (Labels): (114,)\n=================TRAINING SAMPLE==================\n"
]
],
[
[
"## Model Training",
"_____no_output_____"
]
],
[
[
"# Training a random forest model\nrf = RandomForestClassifier()\nrf.fit(X_train, y_train)\n\n# Get predictions\npredictions = rf.predict(X_test)\n\n# Get metrics\nprint(classification_report(y_test, predictions))",
" precision recall f1-score support\n\n benign 0.93 0.99 0.96 72\n malignant 0.97 0.88 0.93 42\n\n accuracy 0.95 114\n macro avg 0.95 0.93 0.94 114\nweighted avg 0.95 0.95 0.95 114\n\n"
]
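,
[
"# Added sketch: inspect which features drive the forest's decisions.\nimportances = pd.Series(rf.feature_importances_, index=X_train.columns)\nprint(importances.sort_values(ascending=False).head(10))",
"_____no_output_____"
]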
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72c53c1670ce1505f9081d5e9717aea9465e9fd | 21,779 | ipynb | Jupyter Notebook | nbs/dtypes.ipynb | antonbabkin/ig_format | 428bc273c19e27b6f972f3c458bdae77fc326c63 | [
"Apache-2.0"
] | null | null | null | nbs/dtypes.ipynb | antonbabkin/ig_format | 428bc273c19e27b6f972f3c458bdae77fc326c63 | [
"Apache-2.0"
] | 2 | 2021-09-28T00:17:08.000Z | 2021-09-28T17:16:05.000Z | nbs/dtypes.ipynb | antonbabkin/ig_format | 428bc273c19e27b6f972f3c458bdae77fc326c63 | [
"Apache-2.0"
] | null | null | null | 34.515055 | 435 | 0.542954 | [
[
[
"# Data types optimization\n\n> Convert columns to use more memory efficient dtypes.",
"_____no_output_____"
]
],
[
[
"import random\nimport itertools\nimport string\nimport timeit\n\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"# NumPy data types",
"_____no_output_____"
],
[
"Pandas columns are internally stored as numpy arrays, and so [NumPy data types](https://numpy.org/doc/stable/user/basics.types.html) are used.\n\n**Boolean**\n\n`np.bool_` takes 1 byte per item, but can not hold missing values. Logical operations on columns return series of this dtype, unless some of the element-wise tests results in NA value, in which case result is of `object` dtype.\n\n\n**Integer**\n\nLimits and other details can be looked up with `numpy.iinfo()`.\n\nStoring value outside of limits creates overflow.\n\n| dtype | size (bytes) | min | max |\n|:------:|:------------:|:--------------------------:|:--------------------------:|\n| uint8 | 1 | 0 | 255 |\n| uint16 | 2 | 0 | 65,535 |\n| uint32 | 4 | 0 | 4,294,967,295 |\n| uint64 | 8 | 0 | 18,446,744,073,709,551,615 |\n| int8 | 1 | -128 | 127 |\n| int16 | 2 | -32,768 | 32,767 |\n| int32 | 4 | -2,147,483,648 | 2,147,483,647 |\n| int64 | 8 | -9,223,372,036,854,775,808 | 9,223,372,036,854,775,807 |\n\nInteger dtypes provide wide range of options, but the biggest constraint is that in standard pandas these dtypes do not allow for missing values in them.\n\n**Floating point**\n\nWikipedia: [float16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format), [float32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), [float64](https://en.wikipedia.org/wiki/Double-precision_floating-point_format).\n\nLimits and other details can be looked up with `numpy.finfo()`.\n\nSpacing between a number and it's adjacent neighbor (`numpy.spacing()`) increases with number absolute magnitude. Therefore care should be taken when storing large integers as floats.\n\n\n| dtype | size (bytes) | max | max exact integer |\n|:-------:|:------------:|:-----------------------:|:-----------------------------:|\n| float16 | 2 | 6.55040e+04 | $2^{11}$ = 2,048 |\n| float32 | 4 | 3.4028235e+38 | $2^{24}$ = 16,777,216 |\n| float64 | 8 | 1.7976931348623157e+308 | $2^{53}$ = 9,007,199,254,740,992 |\n\nEven though `float16` might have good use cases (notably booleans with missing data), it looks like it is not always fully supported.\n\n**String**\n\nAlthough there are fixed length Unicode string dtype in NumPy (e.g. `np.dtype('U3')`), pandas uses `np.object_`. This is an array of pointers (item size of 32 or 64 bits, depending on platform architecture) to memory locations where actual strings are stored.",
"_____no_output_____"
]
],
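[
[
"# Added demo: look up integer limits with np.iinfo; values outside the limits\n# silently wrap around (overflow) in numpy arrays.\nfor dt in [np.int8, np.int16, np.int32, np.int64]:\n    info = np.iinfo(dt)\n    print(dt.__name__, info.min, info.max)\n\nx = np.array([127], dtype=np.int8)\nprint(x + 1)  # wraps to [-128]",
"_____no_output_____"
]
],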
[
[
"# floats spacing increases with number magnitude\ndt = np.float16\ninfo = np.finfo(dt)\nfor exp in range(-16, 17):\n x = 2**exp\n npx = dt(x)\n print(exp, x, npx, np.spacing(npx))",
"_____no_output_____"
],
[
"# integer precision limits on floats\nfor dt in [np.float16, np.float32, np.float64]:\n info = np.finfo(dt)\n max_int = 2**(info.nmant + 1)\n print(dt.__name__, info.nmant, max_int)\n assert dt(max_int - 1) != dt(max_int)\n assert dt(max_int) == dt(max_int + 1)",
"_____no_output_____"
]
],
[
[
"## Automatic conversion\n\nBe mindful of possible overflow when performing operations with numerical series, as dtypes will not always automatically convert to higher types.\n\nResult of series aggregation is numpy scalar with certain numpy dtype. \nSummation of ints results in `int64`, regardless of input dtype.\nSummation of `float32` remains `float32`, so precision may be lost.",
"_____no_output_____"
]
],
[
[
"# going beyond float32 integer precision\ns = pd.Series([2**24] * 3, dtype='float32')\nassert ((s + 1) == s).all()",
"_____no_output_____"
],
[
"# 127 is max int8, but s.sum() does not overflow, because result is stored in int64\ns = pd.Series([127, 127, 127], dtype='int8')\nss = s.sum()\nprint(ss.dtype, ss, 127 * 3)",
"_____no_output_____"
],
[
"# 2**24 is largest int that can be exactly represented by float32\ns = pd.Series([2**24] * 3, dtype='float32')\n# s.sum() is float32, but number is not exact integer\nss = s.sum()\nprint(ss.dtype, ss, 2**24 * 3)",
"_____no_output_____"
]
],
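[
[
"# Added sketch: upcasting explicitly before aggregating avoids the float32\n# precision loss demonstrated above.\ns = pd.Series([2**24] * 3, dtype='float32')\nprint(s.astype('float64').sum(), 2**24 * 3)",
"_____no_output_____"
]
],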
[
[
"# Categorical\n\n[User guide](https://pandas.pydata.org/docs/user_guide/categorical.html)\n\n[#](https://pandas.pydata.org/docs/user_guide/categorical.html#missing-data)\n> Missing values should not be included in the Categorical’s categories, only in the values. Instead, it is understood that NaN is different, and is always a possibility. When working with the Categorical’s codes, missing values will always have a code of -1.",
"_____no_output_____"
]
],
[
[
"def gen_unique_str(n, l, alphabet=None):\n \"\"\"Return list of `n` random unique strings of lenght `l`.\"\"\"\n if alphabet is None:\n alphabet = string.ascii_lowercase\n assert len(alphabet) ** l >= n, f'Can not generate {n} unique strings of length {l} from alphabet of length {len(alphabet)}.'\n str_set = set()\n while len(str_set) < n:\n str_set.add(''.join(random.choices(alphabet, k=l)))\n return list(str_set)\n \n\ndef gen_mock_data(n_rows, num=None, str_=None, cat=None):\n \"\"\"Return dataframe with random data.\n \n `num`: number of columns.\n \n `str_`: {'n': number of colums, 'len': string length, 'nuni': number of uniques}.\n If `str_` is number, it is interpreted as number of columns with defaults for other options.\n \n \n `cat`: {'n': number of colums, 'len': string length, 'nuni': number of uniques}.\n If `cat` is number, it is interpreted as number of columns with defaults for other options.\n \"\"\"\n \n def str_df(par, categorical):\n if isinstance(par, int):\n par = {\n 'n': par,\n 'len': 8,\n 'nuni': n_rows // 10\n }\n df = pd.DataFrame()\n for i in range(par['n']):\n uniques = gen_unique_str(par['nuni'], par['len'])\n if categorical: \n df[f'cat{i}'] = pd.Categorical(random.choices(uniques, k=n_rows), uniques)\n else:\n df[f'str{i}'] = random.choices(uniques, k=n_rows)\n return df\n \n dfs = [pd.DataFrame({'id': range(n_rows)})]\n \n if num is not None:\n dfs.append(pd.DataFrame(np.random.rand(n_rows, num), columns=[f'num{i}' for i in range(num)]))\n if str_ is not None:\n dfs.append(str_df(str_, False))\n if cat is not None:\n dfs.append(str_df(cat, True))\n return pd.concat(dfs, 1)",
"_____no_output_____"
]
],
[
[
"## Select and groupby\n\nSelection by equality test is x220 faster with categoricals.\n\nGroupby aggregation is x27 faster with categoricals.",
"_____no_output_____"
]
],
[
[
"df = gen_mock_data(1_000_000, str_=1, cat=1)\nprint('select str')\nneedle = df['str0'][0]\n%timeit _ = (df['str0'] == needle)\nprint('select cat')\nneedle = df['cat0'].cat.categories[0]\n%timeit _ = (df['cat0'] == needle)",
"_____no_output_____"
],
[
"df = gen_mock_data(1_000_000, num=1, str_=1, cat=1)\nprint('groupby str')\n%timeit _ = df.groupby('str0')['num0'].sum()\nprint('groupby cat')\n%timeit _ = df.groupby('cat0')['num0'].sum()",
"_____no_output_____"
]
],
[
[
"## String methods\n\n[#](https://pandas.pydata.org/docs/user_guide/categorical.html#string-and-datetime-accessors)\n\n`.str` and `.dt` accessors work on categoricals if categories are of an appropriate type.\n\n> The work is done on the categories and then a new Series is constructed. This has some performance implication if you have a Series of type string, where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series). In this case it can be faster to convert the original Series to one of type category and use .str.\\<method\\> or .dt.\\<property\\> on that.\n\nAbout x8 speedup in `startswith()` and `contains()`, but the gain naturally declines as the share of unique values increases.",
"_____no_output_____"
]
],
[
[
"df = gen_mock_data(1_000_000, str_=1, cat=1)\n\nprint('str: startswith')\n%timeit _ = df.str0.str.startswith('a')\nprint('cat: startswith')\n%timeit _ = df.cat0.str.startswith('a')\n\nprint('str: contains non-regex')\n%timeit _ = df.str0.str.contains('a', regex=False)\nprint('cat: contains non-regex')\n%timeit _ = df.cat0.str.contains('a', regex=False)\n\nprint('str: contains regex')\n%timeit _ = df.str0.str.contains('[ab]', regex=True)\nprint('cat: contains regex')\n%timeit _ = df.cat0.str.contains('[ab]', regex=True)",
"_____no_output_____"
],
[
"n_rows = 10_000_000\ndf = gen_mock_data(n_rows, str_=dict(n=1, len=10, nuni=n_rows//2), cat=dict(n=1, len=10, nuni=n_rows//2))\n\nprint('str: startswith')\n%timeit _ = df.str0.str.startswith('a')\nprint('cat: startswith')\n%timeit _ = df.cat0.str.startswith('a')\n\nprint('str: contains non-regex')\n%timeit _ = df.str0.str.contains('a', regex=False)\nprint('cat: contains non-regex')\n%timeit _ = df.cat0.str.contains('a', regex=False)\n\nprint('str: contains regex')\n%timeit _ = df.str0.str.contains('[ab]', regex=True)\nprint('cat: contains regex')\n%timeit _ = df.cat0.str.contains('[ab]', regex=True)",
"_____no_output_____"
]
],
[
[
"## Merge\n\n[#](https://pandas.pydata.org/docs/user_guide/categorical.html#merging-concatenation)\n[#](https://pandas.pydata.org/docs/user_guide/merging.html#merge-dtypes)\n\n> By default, combining Series or DataFrames which contain the same categories results in category dtype, otherwise results will depend on the dtype of the underlying categories. **Merges that result in non-categorical dtypes will likely have higher memory usage.** Use .astype or union_categoricals to ensure category results.\n\n> The category dtypes must be exactly the same, meaning the same categories and the ordered attribute. Otherwise the result will coerce to the categories’ dtype.\n\n> Merging on category dtypes that are the same can be quite performant compared to object dtype merging.",
"_____no_output_____"
]
],
[
[
"import random\nimport itertools\nimport string\nimport timeit\n\nimport numpy as np\nimport pandas as pd\n\ndef gen_cat_data(n_rows, n_cats, cat_len, cat):\n cat_gen = itertools.product(string.ascii_lowercase, repeat=cat_len)\n cats = [''.join(next(cat_gen)) for _ in range(n_cats)]\n assert len(cats) == len(set(cats))\n df = pd.DataFrame({'key': random.choices(cats, k=n_rows),\n 'val': np.random.rand(n_rows)})\n if cat: df['key'] = pd.Categorical(df['key'], cats)\n agg = df.groupby('key')['val'].sum().rename('sum').reset_index() \n return df, agg",
"_____no_output_____"
],
[
"times = {}\nfor rows_order in range(4, 9):\n nr = 10**rows_order\n for nc in [10, 1000]:\n for cl in [5, 200]:\n for c in [False, True]:\n t = timeit.Timer('df.merge(agg)',\n f'df, agg = gen_cat_data({nr}, {nc}, {cl}, {c})',\n globals=globals())\n repeats, time = t.autorange()\n times[(nr, nc, cl, c)] = time / repeats\n\ndf = pd.Series(times).rename_axis(index=['n_rows', 'n_cats', 'cat_len', 'cats'])",
"_____no_output_____"
]
],
[
[
"Length of strings does not matter",
"_____no_output_____"
]
],
[
[
"x = df.unstack('cat_len')\nx.iloc[:, 1] / x.iloc[:, 0]",
"_____no_output_____"
]
],
[
[
"Categoricals improve performance with 100k+ rows, up to x2 speedup with 100M rows",
"_____no_output_____"
]
],
[
[
"x = df.unstack('cat_len').mean(1)\nx = x.unstack('cats')\nx.iloc[:, 1] / x.iloc[:, 0]",
"_____no_output_____"
]
],
[
[
"Number of categories slows down.",
"_____no_output_____"
]
],
[
[
"x = df.unstack('cat_len').mean(1)\nx = x.unstack('n_cats')\n(x.iloc[:, 1] / x.iloc[:, 0]).unstack('cats')",
"_____no_output_____"
]
],
[
[
"`time / n_rows` declines when few categoricals are used. No clear pattern otherwise.",
"_____no_output_____"
]
],
[
[
"x = df.unstack('cat_len').mean(1)\nx /= x.index.get_level_values('n_rows')\nx.unstack(['cats', 'n_cats'])",
"_____no_output_____"
]
],
[
[
"### If dataframe becomes wide\n\nWhen many columns are to be merged on one or both sides, merge starts taking significantly more time, mainly because data are to be copied to a new object.\n\nMerge on cat keys becomes slower than on str keys with wide dataframes, although difference is small compared to overall merge time.",
"_____no_output_____"
]
],
[
[
"# merge on strings\ndf, agg = gen_cat_data(10_000_000, 100, 10, False)\nprint('merge few columns')\n%time _ = df.merge(agg)\nfor i in range(100):\n df[f'var{i}'] = np.random.rand(len(df))\nprint('merge many columns')\n%time _ = df.merge(agg)\nfor i in range(100):\n agg[f'agg{i}'] = np.random.rand(len(agg))\nprint('merge many-many columns')\n%time _ = df.merge(agg)",
"_____no_output_____"
],
[
"# merge on categoricals\ndf, agg = gen_cat_data(10_000_000, 100, 10, True)\nprint('merge few columns')\n%time _ = df.merge(agg)\nfor i in range(100):\n df[f'var{i}'] = np.random.rand(len(df))\nprint('merge many columns')\n%time _ = df.merge(agg)\nfor i in range(100):\n agg[f'agg{i}'] = np.random.rand(len(agg))\nprint('merge many-many columns')\n%time _ = df.merge(agg)",
"_____no_output_____"
]
],
[
[
"## Container for boolean with NA\n\nThis might be a better solution than using `float32` (or less supported `float16`). Each item will only occupy one byte, and NA-related methods will work as expected.\n\n`fillna()` will not accept values outside of preset categories, so need to `add_categories()` first.\n\nCategories can be `[0, 1]` or `[False, True]`, but the latter is not supported by fastparquet writer.",
"_____no_output_____"
]
],
[
[
"df = gen_mock_data(100_000, num=1)\ndf['boo'] = (df.num0 > 0.8)\ndf.loc[df.sample(frac=0.1).index, 'boo'] = np.nan\nprint(df.boo.value_counts(dropna=False))\ndf['boo_cat'] = df.boo.astype('category').cat.rename_categories({0: False, 1: True})\nprint(df.boo_cat.value_counts(dropna=False))\nprint(df.memory_usage())",
"_____no_output_____"
]
],
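[
[
"# Added sketch: fillna() on a categorical only accepts existing categories,\n# so extend them with add_categories() first.\nboo_filled = df.boo_cat.cat.add_categories(['missing']).fillna('missing')\nprint(boo_filled.value_counts(dropna=False))",
"_____no_output_____"
]
],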
[
[
"# Date and time\n\nTo be added later when use case arises.",
"_____no_output_____"
],
[
"# Parquet support\n\nInteger and float types are stored and converted automatically with exception of `float16`.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport fastparquet as pq\n\ndata = list(range(100))\ndf = pd.DataFrame()\nfor dt in ['uint8', 'uint16', 'uint32', 'uint64',\n 'int8', 'int16', 'int32', 'int64',\n 'float16', 'float32', 'float64']:\n df[dt] = pd.Series(data, dtype=dt)\n\ndfpq_path = '/tmp/dataframe.pq'\ndf.to_parquet(dfpq_path, 'fastparquet', None, False)\n\ndfpq = pd.read_parquet(dfpq_path, 'fastparquet')\npd.concat([df.dtypes, dfpq.dtypes], 1)",
"_____no_output_____"
]
],
[
[
"Categories can not be `[False, True]`. Maybe fastparquet bug.",
"_____no_output_____"
]
],
[
[
"# df = pd.Series([False, False, True], dtype=pd.CategoricalDtype([False, True])).to_frame('col') # <-- this fails\ndf = pd.Series([False, False, True], dtype=pd.CategoricalDtype([False, True, 2])).to_frame('col') # <-- this works\ndf.to_parquet('/tmp/dataframe.pq', 'fastparquet', None, False)",
"_____no_output_____"
]
],
[
[
"# Extension types\n\nNewer versions of pandas introduced [extension dtypes](https://pandas.pydata.org/pandas-docs/stable/development/extending.html#extending-extension-types), although they are still in experimental stage as of pandas 1.1. These include support for nullable [integers](https://pandas.pydata.org/docs/user_guide/integer_na.html) and [strings](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.StringDtype.html).\n\nTest performance before using.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e72c57ac33cd180d14c6024b1ec08b4b082c5068 | 4,092 | ipynb | Jupyter Notebook | probe/4_C-MNIST_classification_MEUL.ipynb | laurentperrinet/BoutinRuffierPerrinet17spars | 6b6a804342e8b9c2a9e86be09ab2baef9f0113b7 | [
"MIT"
] | 1 | 2018-04-25T00:42:09.000Z | 2018-04-25T00:42:09.000Z | probe/4_C-MNIST_classification_MEUL.ipynb | laurentperrinet/BoutinRuffierPerrinet17spars | 6b6a804342e8b9c2a9e86be09ab2baef9f0113b7 | [
"MIT"
] | 2 | 2017-10-31T08:19:50.000Z | 2017-11-14T15:15:20.000Z | probe/4_C-MNIST_classification_MEUL.ipynb | laurentperrinet/BoutinRuffierPerrinet17spars | 6b6a804342e8b9c2a9e86be09ab2baef9f0113b7 | [
"MIT"
] | null | null | null | 28.027397 | 123 | 0.550831 | [
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n#%config InlineBackend.figure_format='svg'\n\nimport numpy as np\nnp.set_printoptions(precision=6, suppress=True)\nnp.set_printoptions(threshold=np.inf)\n\nfrom shl_scripts.shl_experiments import SHL\nfrom classification import SparseClassif\n",
"_____no_output_____"
],
[
"tag ='2017-06-01_MNIST_MEUL_DEBUG_'\nDEBUG_DOWNSCALE, verbose = 10, 10\ntag ='2017-06-01_MNIST_MEUL_'\nDEBUG_DOWNSCALE, verbose = 1, 10\npatch_size = (28,28)\nn_dictionary = 15**2\nl0_sparseness = 7\nn_iter = 2**14\neta = 0.01\neta_homeo = 0.01\nalpha_homeo = 0\nverbose = 0\nlist_figures=['show_dico']\nn_hidden = 100",
"_____no_output_____"
],
[
"results = []\nn_dictionarys = np.arange(12, 30, 3)**2\nfor n_dictionary_ in n_dictionarys:\n shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, \n eta = eta, eta_homeo = eta_homeo, verbose = verbose, \n n_iter = n_iter, patch_size = patch_size, l0_sparseness=l0_sparseness, alpha_homeo = alpha_homeo,\n n_dictionary = n_dictionary_) \n matname = tag + 'n_dictionary' + str(n_dictionary_)\n \n sc = SparseClassif(shl, matname)\n print(\" ----- learning for the dico of size : {0} -----\".format(n_dictionary_))\n sc.dico = shl.learn_dico(data=sc.training_image, matname=matname, list_figures=list_figures) \n sc.learn()\n results.append(sc.result())\n\nfig = plt.figure(figsize=(6, 4))\nax = fig.add_subplot(111)\nax.plot(n_dictionarys, results) ",
" ----- learning for the dico of size : 144 -----\n"
],
[
"results = []\nl0_sparsenesses = np.arange(5, 40, 5)\n\nfor l0_sparseness_ in l0_sparsenesses:\n shl = SHL(DEBUG_DOWNSCALE=DEBUG_DOWNSCALE, \n eta = eta, eta_homeo = eta_homeo, verbose = verbose, \n n_iter = n_iter, patch_size = patch_size, l0_sparseness=l0_sparseness_, alpha_homeo = alpha_homeo,\n n_dictionary = n_dictionary_) \n sc = SparseClassif(shl, matname)\n matname = tag + 'l0_sparseness=' + str(l0_sparseness_)\n sc.dico = shl.learn_dico(data=sc.training_image, matname=matname, list_figures=list_figures) \n sc.learn()\n results.append(sc.result())\n\nfig = plt.figure(figsize=(6, 4))\nax = fig.add_subplot(111)\nax.plot(l0_sparsenesses, results) ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
e72c62bfb182a87d0ff7db80ef0f8e150d369a9e | 53,457 | ipynb | Jupyter Notebook | hw_wordentail.ipynb | robertosemp/cs224u | f6e72c6e6990c8e18df82eeac192fdcbbcbe3f33 | [
"Apache-2.0"
] | null | null | null | hw_wordentail.ipynb | robertosemp/cs224u | f6e72c6e6990c8e18df82eeac192fdcbbcbe3f33 | [
"Apache-2.0"
] | null | null | null | hw_wordentail.ipynb | robertosemp/cs224u | f6e72c6e6990c8e18df82eeac192fdcbbcbe3f33 | [
"Apache-2.0"
] | null | null | null | 43.925226 | 1,929 | 0.603139 | [
[
[
"# Homework and bake-off: word-level entailment with neural networks",
"_____no_output_____"
]
],
[
[
"__author__ = \"Christopher Potts\"\n__version__ = \"CS224u, Stanford, Spring 2020\"",
"_____no_output_____"
]
],
[
[
"## Contents\n\n1. [Overview](#Overview)\n1. [Set-up](#Set-up)\n1. [Data](#Data)\n 1. [Edge disjoint](#Edge-disjoint)\n 1. [Word disjoint](#Word-disjoint)\n1. [Baseline](#Baseline)\n 1. [Representing words: vector_func](#Representing-words:-vector_func)\n 1. [Combining words into inputs: vector_combo_func](#Combining-words-into-inputs:-vector_combo_func)\n 1. [Classifier model](#Classifier-model)\n 1. [Baseline results](#Baseline-results)\n1. [Homework questions](#Homework-questions)\n 1. [Hypothesis-only baseline [2 points]](#Hypothesis-only-baseline-[2-points])\n 1. [Alternatives to concatenation [2 points]](#Alternatives-to-concatenation-[2-points])\n 1. [A deeper network [2 points]](#A-deeper-network-[2-points])\n 1. [Your original system [3 points]](#Your-original-system-[3-points])\n1. [Bake-off [1 point]](#Bake-off-[1-point])",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"The general problem is word-level natural language inference.\n\nTraining examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.\n\nThe homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test-set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)\n\n<img src=\"fig/wordentail-diagram.png\" width=600 alt=\"wordentail-diagram.png\" />",
"_____no_output_____"
],
[
"## Set-up",
"_____no_output_____"
],
[
"See [the first notebook in this unit](nli_01_task_and_data.ipynb) for set-up instructions.",
"_____no_output_____"
]
],
[
[
"from collections import defaultdict\nimport json\nimport numpy as np\nimport os\nimport pandas as pd\nfrom torch_shallow_neural_classifier import TorchShallowNeuralClassifier\nimport nli\nimport utils",
"_____no_output_____"
],
[
"DATA_HOME = 'data'\n\nNLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')\n\nwordentail_filename = os.path.join(\n NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')\n\nGLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')",
"_____no_output_____"
]
],
[
[
"## Data\n\nI've processed the data into two different train/test splits, in an effort to put some pressure on our models to actually learn these semantic relations, as opposed to exploiting regularities in the sample.\n\n* `edge_disjoint`: The `train` and `dev` __edge__ sets are disjoint, but many __words__ appear in both `train` and `dev`.\n* `word_disjoint`: The `train` and `dev` __vocabularies are disjoint__, and thus the edges are disjoint as well.\n\nThese are very different problems. For `word_disjoint`, there is real pressure on the model to learn abstract relationships, as opposed to memorizing properties of individual words.",
"_____no_output_____"
]
],
[
[
"with open(wordentail_filename) as f:\n wordentail_data = json.load(f)",
"_____no_output_____"
]
],
[
[
"The outer keys are the splits plus a list giving the vocabulary for the entire dataset:",
"_____no_output_____"
]
],
[
[
"wordentail_data.keys()",
"_____no_output_____"
]
],
[
[
"### Edge disjoint",
"_____no_output_____"
]
],
[
[
"wordentail_data['edge_disjoint'].keys()",
"_____no_output_____"
]
],
[
[
"This is what the split looks like; all three have this same format:",
"_____no_output_____"
]
],
[
[
"wordentail_data['edge_disjoint']['dev'][: 5]",
"_____no_output_____"
]
],
[
[
"Let's test to make sure no edges are shared between `train` and `dev`:",
"_____no_output_____"
]
],
[
[
"nli.get_edge_overlap_size(wordentail_data, 'edge_disjoint')",
"_____no_output_____"
]
],
[
[
"As we expect, a *lot* of vocabulary items are shared between `train` and `dev`:",
"_____no_output_____"
]
],
[
[
"nli.get_vocab_overlap_size(wordentail_data, 'edge_disjoint')",
"_____no_output_____"
]
],
[
[
"This is a large percentage of the entire vocab:",
"_____no_output_____"
]
],
[
[
"len(wordentail_data['vocab'])",
"_____no_output_____"
]
],
[
[
"Here's the distribution of labels in the `train` set. It's highly imbalanced, which will pose a challenge for learning. (I'll go ahead and reveal that the `dev` set is similarly distributed.)",
"_____no_output_____"
]
],
[
[
"def label_distribution(split):\n return pd.DataFrame(wordentail_data[split]['train'])[1].value_counts()",
"_____no_output_____"
],
[
"label_distribution('edge_disjoint')",
"_____no_output_____"
]
],
[
[
"### Word disjoint",
"_____no_output_____"
]
],
[
[
"wordentail_data['word_disjoint'].keys()",
"_____no_output_____"
]
],
[
[
"In the `word_disjoint` split, no __words__ are shared between `train` and `dev`:",
"_____no_output_____"
]
],
[
[
"nli.get_vocab_overlap_size(wordentail_data, 'word_disjoint')",
"_____no_output_____"
]
],
[
[
"Because no words are shared between `train` and `dev`, no edges are either:",
"_____no_output_____"
]
],
[
[
"nli.get_edge_overlap_size(wordentail_data, 'word_disjoint')",
"_____no_output_____"
]
],
[
[
"The label distribution is similar to that of `edge_disjoint`, though the overall number of examples is a bit smaller:",
"_____no_output_____"
]
],
[
[
"label_distribution('word_disjoint')",
"_____no_output_____"
]
],
[
[
"## Baseline",
"_____no_output_____"
],
[
"Even in deep learning, __feature representation is vital and requires care!__ For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input.",
"_____no_output_____"
],
[
"### Representing words: vector_func",
"_____no_output_____"
],
[
"Let's consider two baseline word representations methods:\n\n1. Random vectors (as returned by `utils.randvec`).\n1. 50-dimensional GloVe representations.",
"_____no_output_____"
]
],
[
[
"def randvec(w, n=50, lower=-1.0, upper=1.0):\n \"\"\"Returns a random vector of length `n`. `w` is ignored.\"\"\"\n return utils.randvec(n=n, lower=lower, upper=upper)",
"_____no_output_____"
],
[
"# Any of the files in glove.6B will work here:\n\nglove_dim = 50\n\nglove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))\n\n# Creates a dict mapping strings (words) to GloVe vectors:\nGLOVE = utils.glove2dict(glove_src)\n\ndef glove_vec(w): \n \"\"\"Return `w`'s GloVe representation if available, else return \n a random vector.\"\"\"\n return GLOVE.get(w, randvec(w, n=glove_dim))",
"_____no_output_____"
]
],
[
[
"### Combining words into inputs: vector_combo_func",
"_____no_output_____"
],
[
"Here we decide how to combine the two word vectors into a single representation. In more detail, where `u` is a vector representation of the left word and `v` is a vector representation of the right word, we need a function `vector_combo_func` such that `vector_combo_func(u, v)` returns a new input vector `z` of dimension `m`. A simple example is concatenation:",
"_____no_output_____"
]
],
[
[
"def vec_concatenate(u, v):\n \"\"\"Concatenate np.array instances `u` and `v` into a new np.array\"\"\"\n return np.concatenate((u, v))",
"_____no_output_____"
]
],
[
[
"`vector_combo_func` could instead be vector average, vector difference, etc. (even combinations of those) – there's lots of space for experimentation here; [homework question 2](#Alternatives-to-concatenation-[1-point]) below pushes you to do some exploration.",
"_____no_output_____"
],
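[
"# a quick sketch of one alternative mentioned above: the element-wise average\n# (illustrative only; the baseline below sticks with concatenation)\ndef vec_average(u, v):\n    \"\"\"Element-wise average of np.array instances `u` and `v`.\"\"\"\n    return (u + v) / 2.0",
"_____no_output_____"
],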
[
"### Classifier model\n\nFor a baseline model, I chose `TorchShallowNeuralClassifier`:",
"_____no_output_____"
]
],
[
[
"net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)",
"_____no_output_____"
]
],
[
[
"### Baseline results\n\nThe following puts the above pieces together, using `vector_func=glove_vec`, since `vector_func=randvec` seems so hopelessly misguided for `word_disjoint`!",
"_____no_output_____"
]
],
[
[
"word_disjoint_experiment = nli.wordentail_experiment(\n train_data=wordentail_data['word_disjoint']['train'],\n assess_data=wordentail_data['word_disjoint']['dev'], \n model=net, \n vector_func=glove_vec,\n vector_combo_func=vec_concatenate)",
"Finished epoch 100 of 100; error is 0.02443103864789009"
],
[
"word_disjoint_experiment['macro-F1']",
"_____no_output_____"
]
],
[
[
"train_data is a list of examples in the structure {(word1,word2), entail}. The model takes every element of the list, finds its feature representation in vector_func (could be a glove). Then we combine them (concatenete or add them or whatever) and then we append everything to a long list of examples. It should be a list of combined vector representation of the words",
"_____no_output_____"
],
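[
"# illustrative sketch of the featurization described above (an assumption about\n# what `nli.wordentail_experiment` does internally, not its actual code)\n(w1, w2), label = wordentail_data['word_disjoint']['train'][0]\nz = vec_concatenate(glove_vec(w1), glove_vec(w2))\nprint(w1, w2, label, z.shape)",
"_____no_output_____"
],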
[
"## Homework questions\n\nPlease embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)",
"_____no_output_____"
],
[
"### Hypothesis-only baseline [2 points]\n\nDuring our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for NLI tasks can be remarkably robust. This question asks you to explore briefly how this baseline effects the 'edge_disjoint' and 'word_disjoint' versions of our task.\n\nFor this problem, submit two functions:\n\n1. A `vector_combo_func` function called `hypothesis_only` that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.\n\n1. A function called `run_hypothesis_only_evaluation` that does the following:\n 1. Loops over the two conditions 'word_disjoint' and 'edge_disjoint' and the two `vector_combo_func` values `vec_concatenate` and `hypothesis_only`, calling `nli.wordentail_experiment` to train on the conditions 'train' portion and assess on its 'dev' portion, with `glove_vec` as the `vector_func`. So that the results are consistent, use an `sklearn.linear_model.LogisticRegression` with default parameters as the model.\n 1. Returns a `dict` mapping `(condition_name, function_name)` pairs to the 'macro-F1' score for that pair, as returned by the call to `nli.wordentail_experiment`. (Tip: you can get the `str` name of your function `hypothesis_only` with `hypothesis_only.__name__`.)\n \nThe test functions `test_hypothesis_only` and `test_run_hypothesis_only_evaluation` will help ensure that your functions have the desired logic.",
"_____no_output_____"
]
],
[
[
"def hypothesis_only(u,v):\n return v\n\ndef run_hypothesis_only_evaluation():\n import sklearn\n conditions = ['edge_disjoint', 'word_disjoint']\n functions = [vec_concatenate, hypothesis_only]\n results = {}\n for condition in conditions:\n for function in functions:\n experiment = nli.wordentail_experiment(\n train_data=wordentail_data[condition]['train'],\n assess_data=wordentail_data[condition]['dev'], \n model=sklearn.linear_model.LogisticRegression(), \n vector_func=glove_vec,\n vector_combo_func=function)\n results[(condition, function.__name__)] = experiment['macro-F1']\n return results",
"_____no_output_____"
],
[
"def test_hypothesis_only(hypothesis_only):\n v = hypothesis_only(1, 2)\n assert v == 2 ",
"_____no_output_____"
],
[
"def test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation):\n results = run_hypothesis_only_evaluation()\n assert ('word_disjoint', 'vec_concatenate') in results, \\\n \"The return value of `run_hypothesis_only_evaluation` does not have the intended kind of keys\"\n assert isinstance(results[('word_disjoint', 'vec_concatenate')], float), \\\n \"The values of the `run_hypothesis_only_evaluation` result should be floats\"",
"_____no_output_____"
],
[
"test_run_hypothesis_only_evaluation(run_hypothesis_only_evaluation)",
" precision recall f1-score support\n\n 0 0.875 0.969 0.920 7376\n 1 0.570 0.228 0.326 1321\n\n accuracy 0.857 8697\n macro avg 0.723 0.599 0.623 8697\nweighted avg 0.829 0.857 0.830 8697\n\n precision recall f1-score support\n\n 0 0.871 0.976 0.920 7376\n 1 0.589 0.195 0.293 1321\n\n accuracy 0.857 8697\n macro avg 0.730 0.585 0.607 8697\nweighted avg 0.828 0.857 0.825 8697\n\n precision recall f1-score support\n\n 0 0.901 0.981 0.939 1910\n 1 0.479 0.142 0.219 239\n\n accuracy 0.887 2149\n macro avg 0.690 0.561 0.579 2149\nweighted avg 0.854 0.887 0.859 2149\n\n precision recall f1-score support\n\n 0 0.893 0.988 0.938 1910\n 1 0.353 0.050 0.088 239\n\n accuracy 0.884 2149\n macro avg 0.623 0.519 0.513 2149\nweighted avg 0.833 0.884 0.844 2149\n\n"
]
],
[
[
"### Alternatives to concatenation [2 points]\n\nWe've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore two simple alternative:\n\n1. Write a function `vec_diff` that, for a given pair of vector inputs `u` and `v`, returns the element-wise difference between `u` and `v`.\n\n1. Write a function `vec_max` that, for a given pair of vector inputs `u` and `v`, returns the element-wise max values between `u` and `v`.\n\nYou needn't include your uses of `nli.wordentail_experiment` with these functions, but we assume you'll be curious to see how they do!",
"_____no_output_____"
]
],
[
[
"def vec_diff(u, v):\n return u-v\n \ndef vec_max(u, v):\n return np.maximum(u,v)",
"_____no_output_____"
],
[
"def test_vec_diff(vec_diff):\n u = np.array([10.2, 8.1])\n v = np.array([1.2, -7.1])\n result = vec_diff(u, v)\n expected = np.array([9.0, 15.2])\n assert np.array_equal(result, expected), \\\n \"Expected {}; got {}\".format(expected, result)",
"_____no_output_____"
],
[
"test_vec_diff(vec_diff)",
"_____no_output_____"
],
[
"def test_vec_max(vec_max):\n u = np.array([1.2, 8.1])\n v = np.array([10.2, -7.1])\n result = vec_max(u, v)\n expected = np.array([10.2, 8.1])\n assert np.array_equal(result, expected), \\\n \"Expected {}; got {}\".format(expected, result)",
"_____no_output_____"
],
[
"test_vec_max(vec_max)",
"_____no_output_____"
]
],
[
[
"### A deeper network [2 points]\n\nIt is very easy to subclass `TorchShallowNeuralClassifier` if all you want to do is change the network graph: all you have to do is write a new `define_graph`. If your graph has new arguments that the user might want to set, then you should also redefine `__init__` so that these values are accepted and set as attributes.\n\nFor this question, please subclass `TorchShallowNeuralClassifier` so that it defines the following graph:\n\n$$\\begin{align}\nh_{1} &= xW_{1} + b_{1} \\\\\nr_{1} &= \\textbf{Bernoulli}(1 - \\textbf{dropout\\_prob}, n) \\\\\nd_{1} &= r_1 * h_{1} \\\\\nh_{2} &= f(d_{1}) \\\\\nh_{3} &= h_{2}W_{2} + b_{2}\n\\end{align}$$\n\nHere, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \\textbf{dropout_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside `TorchShallowNeuralClassifier`, $h_{3}$ is the basis for a softmax classifier, so no activation function is applied to it.)\n\nFor your implementation, please use `nn.Sequential`, `nn.Linear`, and `nn.Dropout` to define the required layers.\n\nFor comparison, using this notation, `TorchShallowNeuralClassifier` defines the following graph:\n\n$$\\begin{align}\nh_{1} &= xW_{1} + b_{1} \\\\\nh_{2} &= f(h_{1}) \\\\\nh_{3} &= h_{2}W_{2} + b_{2}\n\\end{align}$$\n\nThe following code starts this sub-class for you, so that you can concentrate on `define_graph`. Be sure to make use of `self.dropout_prob`\n\nFor this problem, submit just your completed `TorchDeepNeuralClassifier`. You needn't evaluate it, though we assume you will be keen to do that!\n\nYou can use `test_TorchDeepNeuralClassifier` to ensure that your network has the intended structure.",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\n\nclass TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):\n def __init__(self, dropout_prob=0.7, **kwargs):\n self.dropout_prob = dropout_prob\n super().__init__(**kwargs)\n \n def define_graph(self):\n \"\"\"Complete this method!\n \n Returns\n -------\n an `nn.Module` instance, which can be a free-standing class you \n write yourself, as in `torch_rnn_classifier`, or the outpiut of \n `nn.Sequential`, as in `torch_shallow_neural_classifier`.\n \n \"\"\"\n return nn.Sequential(\n nn.Linear(self.input_dim, self.hidden_dim),\n nn.Dropout(self.dropout_prob),\n self.hidden_activation,\n nn.Linear(self.hidden_dim, self.n_classes_) \n )\n \n\n",
"_____no_output_____"
],
[
"def test_TorchDeepNeuralClassifier(TorchDeepNeuralClassifier):\n dropout_prob = 0.55\n assert hasattr(TorchDeepNeuralClassifier(), \"dropout_prob\"), \\\n \"TorchDeepNeuralClassifier must have an attribute `dropout_prob`.\"\n try:\n inst = TorchDeepNeuralClassifier(dropout_prob=dropout_prob)\n except TypeError:\n raise TypeError(\"TorchDeepNeuralClassifier must allow the user \"\n \"to set `dropout_prob` on initialization\")\n inst.input_dim = 10\n inst.n_classes_ = 5\n graph = inst.define_graph()\n assert len(graph) == 4, \\\n \"The graph should have 4 layers; yours has {}\".format(len(graph)) \n expected = {\n 0: 'Linear',\n 1: 'Dropout',\n 2: 'Tanh',\n 3: 'Linear'}\n for i, label in expected.items():\n name = graph[i].__class__.__name__\n assert label in name, \\\n \"The {} layer of the graph should be a {} layer; yours is {}\".format(i, label, name)\n assert graph[1].p == dropout_prob, \\\n \"The user's value for `dropout_prob` should be the value of `p` for the Dropout layer.\"",
"_____no_output_____"
],
[
"import sklearn\nlogreg = sklearn.linear_model.LogisticRegression()\nnet = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)\ndeep = TorchDeepNeuralClassifier(hidden_dim=50, max_iter =200, dropout_prob=0.55)\n\ndef run_evaluation():\n conditions = ['edge_disjoint', 'word_disjoint']\n functions = [vec_concatenate, hypothesis_only, vec_diff, vec_max]\n models = [logreg, net, deep]\n results = {}\n for condition in conditions:\n for function in functions:\n for model in models:\n experiment = nli.wordentail_experiment(\n train_data=wordentail_data[condition]['train'],\n assess_data=wordentail_data[condition]['dev'], \n model=model, \n vector_func=glove_vec,\n vector_combo_func=function)\n results[(condition, function.__name__, model.__class__.__name__)] = experiment['macro-F1']\n return results",
"_____no_output_____"
]
],
[
[
"### Your original system [3 points]\n\nThis is a simple dataset, but our focus on the 'word_disjoint' condition ensures that it's a challenging one, and there are lots of modeling strategies one might adopt. \n\nYou are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. They don't have to be completely different, though. For example, you might want to stick with the model but represent examples differently, or the reverse.\n\nKeep in mind that, for the bake-off evaluation, the 'edge_disjoint' portions of the data are off limits. You can, though, train on the combination of the 'word_disjoint' 'train' and 'dev' portions. You are free to use different pretrained word vectors and the like. Please do not introduce additional entailment datasets into your training data, though.\n\nPlease embed your code in this notebook so that we can rerun it.\n\nIn the cell below, please provide a brief technical description of your original system, so that the teaching team can gain an understanding of what it does. This will help us to understand your code and analyze all the submissions to identify patterns and strategies.",
"_____no_output_____"
]
],
[
[
"# Enter your system description in this cell.\n# Please do not remove this comment.\n\n\n#my approach takes an expanded 300d glove vector embeddingm creates a combine function\n#that concatenates u and v a after being weighted by their cross product and finally applies\n#a deeper neural network with RELU Activations\n\n \nglove_dim = 50\nglove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))\nGLOVE = utils.glove2dict(glove_src)\n\ndef glove_vec(w): \n \"\"\"Return `w`'s GloVe representation if available, else return \n a random vector.\"\"\"\n return GLOVE.get(w, randvec(w, n=glove_dim))\ndef cp_combine(u,v):\n cp = np.dot(u,v)\n u2 = cp * u\n v2 = cp * v\n return np.concatenate((u2,v2))\n\nclass TorchDeepNeuralClassifierCustom(TorchShallowNeuralClassifier):\n def __init__(self, dropout_prob=0.7, **kwargs):\n self.dropout_prob = dropout_prob\n self.hidden_activation = nn.ReLU()\n super().__init__(**kwargs)\n \n def define_graph(self):\n \"\"\"Complete this method!\n \n Returns\n -------\n an `nn.Module` instance, which can be a free-standing class you \n write yourself, as in `torch_rnn_classifier`, or the outpiut of \n `nn.Sequential`, as in `torch_shallow_neural_classifier`.\n \n \"\"\"\n return nn.Sequential(\n nn.Linear(self.input_dim, self.hidden_dim),\n nn.Dropout(self.dropout_prob),\n self.hidden_activation,\n nn.Linear(self.hidden_dim, self.hidden_dim),\n nn.Dropout(self.dropout_prob),\n self.hidden_activation,\n nn.Linear(self.hidden_dim, self.hidden_dim),\n nn.Dropout(self.dropout_prob),\n self.hidden_activation,\n nn.Linear(self.hidden_dim, self.n_classes_) \n )\n\ndeep = TorchDeepNeuralClassifierCustom(hidden_dim=300, max_iter=250, eta = 0.0005)\n\ndef run_evaluation():\n import sklearn\n conditions = ['edge_disjoint', 'word_disjoint']\n functions = [cp_combine]\n results = {}\n for condition in conditions:\n for function in functions:\n experiment = nli.wordentail_experiment(\n train_data=wordentail_data[condition]['train'],\n assess_data=wordentail_data[condition]['dev'], \n model=deep, \n vector_func=glove_vec,\n vector_combo_func=function)\n results[(condition, function.__name__)] = experiment['macro-F1']\n return results\n\nresults = run_evaluation()\nfor key, value in results.items():\n print(str(key) +\": \" +str(value))",
"Finished epoch 248 of 250; error is 3.6799179315567017"
]
],
[
[
"## Bake-off [1 point]\n\nThe goal of the bake-off is to achieve the highest macro-average F1 score on __word_disjoint__, on a test set that we will make available at the start of the bake-off. The announcement will go out on the discussion forum. To enter, you'll be asked to run `nli.bake_off_evaluation` on the output of your chosen `nli.wordentail_experiment` run. \n\nThe cells below this one constitute your bake-off entry.\n\nThe rules described in the [Your original system](#Your-original-system-[3-points]) homework question are also in effect for the bake-off.\n\nSystems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.\n\nLate entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.\n\nThe announcement will include the details on where to submit your entry.",
"_____no_output_____"
]
],
[
[
"# Enter your bake-off assessment code into this cell. \n# Please do not remove this comment.\n\n\ntest_data_filename = os.path.join(\n NLIDATA_HOME,\n \"bakeoff-wordentail-data\",\n \"nli_wordentail_bakeoff_data-test.json\")\n\nexperiment = nli.wordentail_experiment(\ntrain_data=wordentail_data['word_disjoint']['train'] + wordentail_data['word_disjoint']['dev']+wordentail_data['edge_disjoint']['train'] + wordentail_data['edge_disjoint']['dev'],\nassess_data=wordentail_data['word_disjoint']['dev'], \nmodel=deep, \nvector_func=glove_vec,\nvector_combo_func=cp_combine)\n\nfinal = nli.bake_off_evaluation(\n experiment,\n test_data_filename)\n",
"Finished epoch 250 of 250; error is 7.6396093219518665"
],
[
"# On an otherwise blank line in this cell, please enter\n# your macro-avg f1 value as reported by the code above. \n# Please enter only a number between 0 and 1 inclusive.\n# Please do not remove this comment.\n\n0.837\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
e72c6538a32dcc48212cb7898e8b41258378d647 | 345,357 | ipynb | Jupyter Notebook | examples/01-learning-lenet.ipynb | DuHao10086/skin-caffe | 7b040e6436198a233d55c3a9c47d616dfc7118cd | [
"BSD-2-Clause"
] | null | null | null | examples/01-learning-lenet.ipynb | DuHao10086/skin-caffe | 7b040e6436198a233d55c3a9c47d616dfc7118cd | [
"BSD-2-Clause"
] | null | null | null | examples/01-learning-lenet.ipynb | DuHao10086/skin-caffe | 7b040e6436198a233d55c3a9c47d616dfc7118cd | [
"BSD-2-Clause"
] | null | null | null | 316.260989 | 33,484 | 0.920204 | [
[
[
"# Python solving with LeNet\n\nIn this example, we'll explore learning with Caffe in Python, using the fully-exposed `Solver` interface.",
"_____no_output_____"
]
],
[
[
"import os\nos.chdir('..')",
"_____no_output_____"
],
[
"import sys\nsys.path.insert(0, './python')\nimport caffe\n\nfrom pylab import *\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"We'll be running the provided LeNet example (make sure you've downloaded the data and created the databases, as below).",
"_____no_output_____"
]
],
[
[
"# Download and prepare data\n!data/mnist/get_mnist.sh\n!examples/mnist/create_mnist.sh",
"Downloading...\n--2015-06-30 14:41:56-- http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\nResolving yann.lecun.com... 128.122.47.89\nConnecting to yann.lecun.com|128.122.47.89|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 9912422 (9.5M) [application/x-gzip]\nSaving to: 'train-images-idx3-ubyte.gz'\n\ntrain-images-idx3-u 100%[=====================>] 9.45M 146KB/s in 57s \n\n2015-06-30 14:42:53 (171 KB/s) - 'train-images-idx3-ubyte.gz' saved [9912422/9912422]\n\n--2015-06-30 14:42:53-- http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\nResolving yann.lecun.com... 128.122.47.89\nConnecting to yann.lecun.com|128.122.47.89|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 28881 (28K) [application/x-gzip]\nSaving to: 'train-labels-idx1-ubyte.gz'\n\ntrain-labels-idx1-u 100%[=====================>] 28.20K 107KB/s in 0.3s \n\n2015-06-30 14:42:53 (107 KB/s) - 'train-labels-idx1-ubyte.gz' saved [28881/28881]\n\n--2015-06-30 14:42:53-- http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\nResolving yann.lecun.com... 128.122.47.89\nConnecting to yann.lecun.com|128.122.47.89|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 1648877 (1.6M) [application/x-gzip]\nSaving to: 't10k-images-idx3-ubyte.gz'\n\nt10k-images-idx3-ub 100%[=====================>] 1.57M 205KB/s in 8.2s \n\n2015-06-30 14:43:02 (197 KB/s) - 't10k-images-idx3-ubyte.gz' saved [1648877/1648877]\n\n--2015-06-30 14:43:02-- http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\nResolving yann.lecun.com... 128.122.47.89\nConnecting to yann.lecun.com|128.122.47.89|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 4542 (4.4K) [application/x-gzip]\nSaving to: 't10k-labels-idx1-ubyte.gz'\n\nt10k-labels-idx1-ub 100%[=====================>] 4.44K 26.9KB/s in 0.2s \n\n2015-06-30 14:43:02 (26.9 KB/s) - 't10k-labels-idx1-ubyte.gz' saved [4542/4542]\n\nUnzipping...\nDone.\nCreating lmdb...\nDone.\n"
]
],
[
[
"We need two external files to help out:\n* the net prototxt, defining the architecture and pointing to the train/test data\n* the solver prototxt, defining the learning parameters\n\nWe start with the net. We'll write the net in a succinct and natural way as Python code that serializes to Caffe's protobuf model format.\n\nThis network expects to read from pregenerated LMDBs, but reading directly from `ndarray`s is also possible using `MemoryDataLayer`.",
"_____no_output_____"
]
],
[
[
"from caffe import layers as L\nfrom caffe import params as P\n\ndef lenet(lmdb, batch_size):\n # our version of LeNet: a series of linear and simple nonlinear transformations\n n = caffe.NetSpec()\n n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,\n transform_param=dict(scale=1./255), ntop=2)\n n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))\n n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))\n n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)\n n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))\n n.relu1 = L.ReLU(n.ip1, in_place=True)\n n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))\n n.loss = L.SoftmaxWithLoss(n.ip2, n.label)\n return n.to_proto()\n \nwith open('examples/mnist/lenet_auto_train.prototxt', 'w') as f:\n f.write(str(lenet('examples/mnist/mnist_train_lmdb', 64)))\n \nwith open('examples/mnist/lenet_auto_test.prototxt', 'w') as f:\n f.write(str(lenet('examples/mnist/mnist_test_lmdb', 100)))",
"_____no_output_____"
]
],
[
[
"The net has been written to disk in more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly. Let's take a look at the train net.",
"_____no_output_____"
]
],
[
[
"!cat examples/mnist/lenet_auto_train.prototxt",
"layer {\r\n name: \"data\"\r\n type: \"Data\"\r\n top: \"data\"\r\n top: \"label\"\r\n transform_param {\r\n scale: 0.00392156862745\r\n }\r\n data_param {\r\n source: \"examples/mnist/mnist_train_lmdb\"\r\n batch_size: 64\r\n backend: LMDB\r\n }\r\n}\r\nlayer {\r\n name: \"conv1\"\r\n type: \"Convolution\"\r\n bottom: \"data\"\r\n top: \"conv1\"\r\n convolution_param {\r\n num_output: 20\r\n kernel_size: 5\r\n weight_filler {\r\n type: \"xavier\"\r\n }\r\n }\r\n}\r\nlayer {\r\n name: \"pool1\"\r\n type: \"Pooling\"\r\n bottom: \"conv1\"\r\n top: \"pool1\"\r\n pooling_param {\r\n pool: MAX\r\n kernel_size: 2\r\n stride: 2\r\n }\r\n}\r\nlayer {\r\n name: \"conv2\"\r\n type: \"Convolution\"\r\n bottom: \"pool1\"\r\n top: \"conv2\"\r\n convolution_param {\r\n num_output: 50\r\n kernel_size: 5\r\n weight_filler {\r\n type: \"xavier\"\r\n }\r\n }\r\n}\r\nlayer {\r\n name: \"pool2\"\r\n type: \"Pooling\"\r\n bottom: \"conv2\"\r\n top: \"pool2\"\r\n pooling_param {\r\n pool: MAX\r\n kernel_size: 2\r\n stride: 2\r\n }\r\n}\r\nlayer {\r\n name: \"ip1\"\r\n type: \"InnerProduct\"\r\n bottom: \"pool2\"\r\n top: \"ip1\"\r\n inner_product_param {\r\n num_output: 500\r\n weight_filler {\r\n type: \"xavier\"\r\n }\r\n }\r\n}\r\nlayer {\r\n name: \"relu1\"\r\n type: \"ReLU\"\r\n bottom: \"ip1\"\r\n top: \"ip1\"\r\n}\r\nlayer {\r\n name: \"ip2\"\r\n type: \"InnerProduct\"\r\n bottom: \"ip1\"\r\n top: \"ip2\"\r\n inner_product_param {\r\n num_output: 10\r\n weight_filler {\r\n type: \"xavier\"\r\n }\r\n }\r\n}\r\nlayer {\r\n name: \"loss\"\r\n type: \"SoftmaxWithLoss\"\r\n bottom: \"ip2\"\r\n bottom: \"label\"\r\n top: \"loss\"\r\n}\r\n"
]
],
[
[
"Now let's see the learning parameters, which are also written as a `prototxt` file. We're using SGD with momentum, weight decay, and a specific learning rate schedule.",
"_____no_output_____"
]
],
[
[
"!cat examples/mnist/lenet_auto_solver.prototxt",
"# The train/test net protocol buffer definition\r\ntrain_net: \"examples/mnist/lenet_auto_train.prototxt\"\r\ntest_net: \"examples/mnist/lenet_auto_test.prototxt\"\r\n# test_iter specifies how many forward passes the test should carry out.\r\n# In the case of MNIST, we have test batch size 100 and 100 test iterations,\r\n# covering the full 10,000 testing images.\r\ntest_iter: 100\r\n# Carry out testing every 500 training iterations.\r\ntest_interval: 500\r\n# The base learning rate, momentum and the weight decay of the network.\r\nbase_lr: 0.01\r\nmomentum: 0.9\r\nweight_decay: 0.0005\r\n# The learning rate policy\r\nlr_policy: \"inv\"\r\ngamma: 0.0001\r\npower: 0.75\r\n# Display every 100 iterations\r\ndisplay: 100\r\n# The maximum number of iterations\r\nmax_iter: 10000\r\n# snapshot intermediate results\r\nsnapshot: 5000\r\nsnapshot_prefix: \"examples/mnist/lenet\"\r\n"
]
],
[
[
"Let's pick a device and load the solver. We'll use SGD (with momentum), but Adagrad and Nesterov's accelerated gradient are also available.",
"_____no_output_____"
]
],
[
[
"caffe.set_device(0)\ncaffe.set_mode_gpu()\nsolver = caffe.SGDSolver('examples/mnist/lenet_auto_solver.prototxt')",
"_____no_output_____"
]
],
[
[
"To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later).",
"_____no_output_____"
]
],
[
[
"# each output is (batch size, feature dim, spatial dim)\n[(k, v.data.shape) for k, v in solver.net.blobs.items()]",
"_____no_output_____"
],
[
"# just print the weight sizes (not biases)\n[(k, v[0].data.shape) for k, v in solver.net.params.items()]",
"_____no_output_____"
]
],
[
[
"Before taking off, let's check that everything is loaded as we expect. We'll run a forward pass on the train and test nets and check that they contain our data.",
"_____no_output_____"
]
],
[
[
"solver.net.forward() # train net\nsolver.test_nets[0].forward() # test net (there can be more than one)",
"_____no_output_____"
],
[
"# we use a little trick to tile the first eight images\nimshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')\nprint solver.net.blobs['label'].data[:8]",
"[ 5. 0. 4. 1. 9. 2. 1. 3.]\n"
],
[
"imshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')\nprint solver.test_nets[0].blobs['label'].data[:8]",
"[ 7. 2. 1. 0. 4. 1. 4. 9.]\n"
]
],
[
[
"Both train and test nets seem to be loading data, and to have correct labels.\n\nLet's take one step of (minibatch) SGD and see what happens.",
"_____no_output_____"
]
],
[
[
"solver.step(1)",
"_____no_output_____"
]
],
[
[
"Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \\times 5$ grid of $5 \\times 5$ filters.",
"_____no_output_____"
]
],
[
[
"imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)\n .transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray')",
"_____no_output_____"
]
],
[
[
"Something is happening. Let's run the net for a while, keeping track of a few things as it goes.\nNote that this process will be the same as if training through the `caffe` binary. In particular:\n* logging will continue to happen as normal\n* snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)\n* testing will happen at the interval specified (here, every 500 iterations)\n\nSince we have control of the loop in Python, we're free to compute additional things as we go, as we show below. We can do many other things as well, for example:\n* write a custom stopping criterion\n* change the solving process by updating the net in the loop",
"_____no_output_____"
]
],
[
[
"%%time\nniter = 200\ntest_interval = 25\n# losses will also be stored in the log\ntrain_loss = zeros(niter)\ntest_acc = zeros(int(np.ceil(niter / test_interval)))\noutput = zeros((niter, 8, 10))\n\n# the main solver loop\nfor it in range(niter):\n solver.step(1) # SGD by Caffe\n \n # store the train loss\n train_loss[it] = solver.net.blobs['loss'].data\n \n # store the output on the first test batch\n # (start the forward pass at conv1 to avoid loading new data)\n solver.test_nets[0].forward(start='conv1')\n output[it] = solver.test_nets[0].blobs['ip2'].data[:8]\n \n # run a full test every so often\n # (Caffe can also do this for us and write to a log, but we show here\n # how to do it directly in Python, where more complicated things are easier.)\n if it % test_interval == 0:\n print 'Iteration', it, 'testing...'\n correct = 0\n for test_it in range(100):\n solver.test_nets[0].forward()\n correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)\n == solver.test_nets[0].blobs['label'].data)\n test_acc[it // test_interval] = correct / 1e4",
"Iteration 0 testing...\nIteration 25 testing...\nIteration 50 testing...\nIteration 75 testing...\nIteration 100 testing...\nIteration 125 testing...\nIteration 150 testing...\nIteration 175 testing...\nCPU times: user 12.3 s, sys: 3.96 s, total: 16.2 s\nWall time: 15.7 s\n"
]
],
[
[
"Let's plot the train loss and test accuracy.",
"_____no_output_____"
]
],
[
[
"_, ax1 = subplots()\nax2 = ax1.twinx()\nax1.plot(arange(niter), train_loss)\nax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')\nax1.set_xlabel('iteration')\nax1.set_ylabel('train loss')\nax2.set_ylabel('test accuracy')",
"_____no_output_____"
]
],
[
[
"The loss seems to have dropped quickly and coverged (except for stochasticity), while the accuracy rose correspondingly. Hooray!\n\nSince we saved the results on the first test batch, we can watch how our prediction scores evolved. We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence.",
"_____no_output_____"
]
],
[
[
"for i in range(8):\n figure(figsize=(2, 2))\n imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')\n figure(figsize=(10, 2))\n imshow(output[:50, i].T, interpolation='nearest', cmap='gray')\n xlabel('iteration')\n ylabel('label')",
"_____no_output_____"
]
],
[
[
"We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted \"9\" that's (understandably) most confused with \"4\".\n\nNote that these are the \"raw\" output scores rather than the softmax-computed probability vectors. The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits).",
"_____no_output_____"
]
],
[
[
"for i in range(8):\n figure(figsize=(2, 2))\n imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')\n figure(figsize=(10, 2))\n imshow(exp(output[:50, i].T) / exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')\n xlabel('iteration')\n ylabel('label')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72c661ffb44d288304060f051a7a554e7aba120 | 68,803 | ipynb | Jupyter Notebook | deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb | certara-ShengnanHuang/machine-learning | d21dfbeabf2876ffe49fcef444ca4516c4d36df0 | [
"MIT"
] | 2,104 | 2016-04-15T13:35:55.000Z | 2022-03-28T10:39:51.000Z | deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb | certara-ShengnanHuang/machine-learning | d21dfbeabf2876ffe49fcef444ca4516c4d36df0 | [
"MIT"
] | 10 | 2017-04-07T14:25:23.000Z | 2021-05-18T03:16:15.000Z | deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb | certara-ShengnanHuang/machine-learning | d21dfbeabf2876ffe49fcef444ca4516c4d36df0 | [
"MIT"
] | 539 | 2015-12-10T04:23:44.000Z | 2022-03-31T07:15:28.000Z | 34.678931 | 2,503 | 0.539671 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Seq2Seq\" data-toc-modified-id=\"Seq2Seq-1\"><span class=\"toc-item-num\">1 </span>Seq2Seq</a></span><ul class=\"toc-item\"><li><span><a href=\"#Seq2Seq-Introduction\" data-toc-modified-id=\"Seq2Seq-Introduction-1.1\"><span class=\"toc-item-num\">1.1 </span>Seq2Seq Introduction</a></span></li><li><span><a href=\"#Data-Preparation\" data-toc-modified-id=\"Data-Preparation-1.2\"><span class=\"toc-item-num\">1.2 </span>Data Preparation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Declaring-Fields\" data-toc-modified-id=\"Declaring-Fields-1.2.1\"><span class=\"toc-item-num\">1.2.1 </span>Declaring Fields</a></span></li><li><span><a href=\"#Constructing-Dataset\" data-toc-modified-id=\"Constructing-Dataset-1.2.2\"><span class=\"toc-item-num\">1.2.2 </span>Constructing Dataset</a></span></li><li><span><a href=\"#Constructing-Iterator\" data-toc-modified-id=\"Constructing-Iterator-1.2.3\"><span class=\"toc-item-num\">1.2.3 </span>Constructing Iterator</a></span></li></ul></li><li><span><a href=\"#Seq2Seq-Implementation\" data-toc-modified-id=\"Seq2Seq-Implementation-1.3\"><span class=\"toc-item-num\">1.3 </span>Seq2Seq Implementation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Encoder-Module\" data-toc-modified-id=\"Encoder-Module-1.3.1\"><span class=\"toc-item-num\">1.3.1 </span>Encoder Module</a></span></li><li><span><a href=\"#Decoder-Module\" data-toc-modified-id=\"Decoder-Module-1.3.2\"><span class=\"toc-item-num\">1.3.2 </span>Decoder Module</a></span></li><li><span><a href=\"#Seq2Seq-Module\" data-toc-modified-id=\"Seq2Seq-Module-1.3.3\"><span class=\"toc-item-num\">1.3.3 </span>Seq2Seq Module</a></span></li><li><span><a href=\"#Training-Seq2Seq\" data-toc-modified-id=\"Training-Seq2Seq-1.3.4\"><span class=\"toc-item-num\">1.3.4 </span>Training Seq2Seq</a></span></li><li><span><a href=\"#Evaluating-Seq2Seq\" data-toc-modified-id=\"Evaluating-Seq2Seq-1.3.5\"><span class=\"toc-item-num\">1.3.5 </span>Evaluating Seq2Seq</a></span></li></ul></li><li><span><a href=\"#Summary\" data-toc-modified-id=\"Summary-1.4\"><span class=\"toc-item-num\">1.4 </span>Summary</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"_____no_output_____"
]
],
[
[
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(css_style='custom2.css', plot_style=False)",
"_____no_output_____"
],
[
"os.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport os\nimport math\nimport time\nimport spacy\nimport torch\nimport random\nimport numpy as np\nimport torch.nn as nn\nimport torch.optim as optim\nfrom typing import List\nfrom torchtext.datasets import Multi30k\nfrom torchtext.data import Field, BucketIterator\n\n%watermark -a 'Ethen' -d -t -v -p numpy,torch,torchtext,spacy",
"Ethen 2020-01-07 21:44:32 \n\nCPython 3.6.4\nIPython 7.9.0\n\nnumpy 1.16.5\ntorch 1.3.1\ntorchtext 0.4.0\nspacy 2.1.6\n"
]
],
[
[
"# Seq2Seq",
"_____no_output_____"
],
[
"**Seq2Seq (Sequence to Sequence)** is a many to many network where two neural networks, one encoder and one decoder work together to transform one sequence to another. The core highlight of this method is having no restrictions on the length of the source and target sequence. At a high-level, the way it works is:\n\n- The encoder network condenses an input sequence into a vector, this vector is a smaller dimensional representation and is often referred to as the context/thought vector. This thought vector is served as an abstract representation for the entire input sequence.\n- The decoder network takes in that thought vector and unfolds that vector into the output sequence.\n\nThe main use case includes:\n\n- chatbots\n- text summarization\n- speech recognition\n- image captioning\n- machine translation\n\nIn this notebook, we'll be implementing the seq2seq model ourselves using Pytorch and use it in the context of German to English translations.",
"_____no_output_____"
],
[
"## Seq2Seq Introduction",
"_____no_output_____"
],
[
"The following sections are heavily \"borrowed\" from the wonderful tutorial on this topic listed below.\n\n- [Jupyter Notebook: Sequence to Sequence Learning with Neural Networks](https://nbviewer.jupyter.org/github/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb)\n\nSome personal preference modifications have been made.\n\n\n<img src=\"img/1_seq2seq.png\" width=\"70%\" height=\"70%\">\n\nThe above image shows an example translation. The input/source sentence, \"guten morgen\", is input into the encoder (green) one word at a time. We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder is both the current word, $x_t$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The can be represented as a function of both of $x_t$ and $h_{t-1}$:\n\n$$h_t = \\text{Encoder}(x_t, h_{t-1})$$\n\nWe're using the term encoder loosely here, in practice, it can be any type of architecture, the most common ones being RNN-type network such as *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit). \n\nHere, we have $X = \\{x_1, x_2, ..., x_T\\}$, where $x_1 = \\text{<sos>}, x_2 = \\text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.\n\nOnce the final word, $x_T$, has been passed into the encoder, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.\n\nNow we have our context vector, $z$, we can start decoding it to get the target sentence, \"good morning\". Again, we append the start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder (blue) is the current word, $y_t$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:\n\n$$s_t = \\text{Decoder}(y_t, s_{t-1})$$\n\nIn the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\\hat{y}_t$. \n\n$$\\hat{y}_t = f(s_t)$$\n\nThe words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\\hat{y}_{t-1}$. This is called **teacher forcing**, which we'll later see in action.\n\nWhen training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference (i.e. 
real world usage) it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.\n\nOnce we have our predicted target sentence, $\\hat{Y} = \\{ \\hat{y}_1, \\hat{y}_2, ..., \\hat{y}_T \\}$, we compare it against our actual target sentence, $Y = \\{ y_1, y_2, ..., y_T \\}$, to calculate our loss. We then use this loss to update all of the parameters in our model.",
"_____no_output_____"
],
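[
"# a minimal sketch of the teacher forcing coin flip described above\n# (illustrative; the full decoder loop appears later in this series)\nteacher_forcing_ratio = 0.5\nuse_teacher_forcing = random.random() < teacher_forcing_ratio\nuse_teacher_forcing",
"_____no_output_____"
],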
[
"## Data Preparation",
"_____no_output_____"
],
[
"We'll be coding up the models in PyTorch and using TorchText to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data. We will introduce the functionalities some these libraries along the way as well.",
"_____no_output_____"
]
],
[
[
"SEED = 2222\nrandom.seed(SEED)\ntorch.manual_seed(SEED)",
"_____no_output_____"
]
],
[
[
"The next two code chunks:\n\n- Downloads the spacy model for the German and English language.\n- Create the tokenizer functions, which will take in the sentence as the input and return the sentence as a list of tokens. These functions can then be passed to torchtext.",
"_____no_output_____"
]
],
[
[
"# !python -m spacy download de\n# !python -m spacy download en",
"_____no_output_____"
],
[
"# the link below contains explanation of how spacy's tokenization works\n# https://spacy.io/usage/spacy-101#annotations-token\nspacy_de = spacy.load('de_core_news_sm')\nspacy_en = spacy.load('en_core_web_sm')\n\n\ndef tokenize_de(text: str) -> List[str]:\n return [tok.text for tok in spacy_de.tokenizer(text)][::-1]\n\ndef tokenize_en(text: str) -> List[str]:\n return [tok.text for tok in spacy_en.tokenizer(text)]",
"_____no_output_____"
],
[
"text = \"I don't like apple.\"\ntokenize_en(text)",
"_____no_output_____"
]
],
[
[
"The tokenizer is language specific, e.g. it knows that in the English language don't should be tokenized into do not (n't).\n\nAnother thing to note is that **the order of the source sentence is reversed during the tokenization process**. The rationale behind things comes from the original seq2seq paper where they identified that this trick improved the result of their model.\n\n> Normally, when we concatenate a source sentence with a target sentence, each word in the source sentence is far from its corresponding word in the target sentence. By reversing the source sentence, the first few words in the source sentence now becomes very close to the first few words in the target sentence, thus the model would have lesser issue establishing communication between the source and target sentence.\n> Although, the average distance between words in the source and target language remains the same during this process, however, it was shown that the model learned much better even on later parts of the sentence.",
"_____no_output_____"
],
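[
"# the reversal trick in isolation (an illustrative check, not part of the\n# original pipeline): German tokens come back reversed, English ones do not\nprint(tokenize_de('guten morgen'))\nprint(tokenize_en('good morning'))",
"_____no_output_____"
],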
[
"### Declaring Fields",
"_____no_output_____"
],
[
"Moving on, we will begin leveraging torchtext's functionality. The first once is [`Field`](https://pytorch.org/text/data.html#field), which is where we specify how we wish to preprocess our text data for a certain field.\n\nHere, we set the `tokenize` argument to the correct tokenization function for the source and target field, with German being the source field and English being the target field. The field also appends the \"start of sequence\" and \"end of sequence\" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase. The docstring of the `Field` object is pretty well-written, please refer to it to see other arguments that it takes in.",
"_____no_output_____"
]
],
[
[
"source = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True)\ntarget = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)",
"_____no_output_____"
]
],
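[
[
"To see what a `Field` does to raw text before numericalization, we can call its `preprocess` method (assuming the legacy `torchtext.data.Field` API used here). It applies the tokenizer and lowercasing, but not the `<sos>`/`<eos>` tokens, which are only added later when batches are built:\n\n```python\nsource.preprocess('Zwei Männer stehen am Herd.')\n# tokenized, lowercased and reversed, without <sos>/<eos>\n```",
"_____no_output_____"
]
],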
[
[
"### Constructing Dataset",
"_____no_output_____"
],
[
"We've defined the logic of processing our raw text data, now we need to tell the fields what data it should work on. This is where `Dataset` comes in. The dataset we'll be using is the [Multi30k dataset](https://pytorch.org/text/datasets.html#multi30k). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence. Torchtext comes with a capability for us to download and load the training, validation and test data.\n\n`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.",
"_____no_output_____"
]
],
[
[
"train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(source, target))\nprint(f\"Number of training examples: {len(train_data.examples)}\")\nprint(f\"Number of validation examples: {len(valid_data.examples)}\")\nprint(f\"Number of testing examples: {len(test_data.examples)}\")",
"Number of training examples: 29000\nNumber of validation examples: 1014\nNumber of testing examples: 1000\n"
]
],
[
[
"Upon loading the dataset, we can indexed and iterate over the `Dataset` like a normal list. Each element in the dataset bundles the attributes of a single record for us. We can index our dataset like a list and then access the `.src` and `.trg` attribute to take a look at the tokenized source and target sentence.",
"_____no_output_____"
]
],
[
[
"# equivalent, albeit more verbiage train_data.examples[0].src\ntrain_data[0].src",
"_____no_output_____"
],
[
"train_data[0].trg",
"_____no_output_____"
]
],
[
[
"The next missing piece is to build the vocabulary for the source and target languages. That way we can convert our tokenized tokens into integers so that they can be fed into downstream models. Constructing the vocabulary and word to integer mapping is done by calling the `build_vocab` method of a `Field` on a dataset. This adds the `vocab` attribute to the field.\n\nThe vocabularies of the source and target languages are distinct. Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token (we can customize this in the Field earlier if we like).\n\nIt is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents \"information leakage\" into our model, giving us artificially inflated validation/test scores.",
"_____no_output_____"
]
],
[
[
"source.build_vocab(train_data, min_freq=2)\ntarget.build_vocab(train_data, min_freq=2)\nprint(f\"Unique tokens in source (de) vocabulary: {len(source.vocab)}\")\nprint(f\"Unique tokens in target (en) vocabulary: {len(target.vocab)}\")",
"Unique tokens in source (de) vocabulary: 7855\nUnique tokens in target (en) vocabulary: 5893\n"
]
],
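[
[
"The learned vocabulary gives us the token-to-index mapping via `stoi` and the reverse via `itos`. The exact indices below are illustrative, since they depend on token frequencies, but the first few slots are reserved for the special tokens:\n\n```python\nprint(target.vocab.itos[:4])  # e.g. ['<unk>', '<pad>', '<sos>', '<eos>']\nprint(target.vocab.stoi['<eos>'])\nprint(target.vocab.stoi['qwerty'])  # out-of-vocabulary words map to the <unk> index\n```",
"_____no_output_____"
]
],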
[
[
"### Constructing Iterator",
"_____no_output_____"
],
[
"The final step of preparing the data is to create the iterators. Very similar to `DataLoader` in the standard pytorch package, `Iterator` in torchtext converts our data into batches, so that they can be fed into the model. These can be iterated on to return a batch of data which will have a `src` and `trg` attribute (PyTorch tensors containing a batch of numericalized source and target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of tokens to a sequence of corresponding indices, where the mapping between the tokens and indices comes from the learned vocabulary. \n\nWhen we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchtext iterators handle this for us! `BucketIterator` is a extremely useful torchtext feature. It automatically shuffles and buckets the input sequences into sequences of similar length, this minimizes the amount of padding that we need to perform.",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 128\n\n# pytorch boilerplate that determines whether a GPU is present or not,\n# this determines whether our dataset or model can to moved to a GPU\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n\n# create batches out of the dataset and sends them to the appropriate device\ntrain_iterator, valid_iterator, test_iterator = BucketIterator.splits(\n (train_data, valid_data, test_data), batch_size=BATCH_SIZE, device=device)",
"_____no_output_____"
],
[
"# pretend that we're iterating over the iterator and print out the print element\ntest_batch = next(iter(test_iterator))\ntest_batch",
"_____no_output_____"
],
[
"test_batch.src",
"_____no_output_____"
]
],
[
[
"We can list out the first batch, we see each element of the iterator is a `Batch` object, similar to element of a `Dataset`, we can access the fields via its attributes. The next important thing to note that it is of size [sentence length, batch size], and the longest sentence in the first batch of the source language has a length of 10.",
"_____no_output_____"
],
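[
"As a sanity check on the numericalization, we can map one column of the batch back to tokens (a quick sketch; the sampled sentence varies between runs):\n\n```python\nfirst_sentence = test_batch.src[:, 0]  # all time steps of the first sentence in the batch\nprint([source.vocab.itos[idx] for idx in first_sentence])\n```",
"_____no_output_____"
],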
[
"## Seq2Seq Implementation",
"_____no_output_____"
]
],
[
[
"# adjustable parameters\nINPUT_DIM = len(source.vocab)\nOUTPUT_DIM = len(target.vocab)\nENC_EMB_DIM = 256\nDEC_EMB_DIM = 256\nHID_DIM = 512\nN_LAYERS = 2\nENC_DROPOUT = 0.5\nDEC_DROPOUT = 0.5",
"_____no_output_____"
]
],
[
[
"To define our seq2seq model, we first specify the encoder and decoder separately.",
"_____no_output_____"
],
[
"### Encoder Module",
"_____no_output_____"
]
],
[
[
"class Encoder(nn.Module):\n \"\"\"\n Input :\n - source batch\n Layer : \n source batch -> Embedding -> LSTM\n Output :\n - LSTM hidden state\n - LSTM cell state\n\n Parmeters\n ---------\n input_dim : int\n Input dimension, should equal to the source vocab size.\n \n emb_dim : int\n Embedding layer's dimension.\n \n hid_dim : int\n LSTM Hidden/Cell state's dimension.\n \n n_layers : int\n Number of LSTM layers.\n \n dropout : float\n Dropout for the LSTM layer.\n \"\"\"\n\n def __init__(self, input_dim: int, emb_dim: int, hid_dim: int, n_layers: int, dropout: float):\n super().__init__()\n self.emb_dim = emb_dim\n self.hid_dim = hid_dim\n self.input_dim = input_dim\n self.n_layers = n_layers\n self.dropout = dropout\n\n self.embedding = nn.Embedding(input_dim, emb_dim)\n self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)\n\n def forward(self, src_batch: torch.LongTensor):\n \"\"\"\n\n Parameters\n ----------\n src_batch : 2d torch.LongTensor\n Batched tokenized source sentence of shape [sent len, batch size].\n\n Returns\n -------\n hidden, cell : 3d torch.LongTensor\n Hidden and cell state of the LSTM layer. Each state's shape\n [n layers * n directions, batch size, hidden dim]\n \"\"\"\n embedded = self.embedding(src_batch) # [sent len, batch size, emb dim]\n outputs, (hidden, cell) = self.rnn(embedded)\n # outputs -> [sent len, batch size, hidden dim * n directions]\n return hidden, cell",
"_____no_output_____"
],
[
"encoder = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT).to(device)\nhidden, cell = encoder(test_batch.src)\nhidden.shape, cell.shape",
"_____no_output_____"
]
],
[
[
"### Decoder Module",
"_____no_output_____"
],
[
"The decoder accept a batch of input tokens, previous hidden states and previous cell states. Note that in the decoder module, we are only decoding one token at a time, the input tokens will always have a sequence length of 1. This is different from the encoder module where we encode the entire source sentence all at once.",
"_____no_output_____"
]
],
[
[
"class Decoder(nn.Module):\n \"\"\"\n Input :\n - first token in the target batch\n - LSTM hidden state from the encoder\n - LSTM cell state from the encoder\n Layer :\n target batch -> Embedding -- \n |\n encoder hidden state ------|--> LSTM -> Linear\n |\n encoder cell state -------\n \n Output :\n - prediction\n - LSTM hidden state\n - LSTM cell state\n\n Parmeters\n ---------\n output : int\n Output dimension, should equal to the target vocab size.\n \n emb_dim : int\n Embedding layer's dimension.\n \n hid_dim : int\n LSTM Hidden/Cell state's dimension.\n \n n_layers : int\n Number of LSTM layers.\n \n dropout : float\n Dropout for the LSTM layer.\n \"\"\"\n\n def __init__(self, output_dim: int, emb_dim: int, hid_dim: int, n_layers: int, dropout: float):\n super().__init__()\n self.emb_dim = emb_dim\n self.hid_dim = hid_dim\n self.output_dim = output_dim\n self.n_layers = n_layers\n self.dropout = dropout\n\n self.embedding = nn.Embedding(output_dim, emb_dim)\n self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)\n self.out = nn.Linear(hid_dim, output_dim)\n\n def forward(self, trg: torch.LongTensor, hidden: torch.FloatTensor, cell: torch.FloatTensor):\n \"\"\"\n\n Parameters\n ----------\n trg : 1d torch.LongTensor\n Batched tokenized source sentence of shape [batch size].\n \n hidden, cell : 3d torch.FloatTensor\n Hidden and cell state of the LSTM layer. Each state's shape\n [n layers * n directions, batch size, hidden dim]\n\n Returns\n -------\n prediction : 2d torch.LongTensor\n For each token in the batch, the predicted target vobulary.\n Shape [batch size, output dim]\n\n hidden, cell : 3d torch.FloatTensor\n Hidden and cell state of the LSTM layer. Each state's shape\n [n layers * n directions, batch size, hidden dim]\n \"\"\"\n # [1, batch size, emb dim], the 1 serves as sent len\n embedded = self.embedding(trg.unsqueeze(0))\n outputs, (hidden, cell) = self.rnn(embedded, (hidden, cell))\n prediction = self.out(outputs.squeeze(0))\n return prediction, hidden, cell",
"_____no_output_____"
],
[
"decoder = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT).to(device)\n\n# notice that we are not passing the entire the .trg\nprediction, hidden, cell = decoder(test_batch.trg[0], hidden, cell)\nprediction.shape, hidden.shape, cell.shape",
"_____no_output_____"
]
],
[
[
"### Seq2Seq Module",
"_____no_output_____"
],
[
"For the final part of the implementation, we'll implement the seq2seq model. This will handle: \n\n- receiving the input/source sentence\n- using the encoder to produce the context vectors \n- using the decoder to produce the predicted output/target sentence\n\nThe `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).\n\nFor this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case, as we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we do have a different number of layers we will need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the decoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? etc.\n\nOur `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded. With probability equal to the teaching forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence. Note that the teacher forcing ratio is only done during training and should be shut off during evaluation.\n\nThe first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\\hat{Y}$.\n\nWe then feed the input/source sentence, $X$/`src`, into the encoder and receive our final hidden and cell states.\n\nThe first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our target field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. During each iteration of the loop, we:\n- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder\n- receive a prediction, next hidden state and next cell state ($\\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder\n- place our prediction, $\\hat{y}_{t+1}$/`output` in our tensor of predictions, $\\hat{Y}$/`outputs`\n- decide if we are going to \"teacher force\" or not\n - if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`\n - if we don't, the next `input` is the predicted next token in the sequence, $\\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor\n \nOnce we've made all of our predictions, we return our tensor full of predictions, $\\hat{Y}$/`outputs`.\n\n**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. 
So our `trg` and `outputs` look something like:\n\n$$\\begin{align*}\n\\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [0, &\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nLater on when we calculate the loss, we cut off the first element of each tensor to get:\n\n$$\\begin{align*}\n\\text{trg} = [&y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [&\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nAll of this should make more sense after we look at the code in the next few section. Feel free to check out the discussion in these two github issues for some more context with this topic. [issue-45](https://github.com/bentrevett/pytorch-seq2seq/issues/45) and [issue-46](https://github.com/bentrevett/pytorch-seq2seq/issues/46)",
"_____no_output_____"
]
],
[
[
"class Seq2Seq(nn.Module):\n def __init__(self, encoder: Encoder, decoder: Decoder, device: torch.device):\n super().__init__()\n self.encoder = encoder\n self.decoder = decoder\n self.device = device\n\n assert encoder.hid_dim == decoder.hid_dim, \\\n 'Hidden dimensions of encoder and decoder must be equal!'\n assert encoder.n_layers == decoder.n_layers, \\\n 'Encoder and decoder must have equal number of layers!'\n\n def forward(self, src_batch: torch.LongTensor, trg_batch: torch.LongTensor,\n teacher_forcing_ratio: float=0.5):\n\n max_len, batch_size = trg_batch.shape\n trg_vocab_size = self.decoder.output_dim\n\n # tensor to store decoder's output\n outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)\n\n # last hidden & cell state of the encoder is used as the decoder's initial hidden state\n hidden, cell = self.encoder(src_batch)\n\n trg = trg_batch[0]\n for i in range(1, max_len):\n prediction, hidden, cell = self.decoder(trg, hidden, cell)\n outputs[i] = prediction\n\n if random.random() < teacher_forcing_ratio:\n trg = trg_batch[i]\n else:\n trg = prediction.argmax(1)\n\n return outputs",
"_____no_output_____"
],
[
"# note that this implementation assumes that the size of the hidden layer,\n# and the number of layer are the same between the encoder and decoder\nencoder = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)\ndecoder = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)\nseq2seq = Seq2Seq(encoder, decoder, device).to(device)\nseq2seq",
"_____no_output_____"
],
[
"outputs = seq2seq(test_batch.src, test_batch.trg)\noutputs.shape",
"_____no_output_____"
],
[
"def count_parameters(model):\n return sum(p.numel() for p in model.parameters() if p.requires_grad)\n\nprint(f'The model has {count_parameters(seq2seq):,} trainable parameters')",
"The model has 13,899,013 trainable parameters\n"
]
],
[
[
"### Training Seq2Seq",
"_____no_output_____"
],
[
"We've done the hard work of defining our seq2seq module. The final touch is to specify the training/evaluation loop.",
"_____no_output_____"
]
],
[
[
"optimizer = optim.Adam(seq2seq.parameters())\n\n# ignore the padding index when calculating the loss\nPAD_IDX = target.vocab.stoi['<pad>']\ncriterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)",
"_____no_output_____"
],
[
"def train(seq2seq, iterator, optimizer, criterion):\n seq2seq.train()\n\n epoch_loss = 0\n for batch in iterator:\n optimizer.zero_grad()\n outputs = seq2seq(batch.src, batch.trg)\n\n # 1. as mentioned in the seq2seq section, we will\n # cut off the first element when performing the evaluation\n # 2. the loss function only works on 2d inputs\n # with 1d targets we need to flatten each of them\n outputs_flatten = outputs[1:].view(-1, outputs.shape[-1])\n trg_flatten = batch.trg[1:].view(-1)\n loss = criterion(outputs_flatten, trg_flatten)\n\n loss.backward()\n optimizer.step()\n\n epoch_loss += loss.item()\n\n return epoch_loss / len(iterator)",
"_____no_output_____"
],
[
"def evaluate(seq2seq, iterator, criterion):\n seq2seq.eval()\n\n epoch_loss = 0\n with torch.no_grad():\n for batch in iterator:\n # turn off teacher forcing\n outputs = seq2seq(batch.src, batch.trg, teacher_forcing_ratio=0) \n\n # trg = [trg sent len, batch size]\n # output = [trg sent len, batch size, output dim]\n outputs_flatten = outputs[1:].view(-1, outputs.shape[-1])\n trg_flatten = batch.trg[1:].view(-1)\n loss = criterion(outputs_flatten, trg_flatten)\n epoch_loss += loss.item()\n\n return epoch_loss / len(iterator)",
"_____no_output_____"
],
[
"def epoch_time(start_time, end_time):\n elapsed_time = end_time - start_time\n elapsed_mins = int(elapsed_time / 60)\n elapsed_secs = int(elapsed_time - (elapsed_mins * 60))\n return elapsed_mins, elapsed_secs",
"_____no_output_____"
],
[
"N_EPOCHS = 20\nbest_valid_loss = float('inf')\n\nfor epoch in range(N_EPOCHS): \n start_time = time.time()\n train_loss = train(seq2seq, train_iterator, optimizer, criterion)\n valid_loss = evaluate(seq2seq, valid_iterator, criterion)\n end_time = time.time()\n\n epoch_mins, epoch_secs = epoch_time(start_time, end_time)\n\n if valid_loss < best_valid_loss:\n best_valid_loss = valid_loss\n torch.save(seq2seq.state_dict(), 'tut1-model.pt')\n\n # it's easier to see a change in perplexity between epoch as it's an exponential\n # of the loss, hence the scale of the measure is much bigger\n print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')\n print(f'\\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')\n print(f'\\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')",
"Epoch: 01 | Time: 1m 12s\n\tTrain Loss: 5.023 | Train PPL: 151.870\n\t Val. Loss: 4.904 | Val. PPL: 134.856\nEpoch: 02 | Time: 1m 12s\n\tTrain Loss: 4.396 | Train PPL: 81.134\n\t Val. Loss: 4.651 | Val. PPL: 104.687\nEpoch: 03 | Time: 1m 12s\n\tTrain Loss: 4.076 | Train PPL: 58.924\n\t Val. Loss: 4.411 | Val. PPL: 82.381\nEpoch: 04 | Time: 1m 12s\n\tTrain Loss: 3.811 | Train PPL: 45.217\n\t Val. Loss: 4.314 | Val. PPL: 74.703\nEpoch: 05 | Time: 1m 12s\n\tTrain Loss: 3.569 | Train PPL: 35.482\n\t Val. Loss: 4.014 | Val. PPL: 55.342\nEpoch: 06 | Time: 1m 12s\n\tTrain Loss: 3.355 | Train PPL: 28.659\n\t Val. Loss: 3.933 | Val. PPL: 51.046\nEpoch: 07 | Time: 1m 12s\n\tTrain Loss: 3.187 | Train PPL: 24.222\n\t Val. Loss: 3.811 | Val. PPL: 45.207\nEpoch: 08 | Time: 1m 12s\n\tTrain Loss: 3.028 | Train PPL: 20.662\n\t Val. Loss: 3.810 | Val. PPL: 45.140\nEpoch: 09 | Time: 1m 12s\n\tTrain Loss: 2.863 | Train PPL: 17.513\n\t Val. Loss: 3.709 | Val. PPL: 40.809\nEpoch: 10 | Time: 1m 12s\n\tTrain Loss: 2.751 | Train PPL: 15.661\n\t Val. Loss: 3.755 | Val. PPL: 42.746\nEpoch: 11 | Time: 1m 12s\n\tTrain Loss: 2.615 | Train PPL: 13.666\n\t Val. Loss: 3.727 | Val. PPL: 41.568\nEpoch: 12 | Time: 1m 12s\n\tTrain Loss: 2.481 | Train PPL: 11.959\n\t Val. Loss: 3.692 | Val. PPL: 40.135\nEpoch: 13 | Time: 1m 12s\n\tTrain Loss: 2.389 | Train PPL: 10.898\n\t Val. Loss: 3.734 | Val. PPL: 41.846\nEpoch: 14 | Time: 1m 12s\n\tTrain Loss: 2.281 | Train PPL: 9.791\n\t Val. Loss: 3.748 | Val. PPL: 42.419\nEpoch: 15 | Time: 1m 12s\n\tTrain Loss: 2.179 | Train PPL: 8.838\n\t Val. Loss: 3.722 | Val. PPL: 41.360\nEpoch: 16 | Time: 1m 12s\n\tTrain Loss: 2.082 | Train PPL: 8.019\n\t Val. Loss: 3.798 | Val. PPL: 44.629\nEpoch: 17 | Time: 1m 12s\n\tTrain Loss: 2.017 | Train PPL: 7.514\n\t Val. Loss: 3.731 | Val. PPL: 41.717\nEpoch: 18 | Time: 1m 12s\n\tTrain Loss: 1.912 | Train PPL: 6.767\n\t Val. Loss: 3.791 | Val. PPL: 44.289\nEpoch: 19 | Time: 1m 11s\n\tTrain Loss: 1.839 | Train PPL: 6.292\n\t Val. Loss: 3.789 | Val. PPL: 44.197\nEpoch: 20 | Time: 1m 11s\n\tTrain Loss: 1.758 | Train PPL: 5.802\n\t Val. Loss: 3.880 | Val. PPL: 48.423\n"
]
],
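[
[
"As a sanity check on the printed numbers: perplexity is simply the exponential of the cross-entropy loss, e.g. for the first epoch above:\n\n```python\nimport math\nmath.exp(5.023)  # roughly 151.9, matching the Train PPL printed for epoch 1\n```",
"_____no_output_____"
]
],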
[
[
"### Evaluating Seq2Seq",
"_____no_output_____"
]
],
[
[
"seq2seq.load_state_dict(torch.load('tut1-model.pt'))\n\ntest_loss = evaluate(seq2seq, test_iterator, criterion)\nprint(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')",
"| Test Loss: 3.650 | Test PPL: 38.477 |\n"
]
],
[
[
"Here, we pick a random example in our dataset, print out the original source and target sentence. Then takes a look at whether the \"predicted\" target sentence generated by the model.",
"_____no_output_____"
]
],
[
[
"example_idx = 0\nexample = train_data.examples[example_idx]\nprint('source sentence: ', ' '.join(example.src))\nprint('target sentence: ', ' '.join(example.trg))",
"source sentence: . büsche vieler nähe der in freien im sind männer weiße junge zwei\ntarget sentence: two young , white males are outside near many bushes .\n"
],
[
"src_tensor = source.process([example.src]).to(device)\ntrg_tensor = target.process([example.trg]).to(device)\nprint(trg_tensor.shape)\n\nseq2seq.eval()\nwith torch.no_grad():\n outputs = seq2seq(src_tensor, trg_tensor, teacher_forcing_ratio=0)\n\noutputs.shape",
"torch.Size([13, 1])\n"
],
[
"output_idx = outputs[1:].squeeze(1).argmax(1)\n' '.join([target.vocab.itos[idx] for idx in output_idx])",
"_____no_output_____"
]
],
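[
[
"Wrapping the steps above into a small helper makes it easy to decode other examples. This is a sketch under the same assumptions as the cells above (greedy decoding via `teacher_forcing_ratio=0`; the ground-truth `trg` tensor is only needed to fix the output length):\n\n```python\ndef greedy_translate(example):\n    src_tensor = source.process([example.src]).to(device)\n    trg_tensor = target.process([example.trg]).to(device)\n    seq2seq.eval()\n    with torch.no_grad():\n        outputs = seq2seq(src_tensor, trg_tensor, teacher_forcing_ratio=0)\n    output_idx = outputs[1:].squeeze(1).argmax(1)\n    return ' '.join(target.vocab.itos[idx] for idx in output_idx)\n\ngreedy_translate(valid_data.examples[0])\n```",
"_____no_output_____"
]
],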
[
[
"## Summary",
"_____no_output_____"
],
[
"In this document:\n\n- We took a stab at implementing a vanilla version of the seq2seq model, and train it on a German to English translation.\n- Implemented the trick introduced by the original seq2seq paper where they reverse the order of the tokens in the source sentence.\n\nThere are a lot of other tricks/ideas that are mentioned in the original paper and worth exploring. e.g.\n\n- A LSTM with 4 layers was chosen.\n- Beam Search was also used to decode the sentence.\n- Instead of only relying on log-loss or perplexity, another evaluation metric that they used to evaluate the quality of their translation.",
"_____no_output_____"
],
[
"# Reference",
"_____no_output_____"
],
[
"- [Blog: A Comprehensive Introduction to Torchtext (Practical Torchtext part 1)](https://mlexplained.com/2018/02/08/a-comprehensive-tutorial-to-torchtext/)\n- [Jupyter Notebook: Using TorchText with Your Own Datasets](https://nbviewer.jupyter.org/github/bentrevett/pytorch-sentiment-analysis/blob/master/A%20-%20Using%20TorchText%20with%20Your%20Own%20Datasets.ipynb)\n- [Jupyter Notebook: Sequence to Sequence Learning with Neural Networks](https://nbviewer.jupyter.org/github/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb)\n- [Paper: Sutskever, I., Vinyals, O., and Le, Q. (2014). Sequence to sequence learning with neural networks.](https://arxiv.org/abs/1409.3215)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
e72c71ed056d701f30080f072789444a586cfa08 | 14,495 | ipynb | Jupyter Notebook | Bokeh Functionalization.ipynb | acep-uaf/ACEP_solar | 3ae25d011cdacd660e3e65e8044be24f9f6679cf | [
"MIT"
] | null | null | null | Bokeh Functionalization.ipynb | acep-uaf/ACEP_solar | 3ae25d011cdacd660e3e65e8044be24f9f6679cf | [
"MIT"
] | null | null | null | Bokeh Functionalization.ipynb | acep-uaf/ACEP_solar | 3ae25d011cdacd660e3e65e8044be24f9f6679cf | [
"MIT"
] | null | null | null | 41.772334 | 365 | 0.487202 | [
[
[
"import bokeh\nimport json\nimport pandas as pd\nimport numpy as np\nimport requests\nfrom bokeh.plotting import figure, output_file, show, output_notebook\nfrom bokeh.models import NumeralTickFormatter\nfrom bokeh.io import show\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, CustomJS, Select\nfrom bokeh.plotting import figure\nfrom ARCTIC import hdf5_interface\nfrom ARCTIC import nrel_api_interface",
"_____no_output_____"
]
],
[
[
"To call a number of different locations all at once, just call using a dictionary with the location names as the keys and have nested lat and lon keys therein, and then the lat and lon. Save the figures as the dictionary keys+ the .format construction that includes the tilt plot name or w/e. Also allow option to pass a list of tilts to make it customizable.",
"_____no_output_____"
]
],
[
[
"location_dataframe = pd.DataFrame(columns=['location','latitude','longitude'])\nlocation_dataframe['location']=['Ambler-Shungnak-Kobuk','Anchorage','Bethel','Chickaloon',\n 'Deering','Denali Park','Fairbanks','Fort Yukon',\n 'Galena-Koyukuk-Ruby', 'Homer','Naknek','Noatak',\n 'Noorvik','Soldotna','Valdez','Wasilla-Palmer']\n\nlocation_dataframe['latitude']=[66.995834, 61.193625, 60.794938, 61.823570,\n 66.069413, 63.537277, 64.838033, 66.571563,\n 64.782991, 59.652521, 58.728349, 67.570921,\n 66.836039, 60.486370, 61.128663, 61.582242]\n\n\nlocation_dataframe['longitude']=[ -157.377096, -149.694974, -161.770716, -148.450442,\n -162.766760, -150.985453, -147.668970, -145.250173,\n -156.744933, -151.536496, -157.017444, -162.967490,\n -161.041913, -151.060702, -146.353366, -149.441001]\n\n\nlocation_dataframe",
"_____no_output_____"
],
[
"def tilt_angle_plot_generation(location_dataframe): \n \"\"\"This function takes in a dataframe that contains latitudes and longitudes for a number of \n locations and generates interactive Bokeh plots showing the variation of monthly production \n with changing tilt angles.\"\"\"\n #The below list is sufficiently granular to cover most situations.\n tilt_list = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90]\n\n #Walk through each row in the location dataframe, calling PVwatts and plotting results.\n for j in range(len(location_dataframe.index)):\n print(\"Data for \" + str(location_dataframe['location'][j]) + \" is being calculated\")\n nrel_long_tilt = []\n for i in range(len(tilt_list)):\n list_parameters = {\"formt\": 'JSON', \"api_key\": \"spJFj2l5ghY5jwk7dNfVYs3JHbpR6BOGHQNO8Y9Z\", \"system_capacity\": 1, \"module_type\": 0, \"losses\": 14.08,\n \"array_type\": 0, \"tilt\": tilt_list[i], \"azimuth\": 180, \"lat\": location_dataframe['latitude'][j], \"lon\": location_dataframe['longitude'][j], \"dataset\": 'tmy3'}\n json_response = requests.get(\"https://developer.nrel.gov/api/pvwatts/v6\", params = list_parameters).json()\n new_dataframe = pd.DataFrame(data = json_response['outputs'])\n nrel_long_tilt.append(new_dataframe)\n tilt_response_dataframe = pd.DataFrame(columns = tilt_list)\n for i, tilt in enumerate(tilt_list):\n tilt_response_dataframe[tilt] = nrel_long_tilt[i]['ac_monthly']\n\n #The below is all of the data for the plotting components.\n #This adjusts the name of the saved file, so it's specific to each location.\n output_file(\"{}_monthly_production_varying_tilts.html\".format(location_dataframe['location'][j]))\n #Set up a month proxy\n x = np.arange(1,13)\n\n #Tell the plot where to look for the data. 
The extra specifications of y values\n #enable the plot to be interactive.\n source = ColumnDataSource(data=dict(x=x, y=tilt_response_dataframe[5],\n tilt_5_degrees=tilt_response_dataframe[5], tilt_10_degrees=tilt_response_dataframe[10],\n tilt_15_degrees=tilt_response_dataframe[15], tilt_20_degrees=tilt_response_dataframe[20],\n tilt_25_degrees=tilt_response_dataframe[25], tilt_30_degrees=tilt_response_dataframe[30],\n tilt_35_degrees=tilt_response_dataframe[35], tilt_40_degrees=tilt_response_dataframe[40],\n tilt_45_degrees=tilt_response_dataframe[45], tilt_50_degrees=tilt_response_dataframe[50],\n tilt_55_degrees=tilt_response_dataframe[55], tilt_60_degrees=tilt_response_dataframe[60],\n tilt_65_degrees=tilt_response_dataframe[65], tilt_70_degrees=tilt_response_dataframe[70],\n tilt_75_degrees=tilt_response_dataframe[75], tilt_80_degrees=tilt_response_dataframe[80],\n tilt_85_degrees=tilt_response_dataframe[85], tilt_90_degrees=tilt_response_dataframe[90],\n ))\n #Plot specifications\n plot = figure(x_axis_label='Month', y_axis_label='Normalized Monthly Production (kWh/kW)', plot_height=400)\n plot.line(x='x', y='y', source=source)\n plot.title.text = \"Annual Production at Varying Tilt Angles\"\n plot.title.align = \"center\"\n plot.title.text_font = \"times\"\n plot.title.text_font_style = \"italic\"\n plot.title.text_font_size = '15pt'\n #This line is what connects the changing dropdown menu with the data that is displayed.\n select = Select(value='foo', options=['tilt_5_degrees', 'tilt_10_degrees','tilt_15_degrees',\n 'tilt_20_degrees','tilt_25_degrees','tilt_30_degrees',\n 'tilt_35_degrees','tilt_40_degrees','tilt_45_degrees',\n 'tilt_50_degrees','tilt_55_degrees','tilt_60_degrees',\n 'tilt_65_degrees','tilt_70_degrees','tilt_75_degrees',\n 'tilt_80_degrees','tilt_85_degrees','tilt_90_degrees'])\n #javascript that actually makes the changes possible.\n select.js_on_change('value', CustomJS(args=dict(source=source, select=select), code=\"\"\"\n // make a shallow copy of the current data dict\n const new_data = Object.assign({}, source.data)\n\n // update the y column in the new data dict from the appropriate other column\n new_data.y = source.data[select.value]\n\n // set the new data on source, BokehJS will pick this up automatically\n source.data = new_data\n \"\"\"))\n\n show(column(plot, select))\n",
"_____no_output_____"
],
[
"tilt_angle_plot_generation(location_dataframe)",
"Data for Ambler-Shungnak-Kobuk is being calculated\nData for Anchorage is being calculated\nData for Bethel is being calculated\nData for Chickaloon is being calculated\nData for Deering is being calculated\nData for Denali Park is being calculated\nData for Fairbanks is being calculated\nData for Fort Yukon is being calculated\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e72c8d35ebcea8525af7488b6c21af9400f90b22 | 46,339 | ipynb | Jupyter Notebook | src/experimental_code/.ipynb_checkpoints/Izh_LSM_StaticSyn-checkpoint.ipynb | Roboy/LSM_SpiNNaker_MyoArm | 04fa1eaf78778edea3ba3afa4c527d20c491718e | [
"BSD-3-Clause"
] | 2 | 2020-11-01T13:22:11.000Z | 2020-11-01T13:22:20.000Z | src/experimental_code/.ipynb_checkpoints/Izh_LSM_StaticSyn-checkpoint.ipynb | Roboy/LSM_SpiNNaker_MyoArm | 04fa1eaf78778edea3ba3afa4c527d20c491718e | [
"BSD-3-Clause"
] | null | null | null | src/experimental_code/.ipynb_checkpoints/Izh_LSM_StaticSyn-checkpoint.ipynb | Roboy/LSM_SpiNNaker_MyoArm | 04fa1eaf78778edea3ba3afa4c527d20c491718e | [
"BSD-3-Clause"
] | null | null | null | 100.956427 | 34,196 | 0.845724 | [
[
[
"# Reservoir of Izhikevich neuron models",
"_____no_output_____"
],
[
"In this script a reservoir of neurons models with the differential equations proposed by Izhikevich is defined. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport pyNN.nest as p\nfrom pyNN.random import NumpyRNG, RandomDistribution\nfrom pyNN.utility import Timer\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ntimer = Timer()\np.setup(timestep=0.1) # 0.1ms \n\n",
"_____no_output_____"
]
],
[
[
"## Definition of Inputs",
"_____no_output_____"
],
[
"The input can be:\n- the joint position of the robot arm (rate coded or temporal coded)",
"_____no_output_____"
]
],
[
[
"poisson_input = p.SpikeSourcePoisson(rate = 10, start = 20.)\n#input_neuron = p.Population(2, p.SpikeSourcePoisson, {'rate': 0.7}, label='input')\ninput_neuron = p.Population(2, poisson_input, label='input')",
"_____no_output_____"
]
],
[
[
"## Definition of neural populations\n",
"_____no_output_____"
],
[
"Izhikevich spiking model with a quadratic non-linearity: \n\ndv/dt = 0.04*v^2 + 5*v + 140 - u + I \n\ndu/dt = a*(b*v - u)",
"_____no_output_____"
]
],
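[
[
"To get a feel for these dynamics, a forward-Euler integration of the two equations can be written in a few lines. This is a standalone sketch, independent of PyNN; the parameter and initial values match the PyNN defaults quoted in the code comments below, and 30 mV is the conventional Izhikevich spike cutoff:\n\n```python\na, b, c, d = 0.02, 0.2, -65.0, 2.0\ndt, I = 0.1, 10.0            # time step (ms) and constant input current\nv, u = -70.0, -14.0          # initial membrane potential and recovery variable\ntrace = []\nfor _ in range(int(1000 / dt)):\n    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)\n    u += dt * (a * (b * v - u))\n    if v >= 30.0:            # spike: reset v and bump the recovery variable\n        v, u = c, u + d\n    trace.append(v)\n```",
"_____no_output_____"
]
],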
[
[
"n = 1500 # number of cells\nexc_ratio = 0.8 # ratio of excitatory neurons\n\nn_exc = int(round(n*0.8))\nn_inh = n-n_exc\nprint n_exc, n_inh\n\n\ncelltype = p.Izhikevich()\n# default_parameters = {'a': 0.02, 'c': -65.0, 'd': 2.0, 'b': 0.2, 'i_offset': 0.0}¶\n# default_initial_values = {'v': -70.0, 'u': -14.0}¶\nexc_cells = p.Population(n_exc, celltype, label=\"Excitatory_Cells\")\ninh_cells = p.Population(n_inh, celltype, label=\"Inhibitory_Cells\")\n\n# initialize with a uniform random distributin\n# use seeding for reproducability\nrngseed = 98766987\nparallel_safe = True\nrng = NumpyRNG(seed=rngseed, parallel_safe=parallel_safe)\n\nunifDistr = RandomDistribution('uniform', (-75,-65), rng=rng)\nexc_cells.initialize(v=unifDistr)\ninh_cells.initialize(v=unifDistr)",
"1200 300\n"
]
],
[
[
"## Definition of readout neurons",
"_____no_output_____"
],
[
"Decide:\n- 2 readout neurons: representing in which direction to move the joint\n- 1 readout neuron: representing the desired goal position of the joint",
"_____no_output_____"
]
],
[
[
"readout_neurons = p.Population(2, celltype, label=\"readout_neuron\")",
"_____no_output_____"
]
],
[
[
"## Define the connections between the neurons",
"_____no_output_____"
]
],
[
[
"inp_conn = p.AllToAllConnector()\nrout_conn = p.AllToAllConnector()\n\n\nw_exc = 20. # later add unit\nw_inh = 51. # later add unit\ndelay_exc = 1 # defines how long (ms) the synapse takes for transmission\ndelay_inh = 1\n\nstat_syn_exc = p.StaticSynapse(weight =w_exc, delay=delay_exc)\nstat_syn_inh = p.StaticSynapse(weight =w_inh, delay=delay_inh)\n\n\n\nweight_distr_exc = RandomDistribution('normal', [w_exc, 1e-3], rng=rng)\nweight_distr_inh = RandomDistribution('normal', [w_inh, 1e-3], rng=rng)\n\nexc_synapse = p.TsodyksMarkramSynapse(U=0.04, tau_rec=100.0, tau_facil=1000.0,\n weight=weight_distr_exc, delay=lambda d: 0.1+d/100.0)\ninh_synapse = p.TsodyksMarkramSynapse(U=0.04, tau_rec=100.0, tau_facil=1000.0,\n weight=weight_distr_inh, delay=lambda d: 0.1+d/100.0)\n# tau_rec: depression time constant (ms)\n# tau_facil: facilitation time constant (ms)\n\n\n\npconn = 0.01 # sparse connection probability\n\nexc_conn = p.FixedProbabilityConnector(pconn, rng=rng)\ninh_conn = p.FixedProbabilityConnector(pconn, rng=rng)\n\nconnections = {}\nconnections['e2e'] = p.Projection(exc_cells, exc_cells, exc_conn,\n synapse_type=stat_syn_exc, receptor_type='excitatory')\nconnections['e2i'] = p.Projection(exc_cells, inh_cells, exc_conn,\n synapse_type=stat_syn_exc,receptor_type='excitatory')\nconnections['i2e'] = p.Projection(inh_cells, exc_cells, inh_conn,\n synapse_type=stat_syn_inh,receptor_type='inhibitory')\nconnections['i2i'] = p.Projection(inh_cells, inh_cells, inh_conn,\n synapse_type=stat_syn_inh,receptor_type='inhibitory')\n\n\nconnections['inp2e'] = p.Projection(input_neuron, exc_cells, inp_conn,\n synapse_type=stat_syn_exc,receptor_type='excitatory')\nconnections['inp2i'] = p.Projection(input_neuron, inh_cells, inp_conn,\n synapse_type=stat_syn_exc,receptor_type='excitatory')\n\nconnections['e2rout'] = p.Projection(exc_cells, readout_neurons, rout_conn,\n synapse_type=stat_syn_exc,receptor_type='excitatory')\nconnections['i2rout'] = p.Projection(inh_cells, readout_neurons, rout_conn,\n synapse_type=stat_syn_inh,receptor_type='inhibitory')\n",
"_____no_output_____"
]
],
[
[
"## Setup recording and run the simulation",
"_____no_output_____"
]
],
[
[
"readout_neurons.record(['v','spikes'])\nexc_cells.record(['v','spikes'])\np.run(1000)",
"_____no_output_____"
]
],
[
[
"## Plotting the Results",
"_____no_output_____"
]
],
[
[
"p.end()\ndata_rout = readout_neurons.get_data()\n\ndata_exc = exc_cells.get_data()\n\n",
"_____no_output_____"
],
[
"fig_settings = {\n 'lines.linewidth': 0.5,\n 'axes.linewidth': 0.5,\n 'axes.labelsize': 'small',\n 'legend.fontsize': 'small',\n 'font.size': 8\n}\nplt.rcParams.update(fig_settings)\nplt.figure(1, figsize=(6,8))",
"_____no_output_____"
],
[
"def plot_spiketrains(segment):\n for spiketrain in segment.spiketrains:\n y = np.ones_like(spiketrain) * spiketrain.annotations['source_id']\n plt.plot(spiketrain, y, '.')\n plt.ylabel(segment.name)\n plt.setp(plt.gca().get_xticklabels(), visible=False)",
"_____no_output_____"
],
[
"def plot_signal(signal, index, colour='b'):\n label = \"Neuron %d\" % signal.annotations['source_ids'][index]\n plt.plot(signal.times, signal[:, index], colour, label=label)\n plt.ylabel(\"%s (%s)\" % (signal.name, signal.units._dimensionality.string))\n plt.setp(plt.gca().get_xticklabels(), visible=False)\n plt.legend()",
"_____no_output_____"
]
],
[
[
"Plot readout neurons",
"_____no_output_____"
]
],
[
[
"n_panels = sum(a.shape[1] for a in data_rout.segments[0].analogsignalarrays) + 2\nplt.subplot(n_panels, 1, 1)\nplot_spiketrains(data_rout.segments[0])\npanel = 3\nfor array in data_rout.segments[0].analogsignalarrays:\n for i in range(array.shape[1]):\n plt.subplot(n_panels, 1, panel)\n plot_signal(array, i, colour='bg'[panel%2])\n panel += 1\nplt.xlabel(\"time (%s)\" % array.times.units._dimensionality.string)\nplt.setp(plt.gca().get_xticklabels(), visible=True)\n\nplt.savefig(\"neo_example.png\")",
"_____no_output_____"
]
],
[
[
"Plot excitatory cells",
"_____no_output_____"
]
],
[
[
"n_panels = sum(a.shape[1] for a in data_exc.segments[0].analogsignalarrays) + 2\nplt.subplot(n_panels, 1, 1)\nplot_spiketrains(data_exc.segments[0])\npanel = 3\nfor array in data_exc.segments[0].analogsignalarrays:\n for i in range(array.shape[1]):\n plt.subplot(n_panels, 1, panel)\n plot_signal(array, i, colour='bg'[panel%2])\n panel += 1\nplt.xlabel(\"time (%s)\" % array.times.units._dimensionality.string)\nplt.setp(plt.gca().get_xticklabels(), visible=True)\n\nplt.savefig(\"neo_example.png\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
e72c91dc0a306207b34182add15dd62801dc8ff2 | 2,414 | ipynb | Jupyter Notebook | lecture-04/solution/tanhkh.ipynb | ocefpaf/waves_and_tides | 80139ec42cc8df65ac41f4a1f97e66c57c58668c | [
"Artistic-2.0"
] | 12 | 2016-02-24T13:27:29.000Z | 2021-11-12T11:18:39.000Z | lecture-04/solution/tanhkh.ipynb | ocefpaf/waves_and_tides | 80139ec42cc8df65ac41f4a1f97e66c57c58668c | [
"Artistic-2.0"
] | 1 | 2020-05-29T19:10:50.000Z | 2020-06-01T15:09:23.000Z | lecture-04/solution/tanhkh.ipynb | ocefpaf/waves_and_tides | 80139ec42cc8df65ac41f4a1f97e66c57c58668c | [
"Artistic-2.0"
] | 4 | 2019-07-02T08:02:00.000Z | 2021-07-10T00:02:20.000Z | 26.822222 | 131 | 0.470174 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72ca1a30ff831c3747fbc90600e823ac4a09b3e | 7,528 | ipynb | Jupyter Notebook | 02_Data_Preparation/02_Data_GO_Overlap.ipynb | srijitseal/Using-Cell-Morphology-Gene-Expression-Features-and-Structural-Fingerprints-to-Aid-Detection-of-Mito | a9c7a34b80102fd08d811a9de0e28c714bbf93a2 | [
"MIT"
] | 1 | 2022-02-24T11:37:01.000Z | 2022-02-24T11:37:01.000Z | 02_Data_Preparation/02_Data_GO_Overlap.ipynb | srijitseal/Using-Cell-Morphology-Gene-Expression-Features-and-Structural-Fingerprints-to-Aid-Detection-of-Mito | a9c7a34b80102fd08d811a9de0e28c714bbf93a2 | [
"MIT"
] | null | null | null | 02_Data_Preparation/02_Data_GO_Overlap.ipynb | srijitseal/Using-Cell-Morphology-Gene-Expression-Features-and-Structural-Fingerprints-to-Aid-Detection-of-Mito | a9c7a34b80102fd08d811a9de0e28c714bbf93a2 | [
"MIT"
] | null | null | null | 28.842912 | 125 | 0.418305 | [
[
[
"import pandas as pd\n\nnot_to_be_selected_list=[\n 'Activity Summary', 'Viability Activity', 'PUBCHEM_ACTIVITY_SCORE',\n 'Viability Potency (uM)', 'Viability Efficacy (%)', \"index\"]",
"_____no_output_____"
],
[
"mito=pd.read_csv(\"../../PubChem_assay_summary_processed.csv\", usecols= lambda x: x not in not_to_be_selected_list)\nmito = mito.rename(columns={\"InChICode_standardised\": \"StdInChI\"})\nmito",
"_____no_output_____"
],
[
"GO= pd.read_csv(\"../../../Gene_Expression/GO_transforemed_inchi.csv\")\nGO= GO.rename(columns={\"InChICode_standardised\" : \"StdInChI\"})\n#GO = GO.drop(\"pert_id\", axis=1)\nGO",
"_____no_output_____"
],
[
"df= pd.merge(mito, GO, how='inner', on=['StdInChI'])\ndf",
"_____no_output_____"
],
[
"print(df.PUBCHEM_ACTIVITY_OUTCOME.value_counts())\nlen(df)",
"_____no_output_____"
],
[
"not_to_be_selected_list=[\n 'Activity Summary', 'Viability Activity', 'PUBCHEM_ACTIVITY_SCORE',\n 'Viability Potency (uM)', 'Viability Efficacy (%)', \"index\"]",
"_____no_output_____"
],
[
"df = df[df.columns[~df.columns.isin(not_to_be_selected_list)]]",
"_____no_output_____"
],
[
"df =df[df.PUBCHEM_ACTIVITY_OUTCOME != \"Inconclusive\"]\ndf = df.replace({'PUBCHEM_ACTIVITY_OUTCOME': {\"Active\": 1, \"Inactive\": 0, \"Inconclusive\":3}})\ndf.reset_index(drop=True)",
"_____no_output_____"
],
[
"print(df.PUBCHEM_ACTIVITY_OUTCOME.value_counts())\nlen(df)",
"_____no_output_____"
],
[
"#No need to remove cell death as we are using only GO\ndf.to_csv(\"GO_MitoOverlap.csv\", index=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72cb39d48f67d7e878b1c42d0c37e9c83767492 | 15,202 | ipynb | Jupyter Notebook | Chapter10/Chapter_10_Transfer_learning_Example.ipynb | PacktPublishing/Hands-On-Neural-Networks-with-Keras | 47d6a2d449434c6bbc085fb42290b98b2c5b0795 | [
"MIT"
] | 21 | 2019-03-30T02:42:11.000Z | 2021-11-08T13:12:14.000Z | Chapter10/Chapter_10_Transfer_learning_Example.ipynb | PacktPublishing/Hands-On-Neural-Networks-with-Keras | 47d6a2d449434c6bbc085fb42290b98b2c5b0795 | [
"MIT"
] | null | null | null | Chapter10/Chapter_10_Transfer_learning_Example.ipynb | PacktPublishing/Hands-On-Neural-Networks-with-Keras | 47d6a2d449434c6bbc085fb42290b98b2c5b0795 | [
"MIT"
] | 9 | 2019-04-29T02:09:05.000Z | 2021-04-14T05:34:12.000Z | 46.206687 | 245 | 0.538087 | [
[
[
"from keras import applications\nfrom keras import optimizers\nfrom keras.models import Sequential, Model \nfrom keras.layers import Dropout, Flatten, Dense, GlobalAveragePooling2D\nfrom keras import backend as k \nfrom keras.datasets import cifar10\nfrom keras import utils",
"D:\\Anaconda\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n"
],
[
"(x_train, y_train), (x_test, y_test) = cifar10.load_data()",
"_____no_output_____"
],
[
"img_width, img_height = 32, 32\nbatch_size = 128\nepochs = 50\nnum_classes = 10\nmodel = applications.VGG19(weights = \"imagenet\", include_top=False, input_shape = (img_width, img_height, 3))\n",
"_____no_output_____"
],
[
"x_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train /= 255\nx_test /= 255",
"_____no_output_____"
],
[
"y_train = utils.to_categorical(y_train, num_classes)\ny_test = utils.to_categorical(y_test, num_classes)",
"_____no_output_____"
],
[
"# Freeze the layers which you don't want to train. Here I am freezing the first 5 layers.\nfor layer in model.layers[:5]:\n layer.trainable = False\n",
"_____no_output_____"
],
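[
"# Added sanity check (not part of the original run): list every layer together\n# with its trainable flag to confirm that only the first five layers are frozen.\nfor i, layer in enumerate(model.layers):\n    print(i, layer.name, layer.trainable)",
"_____no_output_____"
],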
[
"#Adding custom Layers \nx = model.output\nx = Flatten()(x)\nx = Dense(1024, activation=\"relu\")(x)\nx = Dropout(0.5)(x)\nx = Dense(1024, activation=\"relu\")(x)\npredictions = Dense(10, activation=\"softmax\")(x)",
"_____no_output_____"
],
[
"# creating the final model \nmodel_final = Model(input = model.input, output = predictions)\nmodel_final.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 32, 32, 3) 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, 32, 32, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, 32, 32, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 16, 16, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, 16, 16, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, 16, 16, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 8, 8, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, 8, 8, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 8, 8, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 8, 8, 256) 590080 \n_________________________________________________________________\nblock3_conv4 (Conv2D) (None, 8, 8, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 4, 4, 256) 0 \n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, 4, 4, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 4, 4, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 4, 4, 512) 2359808 \n_________________________________________________________________\nblock4_conv4 (Conv2D) (None, 4, 4, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 2, 2, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_conv4 (Conv2D) (None, 2, 2, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 1, 1, 512) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 512) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 1024) 525312 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 1024) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 1024) 1049600 \n_________________________________________________________________\ndense_3 (Dense) (None, 10) 10250 \n=================================================================\nTotal params: 21,609,546\nTrainable params: 21,496,970\nNon-trainable params: 112,576\n_________________________________________________________________\n"
],
[
"# compile the model \nmodel_final.compile(loss = \"categorical_crossentropy\", optimizer = optimizers.SGD(lr=0.0001, momentum=0.9), metrics=[\"accuracy\"])\n",
"_____no_output_____"
],
[
"# Save the model according to the conditions \ncheckpoint = ModelCheckpoint(\"vgg16_1.h5\", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)\nearly = EarlyStopping(monitor='val_acc', min_delta=0, patience=10, verbose=1, mode='auto')\n",
"_____no_output_____"
],
[
"# Train the model \nmodel_final.fit(x_train,y_train,\nbatch_size=batch_size,\nepochs = epochs,\nvalidation_data = (x_test,y_test))",
"Train on 50000 samples, validate on 10000 samples\nEpoch 1/50\n50000/50000 [==============================] - 67s 1ms/step - loss: 2.0521 - acc: 0.2456 - val_loss: 1.4976 - val_acc: 0.4639\nEpoch 2/50\n50000/50000 [==============================] - 63s 1ms/step - loss: 1.3411 - acc: 0.5246 - val_loss: 1.0518 - val_acc: 0.6275\nEpoch 3/50\n50000/50000 [==============================] - 63s 1ms/step - loss: 1.0865 - acc: 0.6199 - val_loss: 0.9572 - val_acc: 0.6651\nEpoch 4/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.9666 - acc: 0.6647 - val_loss: 0.8485 - val_acc: 0.7020\nEpoch 5/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.8881 - acc: 0.6946 - val_loss: 0.8231 - val_acc: 0.7164\nEpoch 6/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.8253 - acc: 0.7127 - val_loss: 0.7643 - val_acc: 0.7350\nEpoch 7/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.7822 - acc: 0.7309 - val_loss: 0.7565 - val_acc: 0.7379\nEpoch 8/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.7509 - acc: 0.7419 - val_loss: 0.7356 - val_acc: 0.7415\nEpoch 9/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.7145 - acc: 0.7530 - val_loss: 0.7193 - val_acc: 0.7530\nEpoch 10/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.6848 - acc: 0.7642 - val_loss: 0.6956 - val_acc: 0.7609\nEpoch 11/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.6612 - acc: 0.7730 - val_loss: 0.6728 - val_acc: 0.7652\nEpoch 12/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.6422 - acc: 0.7791 - val_loss: 0.6724 - val_acc: 0.7678\nEpoch 13/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.6176 - acc: 0.7870 - val_loss: 0.6548 - val_acc: 0.7748\nEpoch 14/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.6026 - acc: 0.7923 - val_loss: 0.6246 - val_acc: 0.7848\nEpoch 15/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.5788 - acc: 0.8004 - val_loss: 0.6380 - val_acc: 0.7820\nEpoch 16/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.5666 - acc: 0.8051 - val_loss: 0.6193 - val_acc: 0.7846\nEpoch 17/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.5484 - acc: 0.8112 - val_loss: 0.6321 - val_acc: 0.7851\nEpoch 18/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.5401 - acc: 0.8133 - val_loss: 0.6044 - val_acc: 0.7942\nEpoch 19/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.5207 - acc: 0.8203 - val_loss: 0.6112 - val_acc: 0.7912\nEpoch 20/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.5040 - acc: 0.8272 - val_loss: 0.6218 - val_acc: 0.7928\nEpoch 21/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4904 - acc: 0.8324 - val_loss: 0.5804 - val_acc: 0.8036\nEpoch 22/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4835 - acc: 0.8341 - val_loss: 0.6582 - val_acc: 0.7834\nEpoch 23/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4669 - acc: 0.8382 - val_loss: 0.5779 - val_acc: 0.8018\nEpoch 24/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4509 - acc: 0.8451 - val_loss: 0.5853 - val_acc: 0.8049\nEpoch 25/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4412 - acc: 0.8491 - val_loss: 0.5613 - val_acc: 0.8100\nEpoch 
26/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.4288 - acc: 0.8522 - val_loss: 0.5920 - val_acc: 0.8051\nEpoch 27/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4220 - acc: 0.8554 - val_loss: 0.5801 - val_acc: 0.8046\nEpoch 28/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.4081 - acc: 0.8588 - val_loss: 0.5568 - val_acc: 0.8122\nEpoch 29/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.3967 - acc: 0.8628 - val_loss: 0.5615 - val_acc: 0.8149\nEpoch 30/50\n50000/50000 [==============================] - 64s 1ms/step - loss: 0.3881 - acc: 0.8682 - val_loss: 0.5683 - val_acc: 0.8140\nEpoch 31/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.3776 - acc: 0.8686 - val_loss: 0.5923 - val_acc: 0.8063\nEpoch 32/50\n50000/50000 [==============================] - 65s 1ms/step - loss: 0.3672 - acc: 0.8726 - val_loss: 0.5693 - val_acc: 0.8118\nEpoch 33/50\n50000/50000 [==============================] - 267s 5ms/step - loss: 0.3565 - acc: 0.8776 - val_loss: 0.5699 - val_acc: 0.8158\nEpoch 34/50\n50000/50000 [==============================] - 762s 15ms/step - loss: 0.3445 - acc: 0.8819 - val_loss: 0.5747 - val_acc: 0.8169\nEpoch 35/50\n50000/50000 [==============================] - 765s 15ms/step - loss: 0.3345 - acc: 0.8857 - val_loss: 0.5849 - val_acc: 0.8138\nEpoch 36/50\n43264/50000 [========================>.....] - ETA: 1:36 - loss: 0.3339 - acc: 0.8864"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72cbdca3819bcbb8fd5e648b8b6327148a84a7f | 404,152 | ipynb | Jupyter Notebook | notebooks/02_ARIMA-playground.ipynb | benlindsay/walmart-forecasting | 23253da39564338bdfc10c4fbf40a6b970aa1100 | [
"MIT"
] | 1 | 2019-06-20T13:43:24.000Z | 2019-06-20T13:43:24.000Z | notebooks/02_ARIMA-playground.ipynb | benlindsay/walmart-forecasting | 23253da39564338bdfc10c4fbf40a6b970aa1100 | [
"MIT"
] | 4 | 2020-03-24T16:07:51.000Z | 2021-04-30T20:36:34.000Z | notebooks/02_ARIMA-playground.ipynb | benlindsay/walmart-forecasting | 23253da39564338bdfc10c4fbf40a6b970aa1100 | [
"MIT"
] | null | null | null | 505.822278 | 68,872 | 0.935232 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"%matplotlib inline\nfrom datetime import timedelta\nfrom dotenv import find_dotenv\nfrom os.path import dirname\nfrom os.path import exists\nfrom os.path import join\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nfrom src.load import load_train_df\nfrom src.load import load_test_df\nfrom src.transform import get_week_by_dept_df\nfrom src.transform import unpivot_week_by_dept_df\nfrom src.features import make_id_column\n\n# Root directory of repo\nproject_dir = dirname(find_dotenv())\n\n# Use custom matplotlib style\nplt.style.use(join(project_dir, 'big-darkgrid.mplstyle'))",
"_____no_output_____"
],
[
"week_by_dept = get_week_by_dept_df()\npd.tools.plotting.autocorrelation_plot(week_by_dept.iloc[:, 0])\nplt.show()",
"/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/ipykernel_launcher.py:2: FutureWarning: 'pandas.tools.plotting.autocorrelation_plot' is deprecated, import 'pandas.plotting.autocorrelation_plot' instead.\n \n"
],
[
"from statsmodels.tsa.seasonal import seasonal_decompose\none_dept = week_by_dept.iloc[:, 0]\ndecomposition = seasonal_decompose(one_dept, freq=52)\nfig = plt.figure() \nfig = decomposition.plot() \nfig.set_size_inches(10, 8)\nplt.tight_layout()",
"_____no_output_____"
],
[
"# from http://www.seanabu.com/2016/03/22/time-series-seasonal-ARIMA-model-in-python/\nfrom statsmodels.tsa.stattools import adfuller\ndef test_stationarity(timeseries):\n\n #Determing rolling statistics\n rolmean = timeseries.rolling(window=52).mean()\n rolstd = timeseries.rolling(window=52).std()\n\n #Plot rolling statistics:\n fig = plt.figure(figsize=(12, 8))\n orig = plt.plot(timeseries, color='blue',label='Original')\n mean = plt.plot(rolmean, color='red', label='Rolling Mean')\n std = plt.plot(rolstd, color='black', label = 'Rolling Std')\n plt.legend(loc='best')\n plt.title('Rolling Mean & Standard Deviation')\n plt.show()\n \n #Perform Dickey-Fuller test:\n print('Results of Dickey-Fuller Test:')\n dftest = adfuller(timeseries, autolag='AIC')\n dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])\n for key,value in dftest[4].items():\n dfoutput['Critical Value (%s)'%key] = value\n print(dfoutput)",
"_____no_output_____"
],
[
"first_difference = one_dept - one_dept.shift(1)\ntest_stationarity(first_difference.dropna())",
"_____no_output_____"
],
[
"seasonal_first_difference = first_difference - first_difference.shift(52)\ntest_stationarity(seasonal_first_difference.dropna())",
"_____no_output_____"
],
[
"result = seasonal_decompose(week_by_dept, freq=52)",
"_____no_output_____"
],
[
"result.seasonal.head()",
"_____no_output_____"
],
[
"from statsmodels.tsa.statespace.sarimax import SARIMAX\ndf_pred = pd.DataFrame(index=week_by_dept.index[104:])\nfor c in week_by_dept.columns[:2]:\n model = SARIMAX(week_by_dept[c].iloc[:104], trend='n', order=(0,1,0),\n seasonal_order=(0,1,0,52))\n results = model.fit()\n y_pred = results.predict(start=104, end=len(week_by_dept), dynamic= True) \n df_pred[c] = y_pred",
"/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/statsmodels/tsa/base/tsa_model.py:171: ValueWarning: No frequency information was provided, so inferred frequency W-FRI will be used.\n % freq, ValueWarning)\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/statsmodels/tsa/statespace/representation.py:375: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return matrix[[slice(None)]*(matrix.ndim-1) + [0]]\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/statsmodels/tsa/base/tsa_model.py:171: ValueWarning: No frequency information was provided, so inferred frequency W-FRI will be used.\n % freq, ValueWarning)\n"
],
[
"fig, ax = plt.subplots()\nweek_by_dept.iloc[:, :2].plot(ax=ax)\ndf_pred.plot(ax=ax)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\ny_pred = pd.Series(y_pred, index=one_dept.index[104:])\none_dept.plot(ax=ax)\ny_pred.plot(ax=ax)",
"_____no_output_____"
],
[
"from statsmodels.tsa.arima_model import ARIMA\nX = week_by_dept.iloc[:, 0]\nsize = int(len(X) * 0.66)\ntrain, test = X[0:size], X[size:len(X)]\nmodel = ARIMA(train, order=(0,1,0))\nmodel_fit = model.fit(disp=0)\noutput = model_fit.forecast(steps=len(test))\npredictions = output[0]",
"/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/statsmodels/tsa/base/tsa_model.py:171: ValueWarning: No frequency information was provided, so inferred frequency W-FRI will be used.\n % freq, ValueWarning)\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/statsmodels/tsa/base/tsa_model.py:171: ValueWarning: No frequency information was provided, so inferred frequency W-FRI will be used.\n % freq, ValueWarning)\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/scipy/signal/signaltools.py:1341: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n out_full[ind] += zi\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/scipy/signal/signaltools.py:1344: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n out = out_full[ind]\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/scipy/signal/signaltools.py:1350: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n zf = out_full[ind]\n"
],
[
"fig, ax = plt.subplots()\nX.plot(ax=ax)\ny_pred = pd.Series(predictions, index=test.index)\ny_pred.plot(ax=ax)",
"_____no_output_____"
],
[
"from statsmodels.tsa.arima_model import ARIMA\nX = week_by_dept.iloc[:, 0]\nsize = int(len(X) * 0.66)\ntrain, test = X[0:size], X[size:len(X)]\nhistory = [x for x in train]\npredictions = list()\nfor t in range(len(test)):\n model = ARIMA(history, order=(0,1,0))\n model_fit = model.fit(disp=0)\n output = model_fit.forecast()\n yhat = output[0][0]\n predictions.append(yhat)\n obs = test[t]\n history.append(obs)\n print('predicted=%f, expected=%f' % (yhat, obs))",
"/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/scipy/signal/signaltools.py:1341: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n out_full[ind] += zi\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/scipy/signal/signaltools.py:1344: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n out = out_full[ind]\n/Users/benlindsay/miniconda/envs/walmart/lib/python3.6/site-packages/scipy/signal/signaltools.py:1350: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n zf = out_full[ind]\n"
],
[
"fig, ax = plt.subplots()\nX.plot(ax=ax)\ny_pred = pd.Series(predictions, index=test.index)\ny_pred.plot(ax=ax)",
"_____no_output_____"
],
[
"# df = pd.DataFrame({'pred': predictions, 'actual': history}, index=test.index)\n# df.plot(ax=ax)\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
e72ce9db418689f8a3f2c5b0fd68575e72c3fe92 | 159,862 | ipynb | Jupyter Notebook | tsa/jose/UDEMY_TSA_FINAL (1)/05-Time-Series-Analysis-with-Statsmodels/02-EWMA-Exponentially-Weighted-Moving-Average.ipynb | juspreet51/ml_templates | 60c9219f27a2ada97cde0b701c5be9321dda38c4 | [
"MIT"
] | 3 | 2021-04-09T03:01:00.000Z | 2021-08-09T19:50:39.000Z | 05-Time-Series-Analysis-with-Statsmodels/02-EWMA-Exponentially-Weighted-Moving-Average.ipynb | mullazeeshan/Time-Series-Analysis-With-Python-TSA- | d83d613001ce8579fdcc7d803f3f0316e9e31ada | [
"Apache-2.0"
] | null | null | null | 05-Time-Series-Analysis-with-Statsmodels/02-EWMA-Exponentially-Weighted-Moving-Average.ipynb | mullazeeshan/Time-Series-Analysis-With-Python-TSA- | d83d613001ce8579fdcc7d803f3f0316e9e31ada | [
"Apache-2.0"
] | 4 | 2021-04-09T14:39:55.000Z | 2022-03-28T11:41:23.000Z | 291.187614 | 71,972 | 0.911317 | [
[
[
"___\n\n<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>\n___\n<center><em>Copyright Pierian Data</em></center>\n<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>",
"_____no_output_____"
],
[
"# MA\n## Moving Averages\nIn this section we'll compare <em>Simple Moving Averages</em> to <em>Exponentially Weighted Moving Averages</em> in terms of complexity and performance.\n\n<div class=\"alert alert-info\"><h3>Related Functions:</h3>\n<tt><strong><a href='https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html'>pandas.DataFrame.rolling</a></strong><font color=black>(window)</font> \nProvides rolling window calculations<br>\n<strong><a href='https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html'>pandas.DataFrame.ewm</a></strong><font color=black>(span)</font> \nProvides exponential weighted functions</tt></div></div>\n\n### Perform standard imports and load the dataset\nFor these examples we'll use the International Airline Passengers dataset, which gives monthly totals in thousands from January 1949 to December 1960.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n%matplotlib inline",
"_____no_output_____"
],
[
"airline = pd.read_csv('../Data/airline_passengers.csv',index_col='Month',parse_dates=True)",
"_____no_output_____"
],
[
"airline.dropna(inplace=True)",
"_____no_output_____"
],
[
"airline.head()",
"_____no_output_____"
]
],
[
[
"___\n# SMA\n## Simple Moving Average\n\nWe've already shown how to create a <a href='https://en.wikipedia.org/wiki/Moving_average#Simple_moving_average'>simple moving average</a> by applying a <tt>mean</tt> function to a rolling window.\n\nFor a quick review:",
"_____no_output_____"
]
],
[
[
"airline['6-month-SMA'] = airline['Thousands of Passengers'].rolling(window=6).mean()\nairline['12-month-SMA'] = airline['Thousands of Passengers'].rolling(window=12).mean()",
"_____no_output_____"
],
[
"airline.head(15)",
"_____no_output_____"
],
[
"airline.plot();",
"_____no_output_____"
]
],
[
[
"___\n# EWMA\n## Exponentially Weighted Moving Average \n\nWe just showed how to calculate the SMA based on some window. However, basic SMA has some weaknesses:\n* Smaller windows will lead to more noise, rather than signal\n* It will always lag by the size of the window\n* It will never reach to full peak or valley of the data due to the averaging.\n* Does not really inform you about possible future behavior, all it really does is describe trends in your data.\n* Extreme historical values can skew your SMA significantly\n\nTo help fix some of these issues, we can use an <a href='https://en.wikipedia.org/wiki/Exponential_smoothing'>EWMA (Exponentially weighted moving average)</a>.",
"_____no_output_____"
],
[
"EWMA will allow us to reduce the lag effect from SMA and it will put more weight on values that occured more recently (by applying more weight to the more recent values, thus the name). The amount of weight applied to the most recent values will depend on the actual parameters used in the EWMA and the number of periods given a window size.\n[Full details on Mathematics behind this can be found here](http://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows).\nHere is the shorter version of the explanation behind EWMA.\n\nThe formula for EWMA is:\n### $y_t = \\frac{\\sum\\limits_{i=0}^t w_i x_{t-i}}{\\sum\\limits_{i=0}^t w_i}$",
"_____no_output_____"
],
[
"Where $x_t$ is the input value, $w_i$ is the applied weight (Note how it can change from $i=0$ to $t$), and $y_t$ is the output.\n\nNow the question is, how to we define the weight term $w_i$?\n\nThis depends on the <tt>adjust</tt> parameter you provide to the <tt>.ewm()</tt> method.\n\nWhen <tt>adjust=True</tt> (default) is used, weighted averages are calculated using weights equal to $w_i = (1 - \\alpha)^i$\n\nwhich gives\n\n### $y_t = \\frac{x_t + (1 - \\alpha)x_{t-1} + (1 - \\alpha)^2 x_{t-2} + ...\n+ (1 - \\alpha)^t x_{0}}{1 + (1 - \\alpha) + (1 - \\alpha)^2 + ...\n+ (1 - \\alpha)^t}$",
"_____no_output_____"
],
[
"When <tt>adjust=False</tt> is specified, moving averages are calculated as:\n\n### $\\begin{split}y_0 &= x_0 \\\\\ny_t &= (1 - \\alpha) y_{t-1} + \\alpha x_t,\\end{split}$\n\nwhich is equivalent to using weights:\n\n \\begin{split}w_i = \\begin{cases}\n \\alpha (1 - \\alpha)^i & \\text{if } i < t \\\\\n (1 - \\alpha)^i & \\text{if } i = t.\n\\end{cases}\\end{split}",
"_____no_output_____"
],
[
"When <tt>adjust=True</tt> we have $y_0=x_0$ and from the last representation above we have \n$y_t=\\alpha x_t+(1−α)y_{t−1}$, therefore there is an assumption that $x_0$ is not an ordinary value but rather an exponentially weighted moment of the infinite series up to that point.\n\nFor the smoothing factor $\\alpha$ one must have $0<\\alpha≤1$, and while it is possible to pass <em>alpha</em> directly, it’s often easier to think about either the <em>span</em>, <em>center of mass</em> (com) or <em>half-life</em> of an EW moment:",
"_____no_output_____"
],
[
"\\begin{split}\\alpha =\n \\begin{cases}\n \\frac{2}{s + 1}, & \\text{for span}\\ s \\geq 1\\\\\n \\frac{1}{1 + c}, & \\text{for center of mass}\\ c \\geq 0\\\\\n 1 - \\exp^{\\frac{\\log 0.5}{h}}, & \\text{for half-life}\\ h > 0\n \\end{cases}\\end{split}",
"_____no_output_____"
],
[
"* <strong>Span</strong> corresponds to what is commonly called an “N-day EW moving average”.\n* <strong>Center of mass</strong> has a more physical interpretation and can be thought of in terms of span: $c=(s−1)/2$\n* <strong>Half-life</strong> is the period of time for the exponential weight to reduce to one half.\n* <strong>Alpha</strong> specifies the smoothing factor directly.\n\nWe have to pass precisely one of the above into the <tt>.ewm()</tt> function. For our data we'll use <tt>span=12</tt>.",
"_____no_output_____"
]
],
[
[
"airline['EWMA12'] = airline['Thousands of Passengers'].ewm(span=12,adjust=False).mean()",
"_____no_output_____"
],
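[
"The next cell is an added illustrative check (not part of the original notebook): a minimal sketch, assuming the <tt>airline</tt> DataFrame and the <tt>EWMA12</tt> column created above, that reproduces pandas' <tt>ewm(span=12, adjust=False).mean()</tt> by hand from the recursion $y_t = \\alpha x_t + (1-\\alpha)y_{t-1}$ with $\\alpha = 2/(s+1)$.",
"_____no_output_____"
],
[
"# Added check (not in the original notebook): hand-rolled EWMA recursion for adjust=False,\n# verified against the pandas result above. Same alpha via span=12, com=(12-1)/2, or alpha=2/13.\nalpha = 2 / (12 + 1)   # span s=12  =>  alpha = 2/(s+1)\nx = airline['Thousands of Passengers'].values.astype(float)\nmanual = np.empty_like(x)\nmanual[0] = x[0]       # adjust=False starts the recursion at y_0 = x_0\nfor t in range(1, len(x)):\n    manual[t] = alpha * x[t] + (1 - alpha) * manual[t - 1]\nnp.allclose(manual, airline['EWMA12'].values)",
"_____no_output_____"
],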
[
"airline[['Thousands of Passengers','EWMA12']].plot();",
"_____no_output_____"
]
],
[
[
"## Comparing SMA to EWMA",
"_____no_output_____"
]
],
[
[
"airline[['Thousands of Passengers','EWMA12','12-month-SMA']].plot(figsize=(12,8)).autoscale(axis='x',tight=True);",
"_____no_output_____"
]
],
[
[
"## Simple Exponential Smoothing\nThe above example employed <em>Simple Exponential Smoothing</em> with one smoothing factor <strong>α</strong>. Unfortunately, this technique does a poor job of forecasting when there is a trend in the data as seen above. In the next section we'll look at <em>Double</em> and <em>Triple Exponential Smoothing</em> with the Holt-Winters Methods.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
e72d00ed943dc5eed4a0f22f816d56ccc8cd9274 | 320 | ipynb | Jupyter Notebook | notebooks/book1/18/hinge_loss_plot.ipynb | patel-zeel/pyprobml | 027ef3c13a2a63d958e05fdedb68fd7b8f0e0261 | [
"MIT"
] | null | null | null | notebooks/book1/18/hinge_loss_plot.ipynb | patel-zeel/pyprobml | 027ef3c13a2a63d958e05fdedb68fd7b8f0e0261 | [
"MIT"
] | 1 | 2022-03-27T04:59:50.000Z | 2022-03-27T04:59:50.000Z | notebooks/book1/18/hinge_loss_plot.ipynb | patel-zeel/pyprobml | 027ef3c13a2a63d958e05fdedb68fd7b8f0e0261 | [
"MIT"
] | 2 | 2022-03-26T11:52:36.000Z | 2022-03-27T05:17:48.000Z | 20 | 149 | 0.615625 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
e72d09ef33d8614fbdcf5719c68c87baa8902d06 | 90,632 | ipynb | Jupyter Notebook | OCEA-267/Lectures/W4_L7.ipynb | profxj/ocea200 | 562077f498d4283fb5d456b634e8f2f0bcaf539c | [
"BSD-3-Clause"
] | null | null | null | OCEA-267/Lectures/W4_L7.ipynb | profxj/ocea200 | 562077f498d4283fb5d456b634e8f2f0bcaf539c | [
"BSD-3-Clause"
] | 3 | 2019-10-09T04:04:54.000Z | 2019-11-28T16:12:30.000Z | OCEA-267/Lectures/W4_L7.ipynb | profxj/ocea200 | 562077f498d4283fb5d456b634e8f2f0bcaf539c | [
"BSD-3-Clause"
] | null | null | null | 159.003509 | 28,136 | 0.861914 | [
[
[
"# Lecture 7",
"_____no_output_____"
]
],
[
[
"# imports \n# imports\nimport numpy as np\nfrom scipy.ndimage import uniform_filter1d\nfrom scipy.stats import shapiro\nfrom matplotlib import pyplot as plt\nimport pandas\n\nfrom statsmodels.tsa.seasonal import seasonal_decompose\nimport statsmodels.api as sm\nfrom statsmodels.stats.stattools import durbin_watson\nimport statsmodels.formula.api as smf",
"_____no_output_____"
],
[
"def set_fontsize(ax,fsz):\n \"\"\"\n Set the fontsize throughout an Axis\n \n Args:\n ax (Matplotlib Axis): \n fsz (float): Font size\n\n Returns:\n\n \"\"\"\n for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +\n ax.get_xticklabels() + ax.get_yticklabels()):\n item.set_fontsize(fsz)",
"_____no_output_____"
]
],
[
[
"# Monte Carlo",
"_____no_output_____"
]
],
[
[
"nrand = 100",
"_____no_output_____"
]
],
[
[
"## Random ",
"_____no_output_____"
]
],
[
[
"def grab_norm(size=nrand):\n return np.random.normal(size=size)",
"_____no_output_____"
],
[
"time = np.arange(r_norm.size)",
"_____no_output_____"
],
[
"data = pandas.DataFrame()\ndata['time'] = time",
"_____no_output_____"
]
],
[
[
"### Fit",
"_____no_output_____"
]
],
[
[
"data['norm'] = grab_norm()\nformula = \"norm ~ time\"\nmod1 = smf.glm(formula=formula, data=data).fit()#, family=sm.families.Binomial()).fit()",
"_____no_output_____"
],
[
"mod1.summary()",
"_____no_output_____"
],
[
"mod1.pvalues.Intercept",
"_____no_output_____"
]
],
[
[
"### Plot",
"_____no_output_____"
]
],
[
[
"def plot_me(data, model, entry):\n plt.clf()\n fig = plt.figure(figsize=(12,8))\n #\n ax = plt.gca()\n ax.plot(data['time'], data[entry], 'o', ms=2)\n # Fit\n ax.plot(data['time'], mod1.fittedvalues, label=f'p-value({entry}) = {mod1.pvalues.Intercept}')\n #\n set_fontsize(ax, 17)\n ax.legend(fontsize=17)\n #\n plt.show()",
"_____no_output_____"
]
],
[
[
"### Run a bunch",
"_____no_output_____"
]
],
[
[
"key = 'norm'\ndata[key] = grab_norm()\nformula = f\"{key} ~ time\"\nmod1 = smf.glm(formula=formula, data=data).fit()#, family=sm.families.Binomial()).fit()\nplot_me(data, mod1, key)",
"_____no_output_____"
]
],
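[
[
"An added sketch (not in the original notebook) that takes \"run a bunch\" literally: refit on fresh noise many times and count how often the slope p-value falls below 0.05. For i.i.d. normal data this false-positive rate should sit near 5%.",
"_____no_output_____"
],
[
"# Added Monte Carlo sketch (not in the original notebook): false-positive rate of the\n# trend test on pure noise; `data` and `grab_norm` are defined above.\nn_trials = 200\nhits = 0\nfor _ in range(n_trials):\n    data['norm'] = grab_norm()\n    fit = smf.glm(formula='norm ~ time', data=data).fit()\n    if fit.pvalues['time'] < 0.05:\n        hits += 1\nprint(f'{hits / n_trials:.3f} of trials flag a spurious trend at p < 0.05')",
"_____no_output_____"
]
],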
[
[
"# Log-normal",
"_____no_output_____"
]
],
[
[
"def grab_lognorm(size=nrand):\n return np.random.lognormal(size=size)",
"_____no_output_____"
],
[
"key = 'lnorm'\ndata[key] = grab_lognorm()\nformula = f\"{key} ~ time\"\nmod1 = smf.glm(formula=formula, data=data).fit()#, family=sm.families.Binomial()).fit()\nplot_me(data, mod1, key)",
"_____no_output_____"
],
[
"mod1.summary()",
"_____no_output_____"
]
],
[
[
"# Auto-correlated data",
"_____no_output_____"
]
],
[
[
"## Stolen from the internet...",
"_____no_output_____"
],
[
"def sample_signal(n_samples, corr, mu=0, sigma=1):\n assert 0 < corr < 1, \"Auto-correlation must be between 0 and 1\"\n\n # Find out the offset `c` and the std of the white noise `sigma_e`\n # that produce a signal with the desired mean and variance.\n # See https://en.wikipedia.org/wiki/Autoregressive_model\n # under section \"Example: An AR(1) process\".\n c = mu * (1 - corr)\n sigma_e = np.sqrt((sigma ** 2) * (1 - corr ** 2))\n\n # Sample the auto-regressive process.\n signal = [c + np.random.normal(0, sigma_e)]\n for _ in range(1, n_samples):\n signal.append(c + corr * signal[-1] + np.random.normal(0, sigma_e))\n\n return np.array(signal)\n\ndef compute_corr_lag_1(signal):\n return np.corrcoef(signal[:-1], signal[1:])[0][1]",
"_____no_output_____"
],
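[
"A short added usage sketch (not in the original notebook): draw one long AR(1) sample with `sample_signal` and confirm that `compute_corr_lag_1` recovers roughly the requested lag-1 autocorrelation.",
"_____no_output_____"
],
[
"# Added demo (not in the original notebook): for a long sample the empirical\n# lag-1 correlation should land near the requested corr=0.9.\ndemo = sample_signal(5000, corr=0.9)\nprint(compute_corr_lag_1(demo))",
"_____no_output_____"
],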
[
"key = 'corr'\ndata[key] = sample_signal(nrand, 0.9)\nformula = f\"{key} ~ time\"\nmod1 = smf.glm(formula=formula, data=data).fit()#, family=sm.families.Binomial()).fit()\nplot_me(data, mod1, key)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
e72d1b16c1dc70ba1ea51886524f0940f3b1e69e | 100,642 | ipynb | Jupyter Notebook | notebooks/1_0_agglomerative_clustering.ipynb | nymarya/school-budgets-for-education | c25b13bdac001e6523d55a6c4192f4d6cc67c6ff | [
"MIT"
] | 4 | 2019-03-18T14:27:19.000Z | 2019-04-16T00:01:24.000Z | notebooks/1_0_agglomerative_clustering.ipynb | nymarya/school-budgets-for-education | c25b13bdac001e6523d55a6c4192f4d6cc67c6ff | [
"MIT"
] | null | null | null | notebooks/1_0_agglomerative_clustering.ipynb | nymarya/school-budgets-for-education | c25b13bdac001e6523d55a6c4192f4d6cc67c6ff | [
"MIT"
] | 1 | 2020-03-24T12:06:54.000Z | 2020-03-24T12:06:54.000Z | 76.708841 | 21,356 | 0.739791 | [
[
[
"## Import libraries and load files",
"_____no_output_____"
]
],
[
[
"from tensorflow.python.client import device_lib\n\ndef get_available_gpus():\n local_device_protos = device_lib.list_local_devices()\n return [x.name for x in local_device_protos if x.device_type == 'GPU']",
"_____no_output_____"
],
[
"get_available_gpus()",
"_____no_output_____"
],
[
"import tensorflow as tf\n\nimport pandas as pd\nimport numpy as np\nfrom sklearn.feature_extraction.text import HashingVectorizer\nfrom sklearn.pipeline import FeatureUnion, Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import FunctionTransformer, LabelEncoder\nfrom sklearn.decomposition import TruncatedSVD\nfrom sklearn.metrics import SCORERS\n\nfrom sklearn.model_selection import KFold\nfrom sklearn.cluster import AgglomerativeClustering\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.decomposition import TruncatedSVD\nfrom warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom datetime import datetime\n\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.metrics import silhouette_samples, silhouette_score, davies_bouldin_score, adjusted_rand_score",
"_____no_output_____"
],
[
"#Read the data using the Unnamed (probably id) as index\nurl = 'https://s3.amazonaws.com/drivendata/data/4/public/81e8f2de-9915-4934-b9ae-9705685c9d50.csv'\n#url = '../src/data/raw/training.csv'\ntraining = pd.read_csv(url, index_col='Unnamed: 0')\n\nlabels = ['Function', 'Object_Type', 'Operating_Status', 'Position_Type', 'Pre_K', 'Reporting', \n 'Sharing', 'Student_Type', 'Use']\n\nnumeric = ['FTE', 'Total']\n\ncategoric = [ 'Facility_or_Department', 'Function_Description', \n 'Fund_Description', 'Job_Title_Description', 'Location_Description', \n 'Object_Description', 'Position_Extra', 'Program_Description', 'SubFund_Description', \n 'Sub_Object_Description', \n 'Text_1', 'Text_2', 'Text_3', 'Text_4']",
"_____no_output_____"
]
],
[
[
"### FunctionTransformers",
"_____no_output_____"
]
],
[
[
"# Define combine_text_columns()\ndef combine_text_columns(data_frame):\n \"\"\" converts all text in each row of data_frame to single vector \"\"\"\n \n # Drop non-text columns that are in the df\n text_data = data_frame[categoric].copy()\n \n # Replace nans with blanks\n text_data.fillna(\"\", inplace=True)\n \n for category in categoric:\n training.loc[:,category] = training[category].str.lower()\n \n \n # Join all text items in a row that have a space in between\n return text_data.apply(lambda x: \" \".join(x), axis=1)",
"_____no_output_____"
],
[
"groupped_FTE = training[['FTE', 'Object_Type']].groupby(by='Object_Type')\ngroupped_total = training[['Total', 'Object_Type']].groupby(by='Object_Type')\n# Define combine_numeric_columns()\ndef combine_numeric_columns(data_frame, groupped_FTE=groupped_FTE, groupped_total=groupped_total):\n \"\"\" process all the numeric data \"\"\"\n \n # Drop non-numeric columns that are in the df\n data = data_frame[numeric].copy()\n \n #Remove inconsistent data\n data.loc[(data[numeric[0]] < 0) | (data[numeric[0]] > 1), numeric[0]] = np.nan\n data.loc[(data[numeric[1]] < 0), numeric[1]] = np.nan\n \n #Impute the missing data with the median from each class\n for group in groupped_FTE.median().index:\n indexes_FTE = groupped_FTE.get_group(group).index.values\n indexes_total = groupped_total.get_group(group).index.values\n data.loc[ data.FTE.isnull() & np.isin(data.index.values,indexes_FTE), 'FTE'] = groupped_FTE.median().loc[group, \"FTE\"]\n data.loc[ data.Total.isnull() & np.isin(data.index.values,indexes_total), 'Total'] = groupped_total.median().loc[group,\"Total\"]\n \n return data",
"_____no_output_____"
],
[
"# Preprocess the text data: get_text_data\nget_text_data = FunctionTransformer(combine_text_columns, validate=False)\n\n# Preprocess the numeric data: get_numeric_data\nget_numeric_data = FunctionTransformer(combine_numeric_columns, validate=False)",
"_____no_output_____"
],
[
"# Recover the targets and split the data\ny = pd.get_dummies(training['Object_Type'])\n\nX = training.drop(columns=labels)\n\n# rus = RandomUnderSampler(random_state=0)\n# X_resampled, y_resampled = rus.fit_resample(X, y)",
"_____no_output_____"
]
],
[
[
"### Pipeline\n\nApply the transformations on numeric and categorica data. Neither dimension reduction or standard scaler are used.",
"_____no_output_____"
]
],
[
[
"pl = Pipeline([\n ('union', FeatureUnion(\n transformer_list = [\n ('numeric_features', Pipeline([\n ('selector', get_numeric_data),\n ('imp', SimpleImputer())\n ])),\n ('text_features', Pipeline([\n ('selector', get_text_data),\n ('vectorizer',HashingVectorizer(token_pattern=\"[A-Za-z0-9]+(?=\\\\s+)\", \n norm=None, \n binary=False,\n ngram_range=(1,2)) \n )\n ]))\n ]\n )),\n ('reduce_dim', TruncatedSVD(n_components = 100)),\n # ('clf', AgglomerativeClustering(memory='mycachedir', \n # compute_full_tree=True, n_clusters=3))\n \n ])",
"_____no_output_____"
],
[
"col_names = list(range(0,11))",
"_____no_output_____"
],
[
"y.columns = col_names",
"_____no_output_____"
],
[
"labels_true = y.idxmax(axis=1)",
"_____no_output_____"
]
],
[
[
"Applying the steps, we got a sparse matrix with 1048578 features.",
"_____no_output_____"
]
],
[
[
"data_X= pl.fit_transform(X, labels_true)",
"_____no_output_____"
],
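[
"An added sanity check (not in the original notebook) for the dimensionality described above, assuming `data_X` from the previous cell.",
"_____no_output_____"
],
[
"# Added check (not in the original notebook): after TruncatedSVD the design matrix\n# should have shape (n_samples, 100)\ndata_X.shape",
"_____no_output_____"
],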
[
"rus = RandomUnderSampler()\nX_resampled, y_resampled = rus.fit_resample(data_X, labels_true)",
"_____no_output_____"
],
[
"X_resampled.shape",
"_____no_output_____"
],
[
"y_resampled.shape",
"_____no_output_____"
]
],
[
[
"With the purpose of calculating the adjusted rand score, we need to set the labels to numbers between 0 and 10.",
"_____no_output_____"
],
[
"## Training\nThe model is trained and tested using the number of groups varying between 2 and 20. As the agglomerative clustering method is deterministic, the model is fitted only one time.",
"_____no_output_____"
]
],
[
[
"results = []",
"_____no_output_____"
],
[
"for k in range(2, 21):\n agg = AgglomerativeClustering(memory='mycachedir', \n compute_full_tree=True, n_clusters=k)\n start = datetime.now()\n with tf.device('/gpu:0'):\n #fit model to data\n cluster_labels = agg.fit_predict(X_resampled)\n \n end = datetime.now()\n # The silhouette_score gives the average value for all the samples.\n # This gives a perspective into the density and separation of the formed\n # clusters\n silhouette_avg = silhouette_score(X_resampled, cluster_labels)\n print(\"For n_clusters =\", k,\n \"The average silhouette_score is :\", silhouette_avg)\n \n db_avg = davies_bouldin_score(X_resampled, cluster_labels)\n print(\"The average db_score is :\", db_avg)\n \n ars_avg = adjusted_rand_score(y_resampled, cluster_labels)\n print(\"The average ars_score is :\", ars_avg)\n \n \n # Append the results\n results.append({'k':k, 'silhouette': silhouette_avg,\n 'db': db_avg,'time': end-start, 'ars':ars_avg})",
"For n_clusters = 2 The average silhouette_score is : 0.9973498370915256\nThe average db_score is : 0.4211945242711094\nThe average ars_score is : 6.826789991637767e-08\n"
],
[
"len(results)",
"_____no_output_____"
],
[
"results_df = pd.DataFrame(results)",
"_____no_output_____"
],
[
"results_df.head()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"results_df['seconds'] = results_df.time.apply(lambda x: x.total_seconds() )",
"_____no_output_____"
],
[
"results_df.head()",
"_____no_output_____"
],
[
"import seaborn as sns",
"_____no_output_____"
],
[
"sns.lineplot(results_df.k, results_df.silhouette)",
"_____no_output_____"
],
[
"ax = sns.lineplot(\"k\", \"db\", data=results_df,markers=True)",
"_____no_output_____"
],
[
"ax = sns.lineplot(\"k\", \"seconds\", data=results_df)",
"_____no_output_____"
],
[
"ax = sns.lineplot(\"k\", \"ars\", data=results_df)",
"_____no_output_____"
],
[
"results_df.to_csv('agg_clustering.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |