Unnamed: 0 — int64, values 0 to 16k
text_prompt — string, lengths 110 to 62.1k
code_prompt — string, lengths 37 to 152k
9,200
Given the following text description, write Python code to implement the functionality described below step by step Description: HTTP requests In this tutorial it is covered how to make requests via the HTTP protocol. For more information about related topics see Step1: The variable data contains the returned HTML code (full page) as a string. You can process it, save it, or do anything else you need. Requests An example of how to get the static content of a web page with Requests follows. Step2: Get JSON data from an API This task is demonstrated on Open Notify - an open source project that provides a simple programming interface for some of NASA's awesome data. The examples below cover how to obtain the current position of the ISS. With the Requests library it is possible to get the JSON from the API in the same way as HTML data. Step3: The Requests function json() converts the JSON response to a Python dictionary. The next code block demonstrates how to get data from the obtained response. Persistent session with Requests Sessions with Requests are handy for cases where you need to use the same cookies (session cookies, for example) or authentication for multiple requests. Step4: Compare the output of the code above with the example below. Step5: Custom headers Headers of the response are easy to check; an example follows. Step6: The request headers can be modified in a simple way as follows.
Python Code: from urllib.request import urlopen r = urlopen('http://www.python.org/') data = r.read() print("Status code:", r.getcode()) Explanation: HTTP requests In this tutorial it is covered how to make requests via the HTTP protocol. For more information about related topics see: * <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol">Hypertext Transfer Protocol (HTTP)</a> * <a href="https://en.wikipedia.org/wiki/JSON">JavaScript Object Notation</a> * <a href="https://en.wikipedia.org/wiki/HTML">HyperText Markup Language (HTML)</a> Keep in mind that in this tutorial we work only with static content. How to obtain dynamic web content is not covered in this tutorial. If you want to deal with dynamic content, study <a href="http://selenium-python.readthedocs.io/">Selenium Python Bindings</a>. Get HTML page content This section shows examples of how to get an HTTP response with two different libraries: * <a href="https://docs.python.org/3.4/library/urllib.html?highlight=urllib">urllib</a> (standard library in Python 3) * <a href="http://docs.python-requests.org/en/master/">Requests</a> (installable through pip) In this tutorial the Requests library is mainly used, as the preferred option. urllib library An example of how to get the static content of a web page with urllib follows: End of explanation import requests r = requests.get("http://www.python.org/") data = r.text print("Status code:", r.status_code) Explanation: The variable data contains the returned HTML code (full page) as a string. You can process it, save it, or do anything else you need. Requests An example of how to get the static content of a web page with Requests follows. End of explanation import requests r = requests.get("http://api.open-notify.org/iss-now.json") obj = r.json() print(obj) Explanation: Get JSON data from an API This task is demonstrated on Open Notify - an open source project that provides a simple programming interface for some of NASA's awesome data. The examples below cover how to obtain the current position of the ISS. With the Requests library it is possible to get the JSON from the API in the same way as HTML data. End of explanation s = requests.Session() print("No cookies on start: ") print(dict(s.cookies)) r = s.get('http://google.cz/') print("\nA cookie from google: ") print(dict(s.cookies)) r = s.get('http://google.cz/?q=cat') print("\nThe cookie is persistent:") print(dict(s.cookies)) Explanation: The Requests function json() converts the JSON response to a Python dictionary. The next code block demonstrates how to get data from the obtained response. Persistent session with Requests Sessions with Requests are handy for cases where you need to use the same cookies (session cookies, for example) or authentication for multiple requests. End of explanation r = requests.get('http://google.cz/') print("\nA cookie from google: ") print(dict(r.cookies)) r = requests.get('http://google.cz/?q=cat') print("\nDifferent cookie:") print(dict(r.cookies)) Explanation: Compare the output of the code above with the example below. End of explanation r = requests.get("http://www.python.org/") print(r.headers) Explanation: Custom headers Headers of the response are easy to check; an example follows. End of explanation headers = { "Accept": "text/plain", } r = requests.get("http://www.python.org/", headers=headers) print(r.status_code) Explanation: The request headers can be modified in a simple way as follows. End of explanation
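A minimal sketch of reading individual fields from the obtained response, assuming the usual Open Notify payload in which an "iss_position" entry holds "latitude" and "longitude" as strings:

import requests

# Fetch the current ISS position and read single fields from the parsed dictionary.
r = requests.get("http://api.open-notify.org/iss-now.json")
obj = r.json()
position = obj.get("iss_position", {})  # assumed key in the Open Notify response
print("Latitude:", position.get("latitude"))
print("Longitude:", position.get("longitude"))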
9,201
Given the following text description, write Python code to implement the functionality described below step by step Description: Loops and control flow Seongjoo Lee (c) 2015 Step1: for Step2: Indentation is the syntax Step3: Indentation and scope Step4: List comprehension Running a loop inside a list Step5: Functions that generate lists of numbers Step6: Repeating a specific number of times Step7: When an index is needed Step8: while Step9: Challenge A poker deck has 52 cards. Each card consists of a suit and a rank. There are 4 suits (Diamond, Heart, Spade, Clover), and the ranks are the 13 values 2, 3, ..., 9, 10, J, Q, K, A. Create the 52 poker cards and store them in the variable deck. Example Step10: Challenge A six-sided die takes values from 1 to 6. Accumulate the sum of the top faces that come up when rolling the die. Write a program that shows how many rolls it took for the cumulative sum to reach 100 or more. The syntax for drawing a random number in the interval [a, b] is as follows. import random # generate a random integer between 0 and 1 n = random.randint(0, 1) Step11: Control flow Step12: Challenge The 3-6-9 game. Starting from 1, the number increases by 1 each time. If the number contains at least one of the digits 3, 6, or 9, print 'Clap!' instead of the number. Example output Step13: Challenge The multiples of 3 or 5 up to 10 are 3, 5, 6, 9. The sum of these numbers is 23. Find the sum of the multiples of 3 or 5 up to 1000. Challenge Each student record consists of a name and scores in three subjects. Store the records of three students. Compute each student's average score. Compute the average score for each subject. Step14: Challenge The prime factorization of 13195 is 5, 7, 13, 29. What is the largest prime factor of 600851475143? Challenge Numbers such as 1001, 2002, and 3003 are called palindromic numbers. The largest palindrome made from the product of two two-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two three-digit numbers.
Python Code: # use the Python 3 style print function from __future__ import print_function Explanation: Loops and control flow Seongjoo Lee (c) 2015 End of explanation print([1,2,3]) for n in [1,2,3]: print(n) Explanation: for End of explanation for n in [1,2,3]: print(n) print(n) for key in {'name': '이성주', 'email':'seongjoo@codebasic'}: print(key) profile = {'name': '이성주', 'email':'seongjoo@codebasic'} profile.items() for key, value in profile.items(): print(key, value) for c in 'python': print(c, end=':::') Explanation: Indentation is the syntax End of explanation nums_square = [] for n in [1,2,3,4,5]: nums_square.append(n**2) print(nums_square) Explanation: Indentation and scope End of explanation nums_square = [n**2 for n in [1,2,3,4,5]] print(nums_square) Explanation: List comprehension Running a loop inside a list End of explanation range(10) # Python 3 version list(range(10)) range(2,11) range(1,11,2) Explanation: Functions that generate lists of numbers End of explanation for x in range(3): print('Well done!') Explanation: Repeating a specific number of times End of explanation it = enumerate('abc') it.next() it.next() for i, x in enumerate(range(3)): print(i+1, 'Well done!') profile = {'name':'이성주', 'email': '[email protected]'} for k,v in profile.items(): print(k,v) Explanation: When an index is needed End of explanation x=1 while x < 3: print(x) x += 1 isTrueLove = True while isTrueLove: print("I love you") break Explanation: while End of explanation # create the 52-card deck suits = ['Heart', 'Diamond', 'Clover', 'Spade'] ranks = range(2,11)+['J', 'Q', 'K', 'A'] deck = [] for s in suits: for r in ranks: card = s + str(r) deck.append(card) # print one line per suit suit_collection = [[], [], [], []] previous_suit = '' for card in deck: # take the card's suit suit = card[0] # cards of the same suit go on the same line if suit == 'H': suit_collection[0].append(card) elif suit == 'D': suit_collection[1].append(card) elif suit == 'C': suit_collection[2].append(card) elif suit == 'S': suit_collection[3].append(card) else: print('There is no such suit!') for suits in suit_collection: for card in suits: print(card, end=', ') print('\n') deck = [[s,r] for s in ['Heart', 'Diamond', 'Spade', 'Clover'] for r in range(2,11)+['J','Q','K', 'A']] len(deck) Explanation: Challenge A poker deck has 52 cards. Each card consists of a suit and a rank. There are 4 suits (Diamond, Heart, Spade, Clover), and the ranks are the 13 values 2, 3, ..., 9, 10, J, Q, K, A. Create the 52 poker cards and store them in the variable deck. Example: 'Diamond 3', 'Heart Q' a. Store each poker card's information as a string. b. Store each poker card's information in a list data structure. Example: ['Diamond', 3], ['Heart', 'Q'] c. Create the whole poker deck with a single line of code. d. Print cards of the same suit on the same line, so that the whole deck is printed on four lines in total. End of explanation import random total = 0 count = 0 while total < 100: face = random.randint(1,6) total += face count += 1 print(count) Explanation: Challenge A six-sided die takes values from 1 to 6. Accumulate the sum of the top faces that come up when rolling the die. Write a program that shows how many rolls it took for the cumulative sum to reach 100 or more. The syntax for drawing a random number in the interval [a, b] is as follows.
import random # generate a random integer between 0 and 1 n = random.randint(0, 1) End of explanation x = 10 if x < 5: fruit = 'banana' else: fruit = 'apple' print(fruit) hour = 13 greeting = 'Good' if 5 < hour < 12: # morning greeting greeting += ' morning' elif 12 <= hour < 18: greeting += ' afternoon' else: greeting += ' night!' print(greeting) if 'a': print('hi') Explanation: Control flow End of explanation # generate the numbers N = 20 for n in range(1,N): # check whether the number contains 3, 6, or 9 if '3' in str(n) or '6' in str(n) or '9' in str(n): print('Clap!', end=' ') continue print(n, end=' ') False or 6 Explanation: Challenge The 3-6-9 game. Starting from 1, the number increases by 1 each time. If the number contains at least one of the digits 3, 6, or 9, print 'Clap!' instead of the number. Example output: 1 2 Clap! 4 5 Clap! 7 8 Clap! End of explanation float(1/2) 3/2 students = [{'name': '이성주', 'scores': [75, 85, 92]}, {'name': '서희정', 'scores': [85, 95, 99]} ] # compute each student's average score for s in students: avg = float(sum(s['scores']))/len(s['scores']) print(s['name']+ " average: " + str(avg)) # average score for each subject subjects = [[],[],[]] for s in students: subjects[0].append(s['scores'][0]) subjects[1].append(s['scores'][1]) subjects[2].append(s['scores'][2]) print(subjects) for sub in subjects: print(sum(sub)/len(sub)) Explanation: Challenge The multiples of 3 or 5 up to 10 are 3, 5, 6, 9. The sum of these numbers is 23. Find the sum of the multiples of 3 or 5 up to 1000. Challenge Each student record consists of a name and scores in three subjects. Store the records of three students. Compute each student's average score. Compute the average score for each subject. End of explanation # first... let's start by working out how to search for palindromic numbers pn_list = [] for n1 in range(100,1000): for n2 in range(100, 1000): n = n1*n2 # Q: is it a palindromic number? # A: 유한별 n_str = str(n) if n_str[::-1] == n_str: # add it if it is a palindrome pn_list.append(n) # maximum of the collected palindromic numbers print(max(pn_list)) Explanation: Challenge The prime factorization of 13195 is 5, 7, 13, 29. What is the largest prime factor of 600851475143? Challenge Numbers such as 1001, 2002, and 3003 are called palindromic numbers. The largest palindrome made from the product of two two-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two three-digit numbers. End of explanation
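Two of the challenges above are not worked out in the cells; a minimal sketch of one possible solution for each follows. The variable names are illustrative, and the range follows the worked example (which excludes the upper bound).

# Challenge: sum of the multiples of 3 or 5 (for 10 the example gives 3 + 5 + 6 + 9 = 23).
multiples_sum = sum(n for n in range(1, 1000) if n % 3 == 0 or n % 5 == 0)
print(multiples_sum)

# Challenge: largest prime factor of 600851475143 (for 13195 the factors are 5, 7, 13, 29).
n = 600851475143
factor = 2
while factor * factor <= n:
    if n % factor == 0:
        n //= factor  # strip the factor; what remains at the end is the largest prime factor
    else:
        factor += 1
print(n)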
9,202
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial 2 - Convolutional neural networks in TF Prerequisites for the tutorial Step1: In the example above, we chose to download only the images of people for whom we have (at least) 70 examples, which are just 6 Step2: For simplicity, we consider here a binary classification problem to distinguish between the photos of Colin Powell (236 photos) and the photos of Tony Blair (144 photos) Step3: We adjust the target vector so that it contains only 0 or 1 Step4: For TF's convenience, we add a dimension to the image matrix, representing the single channel Step5: By adding a <code>color=True</code> parameter to the <code>fetch_lfw_people</code> function we could download the three RGB channels instead of a single grayscale channel. Of all the images, we set aside 20% to test our neural network Step6: Let's look at an example training image Step7: As you can see, each image is 50x37, in black and white Step8: Finally, we represent the pixels in [0,1] instead of [0, 255] Step9: Building our convolutional network As in the previous tutorial, we start by defining our input and output placeholders Step10: Instead of defining all the variables and operations of the network by hand, we start using the layers already defined in the <code>layers</code> module Step11: Let's check the size of the output tensor Step12: As always, the first dimension represents a mini-batch of activations. The rest of the tensor therefore consists of 64 filters, each 46x33. Note the slight discrepancy with the size of the input images Step13: We continue by adding the pooling layer, this time taking the padding into account Step14: The third parameter represents the stride, i.e. every how many pixels a result is computed. With this configuration, we have effectively halved the size of the tensors Step15: We continue with all the remaining layers Step16: Note how, in order to apply the fully connected layer, the input tensor has to be reshaped so that each input is 1D. We conclude with the last layer, with a sigmoid activation Step17: Training the convolutional network The training phase is essentially the same as in the previous tutorial. In particular, we define a function to extract random mini-batches from our training images Step18: We define a cost function Step19: We initialize an optimization algorithm Step20: To evaluate the accuracy, we define a helper function Step21: We create a session and initialize all the variables Step22: We define a mini-batch size and a number of epochs Step23: Since training could be slower than in the previous tutorial, we install a module to display a simple progress bar Step24: We display the average accuracy on the test set
Python Code: from sklearn.datasets import fetch_lfw_people lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4) Explanation: Tutorial 2 - Convolutional neural networks in TF Prerequisites for the tutorial: * T1 - Feedforward neural networks Contents of the tutorial: 1. Basic concepts of convolutional neural networks. 2. Implementation of a convolutional neural network in TF. Introduction to convolutional networks The enormous flexibility of neural networks is at the same time their strength and their drawback. Consider a problem where the input to the neural network is a 64x64 pixel RGB image, for example a small camera that has to learn to recognize whether or not a certain person is present. Even though this is a very small image, we would already have 64x64x3 = 12288 inputs to the neural network, considering that each pixel is described by three different colours. With only 10 neurons in the first hidden layer, we would have more than 120 thousand free parameters to adapt in order to recognize some object in the image. Convolutional neural networks (CNNs) are a way of solving this problem when, as in the case of images, the input to the neural network exhibits a form of locality of information. In this case, each neuron in the hidden layer is replaced by a filter of fixed size (e.g., 5x5x3), which is slid over the whole image to obtain its output: http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/ A convolutional layer is obtained by sliding several filters of this kind over the image in parallel, followed by a nonlinearity as in a traditional neural network. Considering again the image above, with 10 filters of this kind we would obtain 5x5x3x10 = 750 adaptable parameters, a reduction of more than 100 times. This architecture is enormously popular for images, and has recently found applications for audio and for language processing problems as well; both are situations where the convolutions, unlike in the figure above, are one-dimensional. In practice, a CNN is built by interleaving these convolutional layers with other layers, as in the figure below: http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/ Pooling layers perform a subsampling of the output of the previous layer, for example by taking the maximum value over 2x2 or 3x3 regions. In the final part of the network, fully connected layers (as in standard neural networks) are typically added before proceeding to classification. These are only the basic elements of convolutional networks; we will see some more advanced concepts, needed to build networks with many hidden layers, in a future tutorial. It should be noted right away that, in the case of CNNs, the activation of a single layer is now described by a four-dimensional tensor, since the output of each filter is itself two-dimensional. Image classification example As an example of an image classification problem, we use the Labeled Faces in the Wild dataset, an annotated collection of images of numerous celebrities. <code>scikit-learn</code> provides a function to download the dataset, more than 200 MB of data in this version: End of explanation lfw_people.target_names Explanation: In the example above, we chose to download only the images of people for whom we have (at least) 70 examples, which are just 6: End of explanation idx = (lfw_people.target == 1) | (lfw_people.target == 6) X = lfw_people.images[idx] y = lfw_people.target[idx] Explanation: For simplicity, we consider here a binary classification problem to distinguish between the photos of Colin Powell (236 photos) and the photos of Tony Blair (144 photos): End of explanation y[y == 6] = 0 y = y.reshape(-1, 1) Explanation: We adjust the target vector so that it contains only 0 or 1: End of explanation X = X.reshape(X.shape[0], X.shape[1], X.shape[2], 1) Explanation: For TF's convenience, we add a dimension to the image matrix, representing the single channel: End of explanation from sklearn import model_selection (X_trn, X_tst, y_trn, y_tst) = model_selection.train_test_split(X, y, test_size=0.20) Explanation: By adding a <code>color=True</code> parameter to the <code>fetch_lfw_people</code> function we could download the three RGB channels instead of a single grayscale channel. Of all the images, we set aside 20% to test our neural network: End of explanation %matplotlib inline import matplotlib.pyplot as plt plt.imshow(X[1,:,:, 0], cmap='gray') Explanation: Let's look at an example training image: End of explanation X[0].shape Explanation: As you can see, each image is 50x37, in black and white: End of explanation X /= 255 Explanation: Finally, we represent the pixels in [0,1] instead of [0, 255]: End of explanation import tensorflow as tf X_tf = tf.placeholder(tf.float32, [None, 50, 37, 1], name='input') y_tf = tf.placeholder(tf.float32, [None, 1], name='target') Explanation: Building our convolutional network As in the previous tutorial, we start by defining our input and output placeholders: End of explanation conv1 = tf.layers.conv2d(X_tf, 64, (5,5), activation=tf.nn.relu, name='conv1') Explanation: Instead of defining all the variables and operations of the network by hand, we start using the layers already defined in the <code>layers</code> module: https://www.tensorflow.org/api_docs/python/tf/layers/. It should be stressed right away that some of these layers internally rely on simpler functions defined in the <code>nn</code> module, such as the 2D convolution layer (<code>conv2d</code>): https://www.tensorflow.org/api_docs/python/tf/layers/conv2d<br /> https://www.tensorflow.org/api_docs/python/tf/nn/conv2d This duplication of classes should not be confusing, since the two can be used interchangeably; the functions in <code>nn</code> generally have fewer parameters and less flexibility, and are preferable when that flexibility is not required. For consistency, in this tutorial we will only use the functions defined in the <code>layers</code> module. Our network will be composed as follows: A convolutional layer with 64 5x5 filters. A 2x2 pooling layer. A second convolutional layer with 32 5x5 filters. A second 2x2 pooling layer. A fully connected layer with 20 neurons. An output neuron with the desired class. This is obviously not the only possible choice, nor the best one, which would require a more thorough optimization of the whole network design (see for example this article). However, this kind of alternation of convolutional and pooling layers of progressively smaller size, followed by densely connected layers, is typical of most applications with networks that are not very large. We will consider the design of more complex networks and their optimization in later tutorials; it should be said right away, though, that this construction phase is often more a blend of experience and 'art' than a sequence of precise rules. We start by defining the first convolutional layer: End of explanation conv1.shape Explanation: Let's check the size of the output tensor: End of explanation # conv1 = tf.layers.conv2d(X_tf, 64, (5,5), padding='same', name='conv1') Explanation: As always, the first dimension represents a mini-batch of activations. The rest of the tensor therefore consists of 64 filters, each 46x33. Note the slight discrepancy with the size of the input images: since the convolution is not defined on the borders of the image, we lose 2 pixels on each border in every dimension. We can avoid this problem by adding zero padding on the borders of the original image: End of explanation pool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same', name='pool1') Explanation: We continue by adding the pooling layer, this time taking the padding into account: End of explanation pool1.shape Explanation: The third parameter represents the stride, i.e. every how many pixels a result is computed. With this configuration, we have effectively halved the size of the tensors: End of explanation conv2 = tf.layers.conv2d(pool1, 32, (5,5), activation=tf.nn.relu, padding='same', name='conv2') pool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same', name='pool2') dense = tf.layers.dense(tf.reshape(pool2, [-1, 12*9*32]), 20, activation=tf.nn.relu, name='dense') Explanation: We continue with all the remaining layers: End of explanation output = tf.layers.dense(dense, 1, activation=None, name='output') output_sigmoid = tf.nn.sigmoid(output) Explanation: Note how, in order to apply the fully connected layer, the input tensor has to be reshaped so that each input is 1D. We conclude with the last layer, with a sigmoid activation: End of explanation def iterate_minibatches(X, y, batchsize): indices = np.arange(len(X)) np.random.shuffle(indices) for start_idx in range(0, len(X) - batchsize + 1, batchsize): excerpt = indices[start_idx:start_idx + batchsize] yield X[excerpt], y[excerpt] Explanation: Training the convolutional network The training phase is essentially the same as in the previous tutorial. In particular, we define a function to extract random mini-batches from our training images: End of explanation loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_tf, logits=output)) Explanation: We define a cost function: End of explanation train_step = tf.train.AdagradOptimizer(learning_rate=0.01).minimize(loss) Explanation: We initialize an optimization algorithm: End of explanation accuracy = tf.reduce_mean(tf.cast(tf.equal(y_tf, tf.round(output_sigmoid)), tf.float32), name='accuracy') Explanation: To evaluate the accuracy, we define a helper function: End of explanation sess = tf.InteractiveSession() tf.global_variables_initializer().run() Explanation: We create a session and initialize all the variables: End of explanation epochs = 25 batch_size = 10 Explanation: We define a mini-batch size and a number of epochs: End of explanation import numpy as np, tqdm accuracy_history = np.zeros(epochs) for i in tqdm.tqdm_notebook(range(epochs)): accuracy_history[i] = sess.run(accuracy, feed_dict={X_tf: X_tst, y_tf: y_tst}) print('Current loss is: ', sess.run(loss, feed_dict={X_tf: X_trn, y_tf: y_trn})) for xs, ys in iterate_minibatches(X_trn, y_trn, batch_size): sess.run(train_step, feed_dict={X_tf: xs, y_tf: ys}) Explanation: Since training could be slower than in the previous tutorial, we install a module to display a simple progress bar: <code>$ pip install tqdm</code> We minimize the cost function, keeping track of the accuracy on the test set at every iteration: End of explanation plt.figure() plt.semilogy(accuracy_history) plt.xlabel('Epoch') plt.ylabel('Test error') plt.grid() Explanation: We display the average accuracy on the test set: End of explanation
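A possible follow-up sketch for inspecting a single prediction once training has finished; it reuses the tensors defined above, and the class names rely on the earlier remapping (label 1 = Colin Powell, label 0 = Tony Blair):

# Run the sigmoid output for one test image and compare with its true label.
probs = sess.run(output_sigmoid, feed_dict={X_tf: X_tst[:1]})
predicted = 'Colin Powell' if probs[0, 0] > 0.5 else 'Tony Blair'
actual = 'Colin Powell' if y_tst[0, 0] == 1 else 'Tony Blair'
print('Predicted:', predicted, '- actual:', actual)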
9,203
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-1', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: MOHC Source ID: SANDBOX-1 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:15 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
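As a worked example of resolving one of the TODO markers above, a completed cell might look like the sketch below. It reuses only the DOC.set_id / DOC.set_value pattern shown throughout this notebook; the value "irradiance" is a placeholder taken from the valid choices listed for property 29.1, not a claim about any particular model, and the free-text string is likewise illustrative only.
# Hypothetical filled-in cell (sketch only - substitute the values that apply
# to the model being documented; "irradiance" is just one of the valid choices
# listed for property 29.1).
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
DOC.set_value("irradiance")
# Optional STRING properties are set the same way, with an ordinary string.
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
DOC.set_value("Placeholder text describing the solar forcing implementation.")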
9,204
Given the following text description, write Python code to implement the functionality described below step by step Description: Executed Step1: Load software and filenames definitions Step2: Data folder Step3: List of data files Step4: Data load Initial loading of the data Step5: Laser alternation selection At this point we have only the timestamps and the detector numbers Step6: We need to define some parameters Step7: We should check if everithing is OK with an alternation histogram Step8: If the plot looks good we can apply the parameters with Step9: Measurements infos All the measurement data is in the d variable. We can print it Step10: Or check the measurements duration Step11: Compute background Compute the background using automatic threshold Step12: Burst search and selection Step14: Donor Leakage fit Half-Sample Mode Fit peak usng the mode computed with the half-sample algorithm (Bickel 2005). Step15: Gaussian Fit Fit the histogram with a gaussian Step16: KDE maximum Step17: Leakage summary Step18: Burst size distribution Step19: Fret fit Max position of the Kernel Density Estimation (KDE) Step20: Weighted mean of $E$ of each burst Step21: Gaussian fit (no weights) Step22: Gaussian fit (using burst size as weights) Step23: Stoichiometry fit Max position of the Kernel Density Estimation (KDE) Step24: The Maximum likelihood fit for a Gaussian population is the mean Step25: Computing the weighted mean and weighted standard deviation we get Step26: Save data to file Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. Step28: This is just a trick to format the different variables
Python Code: ph_sel_name = "DexDem" data_id = "17d" # ph_sel_name = "all-ph" # data_id = "7d" Explanation: Executed: Mon Mar 27 11:35:09 2017 Duration: 11 seconds. usALEX-5samples - Template This notebook is executed through 8-spots paper analysis. For a direct execution, uncomment the cell below. End of explanation from fretbursts import * init_notebook() from IPython.display import display Explanation: Load software and filenames definitions End of explanation data_dir = './data/singlespot/' import os data_dir = os.path.abspath(data_dir) + '/' assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir Explanation: Data folder: End of explanation from glob import glob file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f) ## Selection for POLIMI 2012-11-26 datatset labels = ['17d', '27d', '7d', '12d', '22d'] files_dict = {lab: fname for lab, fname in zip(labels, file_list)} files_dict ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'), 'DexDem': Ph_sel(Dex='Dem')} ph_sel = ph_sel_map[ph_sel_name] data_id, ph_sel_name Explanation: List of data files: End of explanation d = loader.photon_hdf5(filename=files_dict[data_id]) Explanation: Data load Initial loading of the data: End of explanation d.ph_times_t, d.det_t Explanation: Laser alternation selection At this point we have only the timestamps and the detector numbers: End of explanation d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0) Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations: End of explanation plot_alternation_hist(d) Explanation: We should check if everithing is OK with an alternation histogram: End of explanation loader.alex_apply_period(d) Explanation: If the plot looks good we can apply the parameters with: End of explanation d Explanation: Measurements infos All the measurement data is in the d variable. 
We can print it: End of explanation d.time_max Explanation: Or check the measurements duration: End of explanation d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7) dplot(d, timetrace_bg) d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa Explanation: Compute background Compute the background using automatic threshold: End of explanation bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel) d.burst_search(**bs_kws) th1 = 30 ds = d.select_bursts(select_bursts.size, th1=30) bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True) .round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4})) bursts.head() burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv' .format(sample=data_id, th=th1, **bs_kws)) burst_fname bursts.to_csv(burst_fname) assert d.dir_ex == 0 assert d.leakage == 0 print(d.ph_sel) dplot(d, hist_fret); # if data_id in ['7d', '27d']: # ds = d.select_bursts(select_bursts.size, th1=20) # else: # ds = d.select_bursts(select_bursts.size, th1=30) ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30) n_bursts_all = ds.num_bursts[0] def select_and_plot_ES(fret_sel, do_sel): ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel) ds_do = ds.select_bursts(select_bursts.ES, **do_sel) bpl.plot_ES_selection(ax, **fret_sel) bpl.plot_ES_selection(ax, **do_sel) return ds_fret, ds_do ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1) if data_id == '7d': fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False) do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '12d': fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '17d': fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '22d': fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '27d': fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) n_bursts_do = ds_do.num_bursts[0] n_bursts_fret = ds_fret.num_bursts[0] n_bursts_do, n_bursts_fret d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret) print ('D-only fraction:', d_only_frac) dplot(ds_fret, hist2d_alex, scatter_alpha=0.1); dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False); Explanation: Burst search and selection End of explanation def hsm_mode(s): Half-sample mode (HSM) estimator of `s`. `s` is a sample from a continuous distribution with a single peak. Reference: Bickel, Fruehwirth (2005). arXiv:math/0505419 s = memoryview(np.sort(s)) i1 = 0 i2 = len(s) while i2 - i1 > 3: n = (i2 - i1) // 2 w = [s[n-1+i+i1] - s[i+i1] for i in range(n)] i1 = w.index(min(w)) + i1 i2 = i1 + n if i2 - i1 == 3: if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]: i2 -= 1 elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]: i1 += 1 else: i1 = i2 = i1 + 1 return 0.5*(s[i1] + s[i2]) E_pr_do_hsm = hsm_mode(ds_do.E[0]) print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100)) Explanation: Donor Leakage fit Half-Sample Mode Fit peak usng the mode computed with the half-sample algorithm (Bickel 2005). 
End of explanation E_fitter = bext.bursts_fitter(ds_do, weights=None) E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03)) E_fitter.fit_histogram(model=mfit.factory_gaussian()) E_fitter.params res = E_fitter.fit_res[0] res.params.pretty_print() E_pr_do_gauss = res.best_values['center'] E_pr_do_gauss Explanation: Gaussian Fit Fit the histogram with a gaussian: End of explanation bandwidth = 0.03 E_range_do = (-0.1, 0.15) E_ax = np.r_[-0.2:0.401:0.0002] E_fitter.calc_kde(bandwidth=bandwidth) E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1]) E_pr_do_kde = E_fitter.kde_max_pos[0] E_pr_do_kde Explanation: KDE maximum End of explanation mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False) plt.axvline(E_pr_do_hsm, color='m', label='HSM') plt.axvline(E_pr_do_gauss, color='k', label='Gauss') plt.axvline(E_pr_do_kde, color='r', label='KDE') plt.xlim(0, 0.3) plt.legend() print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' % (E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100)) Explanation: Leakage summary End of explanation nt_th1 = 50 dplot(ds_fret, hist_size, which='all', add_naa=False) xlim(-0, 250) plt.axvline(nt_th1) Th_nt = np.arange(35, 120) nt_th = np.zeros(Th_nt.size) for i, th in enumerate(Th_nt): ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th) nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th plt.figure() plot(Th_nt, nt_th) plt.axvline(nt_th1) nt_mean = nt_th[np.where(Th_nt == nt_th1)][0] nt_mean Explanation: Burst size distribution End of explanation E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size') E_fitter = ds_fret.E_fitter E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5)) E_fitter.fit_res[0].params.pretty_print() fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(E_fitter, ax=ax[0]) mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100)) display(E_fitter.params*100) Explanation: Fret fit Max position of the Kernel Density Estimation (KDE): End of explanation ds_fret.fit_E_m(weights='size') Explanation: Weighted mean of $E$ of each burst: End of explanation ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None) Explanation: Gaussian fit (no weights): End of explanation ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size') E_kde_w = E_fitter.kde_max_pos[0] E_gauss_w = E_fitter.params.loc[0, 'center'] E_gauss_w_sig = E_fitter.params.loc[0, 'sigma'] E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0])) E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr Explanation: Gaussian fit (using burst size as weights): End of explanation S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True) S_fitter = ds_fret.S_fitter S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(S_fitter, ax=ax[0]) mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100)) display(S_fitter.params*100) S_kde = S_fitter.kde_max_pos[0] S_gauss = S_fitter.params.loc[0, 'center'] S_gauss_sig = S_fitter.params.loc[0, 'sigma'] S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0])) S_gauss_fiterr 
= S_fitter.fit_res[0].params['center'].stderr S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr Explanation: Stoichiometry fit Max position of the Kernel Density Estimation (KDE): End of explanation S = ds_fret.S[0] S_ml_fit = (S.mean(), S.std()) S_ml_fit Explanation: The Maximum likelihood fit for a Gaussian population is the mean: End of explanation weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.) S_mean = np.dot(weights, S)/weights.sum() S_std_dev = np.sqrt( np.dot(weights, (S - S_mean)**2)/weights.sum()) S_wmean_fit = [S_mean, S_std_dev] S_wmean_fit Explanation: Computing the weighted mean and weighted standard deviation we get: End of explanation sample = data_id Explanation: Save data to file End of explanation variables = ('sample n_bursts_all n_bursts_do n_bursts_fret ' 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr ' 'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr ' 'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n') Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. End of explanation variables_csv = variables.replace(' ', ',') fmt_float = '{%s:.6f}' fmt_int = '{%s:d}' fmt_str = '{%s}' fmt_dict = {**{'sample': fmt_str}, **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}} var_dict = {name: eval(name) for name in variables.split()} var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n' data_str = var_fmt.format(**var_dict) print(variables_csv) print(data_str) # NOTE: The file name should be the notebook name but with .csv extension with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f: f.seek(0, 2) if f.tell() == 0: f.write(variables_csv) f.write(data_str) Explanation: This is just a trick to format the different variables: End of explanation
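To make the formatting trick easier to follow in isolation, here is a minimal self-contained sketch of the same idea using made-up toy variables (toy_sample, toy_n_bursts and toy_E are hypothetical names, not part of the analysis above): a per-variable format spec is looked up in a dict (strings get plain substitution, integer counts get :d, everything else falls back to :.6f), the specs are joined into one template, and the template is filled with .format().
# Minimal sketch of the variable-formatting trick, using toy values only.
toy_sample = 'demo'        # stands in for the string variable
toy_n_bursts = 1234        # stands in for an integer count
toy_E = 0.4567             # stands in for a float estimate
toy_vars = 'toy_sample toy_n_bursts toy_E'
toy_fmt_dict = {'toy_sample': '{%s}', 'toy_n_bursts': '{%s:d}'}  # others default to float
toy_fmt = ', '.join(toy_fmt_dict.get(name, '{%s:.6f}') % name for name in toy_vars.split()) + '\n'
print(toy_fmt)    # -> {toy_sample}, {toy_n_bursts:d}, {toy_E:.6f}
print(toy_fmt.format(toy_sample=toy_sample, toy_n_bursts=toy_n_bursts, toy_E=toy_E))
# -> demo, 1234, 0.456700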
9,205
Given the following text description, write Python code to implement the functionality described below step by step Description: Matplotlib Exercise 1 Imports Step1: Line plot of sunspot data Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook. Step3: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts. Step4: Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. Step5: Describe the choices you have made in building this visualization and how they make it effective. YOUR ANSWER HERE Chose to not have box or gridlines in order to maximize data to ink ratio, this looks cleaner and makes the data eaiser to read. Labled the axes this way because gave acurate yet simple discription of the data plotted. I chose the scale in order to make max slope closer to 1. The y-axis and x-axis ticks were chosen to be able to show the data scope without completely over crowding the axes. Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np Explanation: Matplotlib Exercise 1 Imports End of explanation import os assert os.path.isfile('yearssn.dat') Explanation: Line plot of sunspot data Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook. End of explanation # YOUR CODE HERE #raise NotImplementedError() data = np.loadtxt('yearssn.dat') year = np.ones(len(data)) ssc = np.ones(len(data)) #makes two new arrays, one for each array in data for i in range(0, len(data)): ssc[i] = data[i][1] year[i] = data[i][0] assert len(year)==315 assert year.dtype==np.dtype(float) assert len(ssc)==315 assert ssc.dtype==np.dtype(float) #origonally used this to find max slope so I could scale for slope of 1 #turned out I didnt need it. "def average(data,num): avex=[] avey=[] x=data[0] y=data[1] for i in range(len(x)-(num)): avex.append(sum(x[i:i+num])/num) avey.append(sum(y[i:i+num])/num) return avex,avey x_ave,y_ave = average((year,ssc), 4) def max_slope(x,y): s=[] for i in range(0, len(x)-1): s.append((y[i+1]-y[i])/(x[i+1]-x[i])) return max(s) max_slope(x_ave, y_ave) Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts. End of explanation # YOUR CODE HERE #raise NotImplementedError() f= plt.figure(figsize=(15,1)) plt.plot(year, ssc) plt.xlabel('Year') plt.ylabel('Number of Sunspots') plt.title('Sunspot Count') plt.xlim(1700,2015) plt.ylim(0,191) plt.box(False) a=range(192) b= range(1700,2016) plt.yticks(a[::50]) plt.show() assert True # leave for grading Explanation: Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation # YOUR CODE HERE #raise NotImplementedError() a= range(192) f = plt.figure(figsize=(15,6)) plt.subplot(4,1,1) plt.yticks(a[::50]) plt.title('Sunspot Count') plt.plot(year[0:100],ssc[0:100]) plt.box(False) plt.subplot(4,1,2) plt.plot(year[100:200], ssc[100:200]) plt.yticks(a[::50]) plt.box(False) plt.subplot(4,1,3) plt.plot(year[200:300],ssc[200:300]) plt.yticks(a[::50]) plt.ylabel('Number of Sunspots') plt.box(False) plt.subplot(4,1,4) plt.plot(year[300:400], ssc[300:400]) plt.yticks(a[::50]) plt.xlim(2000,2100) plt.xlabel('Year') plt.box(False) plt.tight_layout() assert True # leave for grading Explanation: Describe the choices you have made in building this visualization and how they make it effective. YOUR ANSWER HERE Chose to not have box or gridlines in order to maximize data to ink ratio, this looks cleaner and makes the data eaiser to read. Labled the axes this way because gave acurate yet simple discription of the data plotted. I chose the scale in order to make max slope closer to 1. The y-axis and x-axis ticks were chosen to be able to show the data scope without completely over crowding the axes. Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above: Customize your plot to follow Tufte's principles of visualizations. 
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation
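As a supplementary sketch (not part of the graded answer above), the same four-century figure can be produced with a loop over plt.subplots instead of repeating the plt.subplot block four times; it assumes the year and ssc arrays defined earlier and keeps the same no-box, sparse-tick styling.
# Alternative layout for the century panels using a loop; assumes year and ssc
# from the cells above (315 yearly values starting at 1700).
fig, axes = plt.subplots(4, 1, figsize=(15, 6), sharey=True)
for k, ax in enumerate(axes):
    lo, hi = 100 * k, 100 * (k + 1)   # 1700s, 1800s, 1900s, 2000s slices
    ax.plot(year[lo:hi], ssc[lo:hi])
    ax.set_frame_on(False)            # same idea as plt.box(False), per panel
    ax.set_yticks(range(0, 192, 50))
axes[0].set_title('Sunspot Count')
axes[2].set_ylabel('Number of Sunspots')
axes[-1].set_xlabel('Year')
plt.tight_layout()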
9,206
Given the following text description, write Python code to implement the functionality described below step by step Description: <div align="right">Python 2.7</div> Indexing and Related Experiments in Python 2.7 Though this content is in Python 2.7, most if not all of it should work the same in Python 3.x. TOC Indexing Experiments - Explores different complex structures and how to index into them Mutation sidebar - looks at mutation using our crazyList example Finding the Index of a Known Value within Complex Data Structures - explores .index(), np.where(), and Related concerns Indexing Experiments in Python We start with a simple nested list showing how to get at an element within it Step1: Now we build something more complicated to show where indexing can get tricky ... Step2: Notice which numbers moved where. This would seem to indicate that in shape(a,b,c) Step3: Just analyzing how the numbers are arranged, we see that in shape(a,b,c,d), it just added the new extra dimensional layer to the front of the list so that now Step4: Now let's access other stuff in the list ... Step5: In the tests that follow ... anything that does not work is wrapped in exception handling (that displays the error) so this notebook can be run from start to finish ... Note that it is not good practice to use a catch all for all errors. In real coding errors should be handled individually by type. How do we access the first index (element 2) of the first array object in our complex list (which resides at index 0)? Step6: Sub element 4 is a simple list nested within caryList Step7: So what about the array? The array was originally built in "simp1" and then added to crazyList. Its source looks like this Step8: Note the [] versus the [[]] ... our "simple arrays" were copied from an example, but are actually nested objects of 1 list of 5 elements forming the first object inside the array. A true simple array would like this Step9: Let's add the true simple array to our crazy object and then create working examples of accessing everything ... Step10: Looking at just that first element again Step11: remember that this object if it were not in a list would be accessed like so Step12: ... so inside crazyList ? The answer is that the list is one level deep and the elements are yet another level in Step13: <a id="mutation" name="mutation"></a> Sidebar Step14: Note how the second is really a reference to the first so changing one changes the other Step15: For a simple list ... we can fix that by simply using list() during our attempt to create the copy Step16: Mutation is avoided. Now we can change our two objects independantly. However, with complex objects like crazyList, this does not work. The following will illustrate the problem and later, options to get around it are presented. Step17: Now we make some changes Step18: Now we'll look at just the last object in both "crazyLists" showing what changed Step19: The "13" replaced the value at this location in both crazyList and crazyList2. We are not dealing with true copies but rather references to the same data as further illustrated here Step20: So ... how to make a copy that does not mutate? (we can change one without changing the other)?<br/> Let's look at some things that don't work first ... Step21: Python is hard to fool ... At first, I considered that we might now have two lists, but w/ just element 7 passed in by reference and so it mutates. 
But this shows our whole lists are still mutating Step22: deepcopy() comes from the copy library and the commands are documented at Python.org. For this situation, this solution seems to work for when mutation is undesirable Step23: Should even deepcopy() not work, this topic online may prove helpful in these situations
Python Code: stupidList = [[1,2,3],[4,5,6]] print(stupidList) stupidList[0][1] Explanation: <div align="right">Python 2.7</div> Indexing and Related Experiments in Python 2.7 Though this content is in Python 2.7, most if not all of it should work the same in Python 3.x. TOC Indexing Experiments - Explores different complex structures and how to index into them Mutation sidebar - looks at mutation using our crazyList example Finding the Index of a Known Value within Complex Data Structures - explores .index(), np.where(), and Related concerns Indexing Experiments in Python We start with a simple nested list showing how to get at an element within it: End of explanation import numpy as np import pandas as pd m3d=np.random.rand(3,4,5) m3d # how does Pandas arrange the data? n3d=m3d.reshape(4,3,5) n3d Explanation: Now we build something more complicated to show where indexing can get tricky ... End of explanation o3d=np.random.rand(2,3,4,5) o3d Explanation: Notice which numbers moved where. This would seem to indicate that in shape(a,b,c): - a is like the object's depth (how many groupings of rows/columns are there?) - b is like the object's rows per grouping (how many rows in each subgroup) - c is like the object's columns What if the object had 4 dimensions? End of explanation # some simple arrays: simp1=np.array([[1,2,3,4,5]]) simp2=np.array([[10,9,8,7,6]]) simp3=[11,12,13] # a dictionary dfrm1 = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002], 'population': [1.5, 1.7, 3.6, 2.4, 2.9]} # convert dictionary to DataFrame dfrm1 = pd.DataFrame(dfrm1) dfrm1 # pandas indexing works a little differently: # * column headers are keys # * as shown here, can ask for columns, rows, and a filter based on values in the columns # in any order and the indexing will still work print(dfrm1["population"][dfrm1["population"] > 1.5][2:4]) # all of these return values from "population" column only print("---") # where "population" > 1.5 print(dfrm1["population"][2:4][dfrm1["population"] > 1.5]) # and row index is between 2 and 4 print("---") print(dfrm1[dfrm1["population"] > 1.5]["population"][2:4]) print("---") print(dfrm1[dfrm1["population"] > 1.5][2:4]["population"]) print("---") print(dfrm1[2:4]["population"][dfrm1["population"] > 1.5]) print("---") print(dfrm1[2:4][dfrm1["population"] > 1.5]["population"]) # this last one triggers a warning # breaking the above apart: print(dfrm1[dfrm1["population"] > 1.5]) # all rows and columns filtered by "population" values > 1.5 print("---") print(dfrm1["population"]) # return whole "population" column print("---") print(dfrm1[2:4]) # return whole rows 2 to 4 crazyList = [simp1, m3d, simp2, n3d, simp3, dfrm1, o3d] # Accessing the dataframe inside the list now that it is a sub element: crazyList[5]["population"][crazyList[5]["population"] > 1.5][2:4] Explanation: Just analyzing how the numbers are arranged, we see that in shape(a,b,c,d), it just added the new extra dimensional layer to the front of the list so that now: - a = larger hyper grouping (2 of them) - b = first subgroup within (3 of them) - c = rows within these groupings (4 of them) - d = columns within these groupings (5 of them) It appears that rows always come before columns, and then it looks like groupings of rows and columns and groupings or groupings, etc. . . are added to the front of the index chain. 
Building something complex just to drill in more on how to access sub-elements: End of explanation crazyList[1] # this is the second object of the list (Python like many languages starts indicies at 0) # this is the full output of m3d crazyList[0] # after the above demo, no surprises here ... simp1 was the first object we added to the list Explanation: Now let's access other stuff in the list ... End of explanation try: # not this way ... crazyList[0][1] except Exception as ex: print("%s%s %s" %(type(ex), ":", ex)) # let's look at what we built: all the objects are here but are no longer named so we need to get indices right crazyList # note that both of these get the same data, but also note the difference in the format: "[[]]" and array([])". # look at the source and you will see we are drilling in at different levels of "[]" # there can be situations in real coding where extra layers are created by accident so this example is good to know print(crazyList[0]) crazyList[0][0] Explanation: In the tests that follow ... anything that does not work is wrapped in exception handling (that displays the error) so this notebook can be run from start to finish ... Note that it is not good practice to use a catch all for all errors. In real coding errors should be handled individually by type. How do we access the first index (element 2) of the first array object in our complex list (which resides at index 0)? End of explanation print(crazyList[4]) crazyList[4][1] # get 2nd element in the list within a list at position 4 (object 4 in the list) Explanation: Sub element 4 is a simple list nested within caryList: crazyList [ ... [content at index position 4] ...] End of explanation print(type(simp1)) print(simp1.shape) print(simp1) print(simp1[0]) # note that the first two give us the same thing (whole array) simp1[0][1] Explanation: So what about the array? The array was originally built in "simp1" and then added to crazyList. Its source looks like this: End of explanation trueSimp1=np.array([10,9,8,7,6]) print(trueSimp1.shape) # note: output shows that Python thinks this is 5 rows, 1 column trueSimp1 Explanation: Note the [] versus the [[]] ... our "simple arrays" were copied from an example, but are actually nested objects of 1 list of 5 elements forming the first object inside the array. A true simple array would like this: End of explanation crazyList.append(trueSimp1) # append mutates so this changes the original list crazyList # Warning! if you re-run this cell, you will keep adding more copies of the last object # to the end of this object. To be consistent with content in this NB # clear and re-run the whole notebook should that happen # The elements at either end of crazyList: print(crazyList[0]) print(crazyList[-1]) # ask for last item by counting backwards from the end # get a specific value by index from within the subelements at either end: print(crazyList[0][0][2]) # extra zero for the extra [] .. structurally this is really [0 [0 ], [1] ] but 1 does not exist print(crazyList[-1][2]) Explanation: Let's add the true simple array to our crazy object and then create working examples of accessing everything ... End of explanation crazyList[0] # first array to change Explanation: Looking at just that first element again: End of explanation simp1[0][1] # second element inside it Explanation: remember that this object if it were not in a list would be accessed like so: End of explanation crazyList[0] crazyList[0][0][1] Explanation: ... so inside crazyList ? 
The answer is that the list is one level deep and the elements are yet another level in: End of explanation aList = [1,2,3] bList = aList print(aList) print(bList) Explanation: <a id="mutation" name="mutation"></a> Sidebar: Mutation and Related Concerns Try this test and you will see it does not work: crazyList2 = crazyList.append(trueSimp1) What it did: crazyList got an element appended to the end and crazyList2 came out the other side empty. This is because append() returns None and operates on the original. The copy then gets nothing and the original gets an element added to it. To set up crazyList2 to append to only it, we might be tempted to try something like what is shown below, but if we do, note how it mutates: End of explanation aList[0] = 0 bList[1] = 1 bList.append(4) print(aList) print(bList) Explanation: Note how the second is really a reference to the first so changing one changes the other: End of explanation bList = list(aList) bList[0] = 999 aList[1] = 998 print(aList) print(bList) bList.append(19) print(aList) print(bList) Explanation: For a simple list ... we can fix that by simply using list() during our attempt to create the copy: End of explanation crazyList2 = list(crazyList) crazyList2 Explanation: Mutation is avoided. Now we can change our two objects independantly. However, with complex objects like crazyList, this does not work. The following will illustrate the problem and later, options to get around it are presented. End of explanation len(crazyList2)-1 # this is the position of the object we want to change crazyList2[7][1] = 13 # this will change element 2 of last object in crazyList2 Explanation: Now we make some changes: End of explanation print(crazyList[7]) print(crazyList2[7]) Explanation: Now we'll look at just the last object in both "crazyLists" showing what changed: End of explanation crazyList[7][1] = 9 # change on of them again and both change print(crazyList[7]) print(crazyList2[7]) Explanation: The "13" replaced the value at this location in both crazyList and crazyList2. We are not dealing with true copies but rather references to the same data as further illustrated here: End of explanation crazyList3 = crazyList[:] # according to online topics ... this was supposed to work for the reason outlined below # it probably works with some complex objects but does not work with this one # some topics online indicate this should have worked because: # * the problem is avoided by "slicing" the original so Python behaves as if the thing you are copying is different # * if you used crazyList[2:3] ==> you would get a slice of the original you could store in the copy # * [:] utilizes slicing syntax but indicates "give me the whole thing" since by default, empty values are the min and max # indexing limits crazyList3[7][1] = 13 # this will change element 2 of the last object print(crazyList[7]) print(crazyList3[7]) # what if we do this? (slice it and then add back a missing element) crazyList3 = crazyList[:-1] print(len(crazyList3)) print(len(crazyList)) # crazyList 3 is now one element shorter than crazyList crazyList3.append(crazyList[7]) # add back missing element from crazyList print(len(crazyList3)) print(len(crazyList)) crazyList3[7][1] = 9 # this will change element 2 of the last object print(crazyList[7]) # note how again, both lists change print(crazyList3[7]) Explanation: So ... how to make a copy that does not mutate? (we can change one without changing the other)?<br/> Let's look at some things that don't work first ... 
End of explanation print("before:") print(crazyList[4]) print(crazyList3[4]) crazyList3[4][0] = 14 print("after:") print(crazyList[4]) print(crazyList3[4]) # try other tests of other elements and you will get same results Explanation: Python is hard to fool ... At first, I considered that we might now have two lists, but w/ just element 7 passed in by reference and so it mutates. But this shows our whole lists are still mutating: End of explanation import copy crazyList4 = copy.deepcopy(crazyList) print("before:") print(crazyList[4]) print(crazyList4[4]) crazyList4[4][0] = 15 print("") print("after:") print(crazyList[4]) print(crazyList4[4]) Explanation: deepcopy() comes from the copy library and the commands are documented at Python.org. For this situation, this solution seems to work for when mutation is undesirable: End of explanation print(stupidList) print(stupidList[1].index(5)) # this works on lists # but for nested lists, you would need to loop through each sublist and handle the error that # gets thrown each time it does not find the answer for element in stupidList: try: test_i = element.index(5) except Exception as ex: print("%s%s %s" %(type(ex), ":", ex)) print(test_i) # this strategy will not work on numpy arrays though try: crazyList[0].index(2) except Exception as anyE: print(type(anyE), anyE) # because we have a list containing numpy arrays, we could look in each one like this: print(crazyList[0]) np.where(crazyList[0]==2) # the above indicates that 2 lives here: crazyList[0][0][1] # started with crazyList[0], then found it at [0][1] inside the data structure # For floating point numbers, the level of precision matters # details on how this works are presented in this notebook: TMWP_np_where_and_floatingPoint_numbers.ipynb # the simple test in the cells that follow should help illustrate the problem and what to do, but # see aforementioned notebook for more detail # to perform a where() test on a structure like this, it is important to note that print() # rounds the result to 8 decimal places. The real underlying numbers have more decimal places print(crazyList2[1]); print("") print(crazyList2[1][2][3][4]) # get a number to test with print("{0:.20}".format(crazyList2[1][2][3][4])) # show more decimal places of the test number # Warning! If you re-run this notebook, new random nubers are generated and the value used for the test in this # cell will probably then fail. To fix this, re-run previous cell and copy in the final number shown # above up to at least 17 decimal places. print(np.where(crazyList2[1]==0.95881217854380618)) # number copied from output of previous line up to 17 decimal places # np.where() can find this, but will also return other values # that match up to the first 16 decimal places (if they exist) # precision appears to be up to 16 decimal places on a 32 bit machine # np.isclose # for finding less precise answers: finds numbers that "are close" print(np.isclose(crazyList2[1], 0.95881)) print("") print(np.where(np.isclose(crazyList2[1], 0.95881))) # note that when numbers are "close" this returns multiple values # in this case (crazyList2) only one number was "close" # more detailed testing is provided in: # TMWP_np_where_and_floatingPoint_numbers.ipynb Explanation: Should even deepcopy() not work, this topic online may prove helpful in these situations: Stack Overflow: When Deep Copy is not Enough. <a id="indexing" name="indexing"></a> Finding The Index of a Value Suppose we didn't know how to find the element but we knew the value we were looking for? 
How to get its index? End of explanation
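To close that question out with a compact, self-contained recap (hypothetical sample values, not the notebook's crazyList): .index() answers it for plain Python lists, while NumPy arrays need np.where(), with np.isclose() as the safer test for floating point values.

import numpy as np

plain = [10, 20, 30]
print(plain.index(30))                 # -> 2; raises ValueError if the value is absent

arr = np.array([0.1, 0.2, 0.1 + 0.2])
print(np.where(arr == 0.2))            # exact equality is fine here
print(np.where(np.isclose(arr, 0.3)))  # isclose avoids the precision surprises discussed above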
9,207
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploratory Data Analysis with Python We will explore the NYC MTA turnstile data set. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles (UNIT) by day in the subway system. Here is an example of what you could do with the data. James Kao investigates how subway ridership is affected by incidence of rain. <br> <font color="red"> NOTE Step1: Download Data Would you like to download New York City MTA Turnstile data? Each file is for a week of data and is approximately 24 Megabytes in size. Step2: Scrape MTA Turnstile Web Page to extract all available data files. Step3: Exercise 1 Download at least 2 weeks worth of MTA turnstile data (You can do this manually or via Python) Open up a file, use csv reader to read it, make a python dict where there is a key for each (C/A, UNIT, SCP, STATION). These are the first four columns. The value for this key should be a list of lists. Each list in the list is the rest of the columns in a row. For example, one key-value pair should look like{ ('A002','R051','02-00-00','LEXINGTON AVE') Step4: Create Excersize 1 Dictionary Step5: Header C/A Step6: Example Entry in Turnstile Dictionary Step7: Create Pandas DataFrame Step8: Exercise 2 Let's turn this into a time series. For each key (basically the control area, unit, device address and station of a specific turnstile), have a list again, but let the list be comprised of just the point in time and the cumulative count of entries. This basically means keeping only the date, time, and entries fields in each list. You can convert the date and time into datetime objects -- That is a python class that represents a point in time. You can combine the date and time fields into a string and use the dateutil package to convert it into a datetime object. Your new dict should look something like { ('A002','R051','02-00-00','LEXINGTON AVE') Step9: Example Entry in Turnstile Time Series Dictionary Step10: Add Time Stamp Series to Pandas DataFrame Step11: Exercise 3 These counts are cumulative every n hours. We want total daily entries. Now make it that we again have the same keys, but now we have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day. Step12: Example Entry in Turnstile Time Series Dictionary Step13: Return Daily Entry Totals Using Pandas Step14: Exercise 4 We will plot the daily time series for a turnstile. In ipython notebook, add this to the beginning of your next cell Step15: Pandas Plot Step16: Exercise 5 So far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. There are some ControlArea/Unit/Station groups that have a single turnstile, but most have multiple turnstilea-- same value for the C/A, UNIT and STATION columns, different values for the SCP column. We want to combine the numbers together -- for each ControlArea/UNIT/STATION combo, for each day, add the counts from each turnstile belonging to that combo. Pandas Return Total Passengers Filtered By Control Area, Unit, Station and Date Step17: Exercise 6 Similarly, combine everything in each station, and come up with a time series of [(date1, count1),(date2,count2),...] type of time series for each STATION, by adding up all the turnstiles in a station. 
Pandas Return Total Passengers Filtered By Station and Date Step18: Exercise 7 Plot the time series for a station Step19: Exercise 8 Make one list of counts for one week for one station. Monday's count, Tuesday's count, etc. so it's a list of 7 counts. Make the same list for another week, and another week, and another week. plt.plot(week_count_list) for every week_count_list you created this way. You should get a rainbow plot of weekly commute numbers on top of each other. Step20: Exercise 9 Over multiple weeks, sum total ridership for each station and sort them, so you can find out the stations with the highest traffic during the time you investigate Step21: Exercise 10 Make a single list of these total ridership values and plot it with plt.hist(total_ridership_counts) to get an idea about the distribution of total ridership among different stations. This should show you that most stations have a small traffic, and the histogram bins for large traffic volumes have small bars. Additional Hint
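Before the full worked solution below, the ranking step in Exercise 9 can be sketched with plain Python; the station names and counts here are invented purely for illustration.

# hypothetical station -> list of daily entry counts
station_daily_counts = {'59 ST': [11000, 12500, 11800],
                        'TIMES SQ-42 ST': [30000, 31000, 29500],
                        'CITY HALL': [4000, 4200, 3900]}
totals = {station: sum(counts) for station, counts in station_daily_counts.items()}
busiest_first = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(busiest_first)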
Python Code: from collections import defaultdict import csv import os import os.path as osp from dateutil.parser import parse import matplotlib.dates as mdates import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from k2datascience import nyc_mta from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" %matplotlib inline Explanation: Exploratory Data Analysis with Python We will explore the NYC MTA turnstile data set. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles (UNIT) by day in the subway system. Here is an example of what you could do with the data. James Kao investigates how subway ridership is affected by incidence of rain. <br> <font color="red"> NOTE: <br> This notebook uses code found in the <a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/nyc_mta.py"> <strong>k2datascience.nyc_mta</strong></a> package. To execute all the cells do one of the following items: <ul> <li>Install the k2datascience package to the active Python interpreter.</li> <li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li> <li>Create a link to the nyc_mta.py file in the same directory as this notebook.</li> </font> Imports End of explanation download = False file_quantity = 2 Explanation: Download Data Would you like to download New York City MTA Turnstile data? Each file is for a week of data and is approximately 24 Megabytes in size. End of explanation d = nyc_mta.TurnstileData() if download: d.write_data_files(qty=file_quantity) print(f'\n\nThe raw data files were written out to:\n\n{d.data_dir}') Explanation: Scrape MTA Turnstile Web Page to extract all available data files. End of explanation data_file = '170401.txt' data_dir = osp.join('..', 'data', 'nyc_mta_turnstile') data_file_path = osp.join(data_dir, data_file) Explanation: Exercise 1 Download at least 2 weeks worth of MTA turnstile data (You can do this manually or via Python) Open up a file, use csv reader to read it, make a python dict where there is a key for each (C/A, UNIT, SCP, STATION). These are the first four columns. The value for this key should be a list of lists. Each list in the list is the rest of the columns in a row. For example, one key-value pair should look like{ ('A002','R051','02-00-00','LEXINGTON AVE'): [ ['NQR456', 'BMT', '01/03/2015', '03:00:00', 'REGULAR', '0004945474', '0001675324'], ['NQR456', 'BMT', '01/03/2015', '07:00:00', 'REGULAR', '0004945478', '0001675333'], ['NQR456', 'BMT', '01/03/2015', '11:00:00', 'REGULAR', '0004945515', '0001675364'], ... ] } Store all the weeks in a data structure of your choosing Data File Path End of explanation turnstile = defaultdict(list) with open(data_file_path, 'r') as f: reader = csv.reader(f) initial_row = True for row in reader: if not initial_row: turnstile[tuple(row[:4])].append([x.strip() for x in row[4:]]) else: header = [x.strip() for x in row] initial_row = False Explanation: Create Excersize 1 Dictionary End of explanation header Explanation: Header C/A: Control Area (A002) UNIT: Remote Unit for a station (R051) SCP: Subunit Channel Position represents an specific address for a device (02-00-00) STATION: Represents the station name the device is located at LINENAME: Represents all train lines that can be boarded at this station Normally lines are represented by one character. LINENAME 456NQR represents train server for 4, 5, 6, N, Q, and R trains. 
DIVISION: Represents the Line originally the station belonged to BMT, IRT, or IND DATE: Represents the date (MM-DD-YY) TIME: Represents the time (hh:mm:ss) for a scheduled audit event DESc: Represent the "REGULAR" scheduled audit event (Normally occurs every 4 hours) Audits may occur more that 4 hours due to planning, or troubleshooting activities. Additionally, there may be a "RECOVR AUD" entry: This refers to a missed audit that was recovered. ENTRIES: The comulative entry register value for a device EXIST: The cumulative exit register value for a device End of explanation turnstile[('A002', 'R051', '02-00-00', '59 ST')][:3] Explanation: Example Entry in Turnstile Dictionary End of explanation d.get_data() d.data.shape d.data.head() Explanation: Create Pandas DataFrame End of explanation turnstile_ts = {} for k, v in turnstile.items(): turnstile_ts[k] = [[parse(f'{x[2]} {x[3]}'), int(x[-2])] for x in v] Explanation: Exercise 2 Let's turn this into a time series. For each key (basically the control area, unit, device address and station of a specific turnstile), have a list again, but let the list be comprised of just the point in time and the cumulative count of entries. This basically means keeping only the date, time, and entries fields in each list. You can convert the date and time into datetime objects -- That is a python class that represents a point in time. You can combine the date and time fields into a string and use the dateutil package to convert it into a datetime object. Your new dict should look something like { ('A002','R051','02-00-00','LEXINGTON AVE'): [ [datetime.datetime(2013, 3, 2, 3, 0), 3788], [datetime.datetime(2013, 3, 2, 7, 0), 2585], [datetime.datetime(2013, 3, 2, 12, 0), 10653], [datetime.datetime(2013, 3, 2, 17, 0), 11016], [datetime.datetime(2013, 3, 2, 23, 0), 10666], [datetime.datetime(2013, 3, 3, 3, 0), 10814], [datetime.datetime(2013, 3, 3, 7, 0), 10229], ... ], .... } Create Exersize 2 Time Series Dictionary Note: The extended computational time is due to the dateutil operation. End of explanation turnstile_ts[('A002', 'R051', '02-00-00', '59 ST')][:10] Explanation: Example Entry in Turnstile Time Series Dictionary End of explanation d.get_time_stamp() d.data.shape d.data.head() Explanation: Add Time Stamp Series to Pandas DataFrame End of explanation daily_total = defaultdict(list) for k, v in turnstile_ts.items(): days = set([x[0].date() for x in v]) for day in sorted(days): daily_total[k].append([day, sum([x[1] for x in v if x[0].date() == day])]) Explanation: Exercise 3 These counts are cumulative every n hours. We want total daily entries. Now make it that we again have the same keys, but now we have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day. 
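Because ENTRIES is a cumulative register, another way to reason about total daily entries is to difference consecutive readings and then sum those differences by day. The sketch below uses made-up readings for a single turnstile and is only an illustration of that idea, not the k2datascience implementation this notebook relies on; real data would also need guards for counter resets.

import pandas as pd

readings = pd.DataFrame({
    'time_stamp': pd.to_datetime(['2017-04-01 03:00', '2017-04-01 07:00',
                                  '2017-04-01 11:00', '2017-04-02 03:00']),
    'entries': [1000, 1040, 1110, 1300],   # cumulative counter values
})
readings['delta'] = readings['entries'].diff()            # change since the previous audit
daily = (readings.assign(date=readings['time_stamp'].dt.date)
                 .dropna()
                 .groupby('date')['delta']
                 .sum())
print(daily)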
End of explanation daily_total[('A002', 'R051', '02-00-00', '59 ST')] Explanation: Example Entry in Turnstile Time Series Dictionary End of explanation d.turnstile_daily.head(10) d.turnstile_daily.tail(10) Explanation: Return Daily Entry Totals Using Pandas End of explanation label_size = 14 fig = plt.figure('Station 59 ST: Daily Turnstile Entries', figsize=(10, 3), facecolor='white', edgecolor='black') ax1 = plt.subplot2grid((1, 1), (0, 0)) dt = daily_total[('A002', 'R051', '02-00-00', '59 ST')] dates = [x[0] for x in dt] entries = [x[1] for x in dt] ax1.plot_date(dates, entries, '^k-') plt.suptitle('Station: 59 ST', fontsize=24, y=1.16); plt.title('Control Area: A002 | Unit: R051 | Subunit Channel Position: 02-00-00', fontsize=18, y=1.10); ax1.set_xlabel('Date', fontsize=label_size) ax1.set_ylabel('Turnstile Entries', fontsize=label_size) fig.autofmt_xdate(); Explanation: Exercise 4 We will plot the daily time series for a turnstile. In ipython notebook, add this to the beginning of your next cell: %matplotlib inline This will make your matplotlib graphs integrate nicely with the notebook. To plot the time series, import matplotlib with import matplotlib.pyplot as plt Take the list of [(date1, count1), (date2, count2), ...], for the turnstile and turn it into two lists: dates and counts. This should plot it: plt.figure(figsize=(10,3)) plt.plot(dates,counts) End of explanation label_size = 14 marker_size = 5 fig = plt.figure('Station 59 ST: Daily Turnstile Entries', figsize=(10, 7), facecolor='white', edgecolor='black') rows, cols = (2, 1) ax1 = plt.subplot2grid((rows, cols), (0, 0)) ax2 = plt.subplot2grid((rows, cols), (1, 0), sharex=ax1) dt = d.turnstile_daily.query(('c_a == "A002"' '& unit == "R051"' '& scp == "02-00-00"' '& station == "59 ST"')) dt.plot(x=dt.index.levels[4], y='entries', color='IndianRed', legend=None, markersize=marker_size, marker='o', ax=ax1) ax1.set_title('Control Area: A002 | Unit: R051 | Subunit Channel Position: 02-00-00', fontsize=18, y=1.10) ax1.set_ylabel('Turnstile Entries', fontsize=label_size) dt.plot(x=dt.index.levels[4], y='exits', color='black', legend=None, markersize=marker_size, marker='d', ax=ax2) ax2.set_xlabel('Date', fontsize=label_size) ax2.set_ylabel('Turnstile Exits', fontsize=label_size) plt.suptitle('Station: 59 ST', fontsize=24, y=1.04); plt.tight_layout() fig.autofmt_xdate(); Explanation: Pandas Plot End of explanation d.get_station_daily(control_area=True, unit=True) station_daily_all = d._station_daily station_daily_all.head(10) station_daily_all.tail(10) Explanation: Exercise 5 So far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. There are some ControlArea/Unit/Station groups that have a single turnstile, but most have multiple turnstilea-- same value for the C/A, UNIT and STATION columns, different values for the SCP column. We want to combine the numbers together -- for each ControlArea/UNIT/STATION combo, for each day, add the counts from each turnstile belonging to that combo. Pandas Return Total Passengers Filtered By Control Area, Unit, Station and Date End of explanation station_daily = d.station_daily station_daily.query('station == "59 ST"') Explanation: Exercise 6 Similarly, combine everything in each station, and come up with a time series of [(date1, count1),(date2,count2),...] type of time series for each STATION, by adding up all the turnstiles in a station. 
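A minimal pandas version of that station-level roll-up is sketched below, assuming d.data carries the station, time_stamp, entries and exits columns that the earlier cells suggest; it is not necessarily how d.station_daily is built internally.

station_daily_sketch = (d.data
                          .assign(date=d.data['time_stamp'].dt.date)
                          .groupby(['station', 'date'])[['entries', 'exits']]
                          .sum())
station_daily_sketch.head()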
Pandas Return Total Passengers Filtered By Station and Date End of explanation label_size = 14 fig = plt.figure('Station 59 ST: Total Passengers', figsize=(12, 4), facecolor='white', edgecolor='black') ax1 = plt.subplot2grid((1, 1), (0, 0)) dt = station_daily.query('station == "59 ST"') dt.plot(kind='bar', x=dt.index.levels[1], alpha=0.5, ax=ax1) ax1.set_xlabel('Date', fontsize=label_size) ax1.set_ylabel('Passengers', fontsize=label_size) plt.suptitle('Station: 59 ST', fontsize=24, y=1.16); plt.title('Total Passengers', fontsize=18, y=1.10); fig.autofmt_xdate(); Explanation: Exercise 7 Plot the time series for a station End of explanation week_59st = station_daily.query('station == "59 ST"').reset_index() week_59st label_size = 14 fig = plt.figure('Station 59 ST: Weekly Passengers', figsize=(12, 4), facecolor='white', edgecolor='black') ax1 = plt.subplot2grid((1, 1), (0, 0)) for w in week_59st.week.unique(): mask = f'station == "59 ST" & week == {w}' dt = station_daily.query(mask).reset_index() dt.plot(kind='area', x=dt.weekday, y='entries', alpha=0.5, label=f'Week: {w}', ax=ax1) ax1.set_xlabel('Weekday', fontsize=label_size) ax1.set_ylabel('Passengers', fontsize=label_size) ax1.set_xticklabels(['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']) x_min, x_max, y_min, y_max = ax1.axis() ax1.axis((x_min, x_max, 0, 2e10)) plt.suptitle('Station: 59 ST', fontsize=24, y=1.16); plt.title('Weekly Passengers', fontsize=18, y=1.10); fig.autofmt_xdate(); Explanation: Exercise 8 Make one list of counts for one week for one station. Monday's count, Tuesday's count, etc. so it's a list of 7 counts. Make the same list for another week, and another week, and another week. plt.plot(week_count_list) for every week_count_list you created this way. You should get a rainbow plot of weekly commute numbers on top of each other. 
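The plt.plot(week_count_list) idea in that description can be made concrete with a tiny standalone sketch; the counts below are invented.

import matplotlib.pyplot as plt

week_count_lists = [
    [11000, 12500, 12300, 12800, 13000, 6000, 5200],   # week 1, Mon..Sun
    [11500, 12100, 12600, 12400, 12900, 5900, 5100],   # week 2, Mon..Sun
]
for week_counts in week_count_lists:
    plt.plot(week_counts)
plt.xticks(range(7), ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'])
plt.ylabel('Entries')
plt.show()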
End of explanation mask = ['station', pd.Series([x.week for x in d.data.time_stamp], name='week')] station_weekly = d.data.groupby(mask)['entries', 'exits'].sum() station_weekly.sort_values('entries', ascending=False) Explanation: Exercise 9 Over multiple weeks, sum total ridership for each station and sort them, so you can find out the stations with the highest traffic during the time you investigate End of explanation station_group = d.data.groupby('station') station_entries = station_group['entries'].sum() station_entries.tail() label_size = 14 suptitle_size = 24 title_size = 18 bins = 50 fig = plt.figure('', figsize=(10, 8), facecolor='white', edgecolor='black') rows, cols = (2, 1) ax1 = plt.subplot2grid((rows, cols), (0, 0)) ax2 = plt.subplot2grid((rows, cols), (1, 0)) station_entries.sort_values().plot(kind='bar', ax=ax1) ax1.set_title('Total Passengers', fontsize=title_size); ax1.set_xlabel('Stations', fontsize=label_size) ax1.set_ylabel('Passengers', fontsize=label_size) ax1.set_xticklabels('') station_entries.plot(kind='hist', alpha=0.5, bins=bins, edgecolor='black', label='_nolegend_', ax=ax2) ax2.axvline(station_entries.mean(), color='crimson', label='Mean', linestyle='--') ax2.axvline(station_entries.median(), color='black', label='Median', linestyle='-.') ax2.legend() ax2.set_xlabel('Total Passengers', fontsize=label_size) ax2.set_ylabel('Count', fontsize=label_size) plt.suptitle('All NYC MTA Stations', fontsize=suptitle_size, y=1.03); plt.tight_layout(); Explanation: Exercise 10 Make a single list of these total ridership values and plot it with plt.hist(total_ridership_counts) to get an idea about the distribution of total ridership among different stations. This should show you that most stations have a small traffic, and the histogram bins for large traffic volumes have small bars. Additional Hint: If you want to see which stations take the meat of the traffic, you can sort the total ridership counts and make a plt.bar graph. For this, you want to have two lists: the indices of each bar, and the values. The indices can just be 0,1,2,3,..., so you can do indices = range(len(total_ridership_values)) plt.bar(indices, total_ridership_values) End of explanation
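A quick numeric companion to that histogram, reusing the station_entries series from the cell above, makes the skew explicit without any plotting.

print(station_entries.describe())                              # mean sits well above the median
print(station_entries.sort_values(ascending=False).head(10))   # the handful of heavy hitters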
9,208
Given the following text description, write Python code to implement the functionality described below step by step Description: pd.DataFrame({'article_uni' Step1: a=pd.pivot_table(df,index=["article_uni"],values=["article_rating"],aggfunc=[len,np.mean], columns='year') a Step2: b=df[df.article_pub_date>=data_first_ranking].groupby(['article_uni', 'year']).article_rating.agg(lambda x Step3: column_name = '{year}{country}{header}_gapminder_grid'.format( year=year, country=country, header=col_name ) Step4: browser = webdriver.Edge(executable_path="..\MicrosoftWebDriver.exe")
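Since the step descriptions above are terse, here is a tiny self-contained illustration of the Step1 pivot with made-up review data (the real df is not shown in this excerpt).

import pandas as pd
import numpy as np

toy = pd.DataFrame({'article_uni': ['Uni A', 'Uni A', 'Uni B', 'Uni B'],
                    'year': [2016, 2017, 2016, 2017],
                    'article_rating': [0.8, 0.9, 0.4, 0.5]})
print(pd.pivot_table(toy, index=['article_uni'], values=['article_rating'],
                     aggfunc=[len, np.mean], columns='year'))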
Python Code: df_ranking=pd.read_csv('article_uni.csv', index_col=0) print(df_ranking.shape) df_ranking.head() df.article_uni.replace('The London School of Economics and Political Science (United-Kingdom)', 'London School of Economics and Political Science', inplace=True) from sklearn.preprocessing import MinMaxScaler, StandardScaler scaler=StandardScaler() uni_cluster_1=df[df.article_rating==df.article_rating.max()].article_uni.unique() uni_cluster_2=list(set(df.article_uni.unique())-set(uni_cluster_1)) df[df.article_uni.isin(uni_cluster_1)].article_rating.values*2/10.max() df['article_rating'][df.article_uni.isin(uni_cluster_1)]=df[df.article_uni.isin(uni_cluster_1)].article_rating.values*2/10#scaler.fit_transform(df[df.article_uni.isin(uni_cluster_1)].article_rating.values) df['article_rating'][df.article_uni.isin(uni_cluster_2)]=df[df.article_uni.isin(uni_cluster_2)].article_rating.values*2.5/10#scaler.fit_transform(df[df.article_uni.isin(uni_cluster_2)].article_rating.values) df.article_rating.hist() Explanation: pd.DataFrame({'article_uni':df.article_uni.unique()}).to_csv('article_uni.csv') End of explanation df_ranking.sort_values(['article_uni'],inplace=True) df_ranking.head() Explanation: a=pd.pivot_table(df,index=["article_uni"],values=["article_rating"],aggfunc=[len,np.mean], columns='year') a End of explanation b=df[df.article_pub_date>=data_first_ranking].groupby(['article_uni', 'year']).article_rating.agg({'article_rating_mean':'mean', 'article_rating_count':'count', 'article_rating_moda':lambda x:x.value_counts().index[0]}).reset_index() b['ranking']=np.zeros((1, b.shape[0]))[0] b.head() for name in b.article_uni.unique(): for year in b[b.article_uni==name].year: #print(year) b['ranking'][(b.article_uni==name)&(b.year==year)]=df_ranking[(df_ranking.article_uni==name)][str(year)].values[0] b=b[(~b.ranking.isnull())] for year in np.sort(b.year.unique()): print(year, round(np.corrcoef(b[(b.year==year)].article_rating_mean, b[(b.year==year)].ranking)[1,0],2)) for year in np.sort(b.year.unique()): print(year, round(np.corrcoef(b[(b.year==year)].article_rating_moda, b[(b.year==year)].ranking)[1,0],2)) df_all=pd.merge(b, df_ranking[['article_uni','country']], on=['article_uni']) df_all.head() import plotly.plotly as py from plotly.grid_objs import Grid, Column from plotly.tools import FigureFactory as FF import pandas as pd import time years_from_col = set(df_all['year']) years_ints = sorted(list(years_from_col)) years = [str(year) for year in years_ints] # make list of continents countries = [] for country in df_all['country']: if country not in countries: countries.append(country) columns = [] # make grid for year in years: for country in countries: df_by_year = df_all[df_all['year'] == int(year)] df_by_year_and_cont = df_by_year[df_by_year['country'] == country] for col_name in df_by_year_and_cont: #print(col_name) # each column name is unique column_name = '{year}_{country}_{header}_gapminder_grid'.format( year=year, country=country, header=col_name ) a_column = Column(list(df_by_year_and_cont[col_name]), column_name) columns.append(a_column) # upload grid grid = Grid(columns) url = py.grid_ops.upload(grid, 'gapminder_grid'+str(time.time()), auto_open=False) url figure = { 'data': [], 'layout': {}, 'frames': [], 'config': {'scrollzoom': True} } # fill in most of layout figure['layout']['yaxis'] = {'range': [-50, 200], 'title': 'Ranking un THE', 'gridcolor': '#FFFFFF'} figure['layout']['xaxis'] = {'range': [-0.1, 1.1], 'title': 'Ranking mean', 'gridcolor': '#FFFFFF'} 
figure['layout']['hovermode'] = 'closest' figure['layout']['plot_bgcolor'] = 'rgb(223, 232, 243)' figure['layout']['sliders'] = { 'active': 0, 'yanchor': 'top', 'xanchor': 'left', 'currentvalue': { 'font': {'size': 20}, 'prefix': 'text-before-value-on-display', 'visible': True, 'xanchor': 'right' }, 'transition': {'duration': 1000, 'easing': 'cubic-in-out'}, 'pad': {'b': 10, 't': 50}, 'len': 0.9, 'x': 0.1, 'y': 0, 'steps': [...] } { 'method': 'animate', 'label': 'label-for-frame', 'value': 'value-for-frame(defaults to label)', 'args': [{'frame': {'duration': 300, 'redraw': False}, 'mode': 'immediate'} ], } sliders_dict = { 'active': 0, 'yanchor': 'top', 'xanchor': 'left', 'currentvalue': { 'font': {'size': 20}, 'prefix': 'Year:', 'visible': True, 'xanchor': 'right' }, 'transition': {'duration': 300, 'easing': 'cubic-in-out'}, 'pad': {'b': 10, 't': 50}, 'len': 0.9, 'x': 0.1, 'y': 0, 'steps': [] } figure['layout']['updatemenus'] = [ { 'buttons': [ { 'args': [None, {'frame': {'duration': 500, 'redraw': False}, 'fromcurrent': True, 'transition': {'duration': 300, 'easing': 'quadratic-in-out'}}], 'label': 'Play', 'method': 'animate' }, { 'args': [[None], {'frame': {'duration': 0, 'redraw': False}, 'mode': 'immediate', 'transition': {'duration': 0}}], 'label': 'Pause', 'method': 'animate' } ], 'direction': 'left', 'pad': {'r': 10, 't': 87}, 'showactive': False, 'type': 'buttons', 'x': 0.1, 'xanchor': 'right', 'y': 0, 'yanchor': 'top' } ] custom_colors = { 'UK': 'rgb(171, 99, 250)', 'USA': 'rgb(230, 99, 250)', 'Canada': 'rgb(99, 110, 250)', } Explanation: b=df[df.article_pub_date>=data_first_ranking].groupby(['article_uni', 'year']).article_rating.agg(lambda x:x.value_counts().index[0]).reset_index() End of explanation df_all.country.unique() col_name_template = '{year}_{country}_{header}_gapminder_grid' year = df_all.year.min() for country in countries: data_dict = { 'xsrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='article_rating_mean' )), 'ysrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='ranking' )), 'mode': 'markers', 'textsrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='article_uni' )), 'marker': { 'sizemode': 'area', 'sizeref': 0.05, 'sizesrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='article_rating_count' )), 'color': custom_colors[country] }, 'name': country } figure['data'].append(data_dict) for year in years: frame = {'data': [], 'name': str(year)} for country in countries: data_dict = { 'xsrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='article_rating_mean' )), 'ysrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='ranking' )), 'mode': 'markers', 'textsrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='article_uni', )), 'marker': { 'sizemode': 'area', 'sizeref': 0.05, 'sizesrc': grid.get_column_reference(col_name_template.format( year=year, country=country, header='article_rating_count' )), 'color': custom_colors[country] }, 'name': country } frame['data'].append(data_dict) figure['frames'].append(frame) slider_step = {'args': [ [year], {'frame': {'duration': 300, 'redraw': False}, 'mode': 'immediate', 'transition': {'duration': 300}} ], 'label': year, 'method': 'animate'} sliders_dict['steps'].append(slider_step) figure['layout']['sliders'] = [sliders_dict] 
py.icreate_animations(figure, 'gapminder_example'+str(time.time())) import seaborn as sns df_all.head() for i in np.sort(df_all.year.unique()): plt.scatter(df_all[df_all.year==i].article_rating_mean, df_all[df_all.year==i].ranking) plt.title('Corr between ranking in THE and ranking review in {:0.0f}'.format(i)) plt.show() df_all.article_rating_count for i in np.sort(df_all.year.unique()): plt.scatter(df_all[df_all.year==i].article_rating_count, df_all[df_all.year==i].ranking) plt.title('Corr between ranking in THE and ranking review count in {:0.0f}'.format(i)) plt.show() plt.hist(df_all.article_rating_count) import selenium from selenium import webdriver browser = webdriver.Chrome(executable_path="..\\chromedriver.exe") Explanation: column_name = '{year}{country}{header}_gapminder_grid'.format( year=year, country=country, header=col_name ) End of explanation url = "https://www.niche.com/colleges/stanford-university/reviews/" browser.get(url) browser.find_element_by_css_selector('.icon-arrowright-thin--pagination').click() import math as ma ma.sqrt(2) browser.page_source element.get_attribute('innerHTML') Explanation: browser = webdriver.Edge(executable_path="..\MicrosoftWebDriver.exe") End of explanation
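To extend the review scraping above, a hypothetical pagination loop might look like the sketch below; it reuses the CSS selector from the earlier cell, sticks to the selenium 3 style API already in use, and substitutes a crude sleep for a proper explicit wait.

import time

review_pages = []
while True:
    review_pages.append(browser.page_source)
    arrows = browser.find_elements_by_css_selector('.icon-arrowright-thin--pagination')
    if not arrows:
        break                      # no 'next' arrow left, so this was the last page
    arrows[0].click()
    time.sleep(1)                  # politeness delay between page loads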
9,209
Given the following text description, write Python code to implement the functionality described below step by step Description: Reflecting on 2017, I decided to return to my most popular blog topic (at least by the number of emails I get). Last time, I built a crude statistical model to predict the result of football matches. I even presented a webinar on the subject here (it's free to sign up). During the presentation, I described a coefficient in the model that accounts for the fact that the home team tends to score more goals than the away team. This is called the home advantage or home field advantage and can probably be explained by a combination of physcological (e.g. familiarity with surroundings) and physical factors (e.g. travel). It occurs in various sports, including American football, baseball, basketball and soccer. Sticking to soccer/football, I mentioned in my talk how it would be interesting to see how this effect varies around the world. In which countries do the home teams enjoy the greatest advantage? We're going to use the same statistcal model as last time, so there won't be any new statistical features developed in this post. Instead, it will focus on retrieving the appropriate goals data for even the most obscure leagues in the world (yes, even the Irish Premier Division) and then interactively visualising the results with D3. The full code can be found in the accompanying Jupyter notebook. Calculating Home Field Advantage The first consideration should probably be how to calculate home advantage. The traditional approach is to look at team matchups and check whether teams achieved better, equal or worse results at home than away. For example, let's imagine Chlesea beat Arsenal 2-0 at home and drew 1-1 away. That would be recored as a better home result (+2 goals versus 0). This process is repeated for every opponent and so you can actually construct a trinomial distribution and test whether there was a statistically significant home field effect. This works for balanced leagues, where team play each other an equal number of times home and away. While this holds for Europe's most famous leagues (e.g. EPL, La Liga), there are various leagues where teams play each other threes times (e.g. Ireland, Montenegro, Tajikistan aka The Big Leagues) or even just once (e.g Libya and to a lesser extent MLS (balanced for teams within the same conference)). There's also issues with postponements and abandonments rendering some leagues slightly unbalanced (e.g. Sri Lanka). For those reasons, we'll opt for a different (though not necessarily better) approach. In the previous post, we built a model for the EPL 2016/17 season, using the number of goals scored in the past to predict future results. Looking at the model coefficients again, you see the home coefficient has a value of approximately 0.3. By taking the exponent of this value ($exp^{0.3}=1.35$), it tells us that the home team are generally 1.35 times more likely to score than the away team. In case you don't recall, the model accounts for team strength/weakness by including coefficients for each team (e.g 0.07890 and -0.96194 for Chelsea and Sunderland, respectively). Let's see how this value compares with the lower divisions in England over the past 10 years. We'll pull the data from football-data.co.uk, which can loaded in directly using the url link for each csv file. 
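As a quick look at what those files contain, one season can be pulled straight from the football-data.co.uk URL pattern mentioned above; FTHG and FTAG are the full-time home and away goal counts the model is built on.

import pandas as pd

epl_1617 = pd.read_csv("http://www.football-data.co.uk/mmz4281/1617/E0.csv")
print(epl_1617[['HomeTeam', 'AwayTeam', 'FTHG', 'FTAG']].head())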
First, we'll design a function that will take a dataframe of match results as an input and return the home field advantage (plus confidence interval limits) for that league. Step1: I've essentially combined various parts of the previous post into one convenient function. If it looks a little strange, then I suggest you consult the original post. Okay, we're ready to start calculating some home advantage scores. Step2: It's as easy as that. Feed a url from football-data.co.uk into the function and it'll quickly tell you the statistical advantage enjoyed by home teams in that league. Note that the latter two values repesent the left and right limit of the 95% confidence interval around the mean value. The first value in the array is actually just the log of the number of goals scored by the home team divided by the total number of away goals. Step3: The goals ratio calculation is obviously much simpler and definitely more intuitive. But it doesn't allow me to reference my previous post as much (link link link) and it fails to provide any uncertainty around the headline figure. Let's plot the home advantage figure for the top 5 divisions of the English league pyramid for since 2005. You can remove those hugely informative confidence interval bars by unticking the checkbox. Step4: It's probably more apparent without those hugely informative confidence interval bars, but it seems that the home advantage score decreases slightly as you move down the pyramid (analysis by Sky Sports produced something similar). This might make sense for two reasons. Firstly, bigger teams generally have larger stadiums and more supporters, which could strengthen the home field advantage. Secondly, as you go down the leagues, I suspect the quality gap between teams narrows. Taking it to an extreme, when I used to play Sunday league football, it didn't really matter where we played... we still lost. In that sense, one must be careful comparing the home advantage between leagues, as it will be affected by the relative team strengths within those leagues. For example, a league with a very dominant team (or teams) will record a lower home advantage score, as that dominant team will score goals home and away with little difference (Man Utd would probably beat Cork City 6-0 at Old Trafford and Turners Cross!). Having warned about the dangers of comparing different leagues with this approach, let's now compare the top five leagues in Europe over the same time period as before. Step5: Honestly, there's not much going on there. With the poissble exception of the Spanish La Liga since 2010, the home field advantage enjoyed by the teams in each league is broadly similar (and that's before we bring in the idea of confidence intervals and hypothesis testing). Home Advantage Around the World To find more interesting contrasts, we must venture to crappier and more corrupt leagues. My hunch is that home advantage would be negligible in countries where the overall quality (team, infastructure, etc.) is very low. And by low, I mean leagues worse than the Irish Premier Division (yes, they exist). Unfortunately, the historical results for such leagues are not available on football-data.co.uk. Instead, we'll scrape the data off betexplorer. I'm extremely impressed by the breadth of this site. You can even retrieve past results for the French overseas department of Rรฉunion. Fun fact Step6: You don't actually need to run your own spider, as I've shared the output to my GitHub account. We can import the json file in directly using pandas. 
Step7: Hopefully, that's all relatively clear. You'll notice that it's very similar to the format used by football-data, which means that we can feed this dataframe into the get_home_team_advantage function. Sometimes, matches are awarded due to one team fielding an ineligible player or crowd trouble. We should probably exclude such matches from the home field advantage calculations. Step8: We're ready to put it all together. I'll omit the code (though it can be found here), but we'll loop through each country and league combination (just in case you decide to include multiple leagues from the same country) and calculate the home advantage score, plus its confidence limits as well as some other information for each league (number of teams, average number of goals in each match). I've converted the pandas output to a datatables table that you can interactively filter and sort. Step9: Focusing on the home_advantage_score column, teams in Nigeria by far enjoy the greatest benefit from playing at home (score = 1.195). In other words, home teams scored 3.3 (= $e^{1.195}$) times more goals than their opponents. This isn't new information and can be attributed to a combination of corruption (e.g. bribing referees) and violent fans. In fact, my motivation for this post was to identify more football corruption hotspots. Alas, when it comes to home turf invincibility, it seems Nigeria are the World Cup winners. Fifteen leagues have a negative home_advantage_score, meaning that visiting teams actually scored more goals than their hosts- though none was statistically significant. By some distance, the Maldives records the most negative score. Luckily, I've twice researched this beautiful archipelago and I'm aware that all matches in the Dhiveli Premier League are played at the national stadium in Malรฉ (much like the Gibraltar Premier League). So it would make sense that there's no particular advantage gained by the home team. Libya is another interesting example. Owing to security issues, all matches in the Libyan Premier League are played in neutral venues with no spectators present. Quite fittingly, it returned a home advantage score just off zero. Generally speaking, the leagues with near zero home advantage come from small countries (minimal inconvenience for travelling teams) with a small number of teams and they tend to share stadiums. If you sort the avg_goals column, you'll see that Bolivia is the place to be for goals (average = 4.47). But rather than sifting through that table or explaining the results with words, the most intuitive way to illustrate this type of data is with a map of world. This might also help to clarify whether there's any geographical influence on the home advantage effect. Again, I won't go into the details (an appendix can be found in the Jupyter notebook), but I built a map using the JavaScript library, D3. And by built I mean I adapted the code from this post and this post. Though a little outdated now, I found this post quite useful too. Finally, I think this post shows off quite well what you can do with maps using D3. And here it is! The country colour represents its home_advantage_score. You can zoom in and out and hover over a country to reveal a nice informative overlay; use the radio buttons to switch between home advantage and goals scored. I recommend viewing it on desktop (mobile's a bit jumpy) and on Chrome (sometimes have security issues with Firefox). 
It's not scientifically rigorous (not in academia any more, baby!), but there's evidence for some geographical trends. For example, it appears that home advantage is stronger in Africa and South America compared to Western and Central Europe, with the unstable warzones of Libya, Somalia and Paraguay (?) being notable exceptions. As for average goals, Europe boasts stonger colours compared to Africa, though South East Asia seems to be the global hotspot for goals. North America is also quite dark, but you can debate whether Canada should be coloured grey, as the best Canadian teams belong to the American soccer system. Conclusion Using a previously described model and some JavaScript, this post explored the so called home advantage in football leagues all over the world (including Rรฉunion). I don't think it uncovered anything particularly amazing Step10: Shapefiles and TopoJSON Generating the world map required a little bit of command line, python and a whole lot of JavaScript (specifically D3). The command line was used to convert the shapefiles into geojson files (see ogr2ogr and finally into the topojson format. The main reason for the last step is that it drastically reduces the file size, which should improve its onsite loading (though it could also affect the quality of the map). My particular map was complicated by the fact that some sovereign states are composed of several countries that organise their own national competitions. If that sounds weird, think of the United Kingdom. It's a member of the UN and a sovereign state in its own right (despite what Brexiteers may say). But there's no UK (or British) Premier League; there's the English/Welsh/Scottish/Northern Irish Premier League/Premiership. Similarly, Reunion is part of France but has its own football league. Then again, the Basque country is recognised as a nation within Spain, but has no internationally recognised national league. In summary, it's complicated. Political realities aside, we need to get the geojson file for all of the countries in the world (see all.geojson available here). We must remove the United Kingdom, France and a few others. To reduce the file size, I also removed some country information that wasn't relevant for my purposes (population, GDP, etc.). The geojson files for England, Scotland, Reunion, etc. were a little harder to track down. The shapefile containing those country subdivisions can be downloaded here, which can be converted into geojson files with ogr2ogr. Unfortunately, that file contains various subdivisions that don't correspond to actual football leagues (e.g. Belgium is split into the Flemish and Walloon regions). That means we need append the subdivisions we do want to higher level geojson file, which I did my manipulating the two json files in Python.
Python Code: # importing the tools required for the Poisson regression model import statsmodels.api as sm import statsmodels.formula.api as smf import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn def get_home_team_advantage(goals_df, pval=0.05): # extract relevant columns model_goals_df = goals_df[['HomeTeam','AwayTeam','FTHG','FTAG']] # rename goal columns model_goals_df = model_goals_df.rename(columns={'FTHG': 'HomeGoals', 'FTAG': 'AwayGoals'}) # reformat dataframe for the model goal_model_data = pd.concat([model_goals_df[['HomeTeam','AwayTeam','HomeGoals']].assign(home=1).rename( columns={'HomeTeam':'team', 'AwayTeam':'opponent','HomeGoals':'goals'}), model_goals_df[['AwayTeam','HomeTeam','AwayGoals']].assign(home=0).rename( columns={'AwayTeam':'team', 'HomeTeam':'opponent','AwayGoals':'goals'})]) # build poisson model poisson_model = smf.glm(formula="goals ~ home + team + opponent", data=goal_model_data, family=sm.families.Poisson()).fit() # output model parameters poisson_model.summary() return np.concatenate((np.array([poisson_model.params['home']]), poisson_model.conf_int(alpha=pval).values[-1])) Explanation: Reflecting on 2017, I decided to return to my most popular blog topic (at least by the number of emails I get). Last time, I built a crude statistical model to predict the result of football matches. I even presented a webinar on the subject here (it's free to sign up). During the presentation, I described a coefficient in the model that accounts for the fact that the home team tends to score more goals than the away team. This is called the home advantage or home field advantage and can probably be explained by a combination of physcological (e.g. familiarity with surroundings) and physical factors (e.g. travel). It occurs in various sports, including American football, baseball, basketball and soccer. Sticking to soccer/football, I mentioned in my talk how it would be interesting to see how this effect varies around the world. In which countries do the home teams enjoy the greatest advantage? We're going to use the same statistcal model as last time, so there won't be any new statistical features developed in this post. Instead, it will focus on retrieving the appropriate goals data for even the most obscure leagues in the world (yes, even the Irish Premier Division) and then interactively visualising the results with D3. The full code can be found in the accompanying Jupyter notebook. Calculating Home Field Advantage The first consideration should probably be how to calculate home advantage. The traditional approach is to look at team matchups and check whether teams achieved better, equal or worse results at home than away. For example, let's imagine Chlesea beat Arsenal 2-0 at home and drew 1-1 away. That would be recored as a better home result (+2 goals versus 0). This process is repeated for every opponent and so you can actually construct a trinomial distribution and test whether there was a statistically significant home field effect. This works for balanced leagues, where team play each other an equal number of times home and away. While this holds for Europe's most famous leagues (e.g. EPL, La Liga), there are various leagues where teams play each other threes times (e.g. Ireland, Montenegro, Tajikistan aka The Big Leagues) or even just once (e.g Libya and to a lesser extent MLS (balanced for teams within the same conference)). There's also issues with postponements and abandonments rendering some leagues slightly unbalanced (e.g. 
Sri Lanka). For those reasons, we'll opt for a different (though not necessarily better) approach. In the previous post, we built a model for the EPL 2016/17 season, using the number of goals scored in the past to predict future results. Looking at the model coefficients again, you see the home coefficient has a value of approximately 0.3. By taking the exponent of this value ($exp^{0.3}=1.35$), it tells us that the home team are generally 1.35 times more likely to score than the away team. In case you don't recall, the model accounts for team strength/weakness by including coefficients for each team (e.g 0.07890 and -0.96194 for Chelsea and Sunderland, respectively). Let's see how this value compares with the lower divisions in England over the past 10 years. We'll pull the data from football-data.co.uk, which can loaded in directly using the url link for each csv file. First, we'll design a function that will take a dataframe of match results as an input and return the home field advantage (plus confidence interval limits) for that league. End of explanation # home field advantage for EPL 2016/17 season get_home_team_advantage(pd.read_csv("http://www.football-data.co.uk/mmz4281/1617/E0.csv")) Explanation: I've essentially combined various parts of the previous post into one convenient function. If it looks a little strange, then I suggest you consult the original post. Okay, we're ready to start calculating some home advantage scores. End of explanation temp_goals_df = pd.read_csv("http://www.football-data.co.uk/mmz4281/1617/E0.csv") [np.exp(get_home_team_advantage(temp_goals_df)[0]), np.sum(temp_goals_df['FTHG'])/float(np.sum(temp_goals_df['FTAG']))] Explanation: It's as easy as that. Feed a url from football-data.co.uk into the function and it'll quickly tell you the statistical advantage enjoyed by home teams in that league. Note that the latter two values repesent the left and right limit of the 95% confidence interval around the mean value. The first value in the array is actually just the log of the number of goals scored by the home team divided by the total number of away goals. 
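One small readability add-on (not part of the original notebook): unpacking that three-element array and exponentiating converts the log-scale score and its confidence limits into goal multipliers.

adv, lower, upper = get_home_team_advantage(pd.read_csv("http://www.football-data.co.uk/mmz4281/1617/E0.csv"))
print("home multiplier ~ {:.2f} (95% CI {:.2f} to {:.2f})".format(np.exp(adv), np.exp(lower), np.exp(upper)))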
End of explanation division_results = [] for division in range(5): year_results = [] for year in range(2005,2017): if division==4: division_string = 'C' else: division_string = str(division) url = "http://www.football-data.co.uk/mmz4281/"+str(year)[-2:]+str(year+1)[-2:]+"/E"+division_string+".csv" print(url) year_results.append(np.concatenate((np.array([year, division]), get_home_team_advantage(pd.read_csv(url))))) division_results.append(np.vstack(year_results)) from ipywidgets import interact, Checkbox def plot_func(freq): fig, ax1 = plt.subplots(1, 1,figsize=(7, 5)) for div, (div_name, div_col) in enumerate(zip(['EPL', 'Championship', 'League 1', 'League 2', 'Conference'], ["#9b5369", "#7c8163", "#c1a381", "#d9bcc0", "#F67280"])): if freq: ax1.errorbar(division_results[div][:,0], division_results[div][:,2], yerr= (division_results[div][:,4] - division_results[div][:,2]), linestyle='-', marker='o',label=div_name, color = div_col) else: ax1.plot(division_results[div][:,0], division_results[div][:,2], linestyle='-', marker='o',label=div_name, color = div_col) #[str(int(item.get_text())+1)[-2:] for item in ax1.get_xticklabels()] ax1.set_xticks([2005, 2007, 2009, 2011, 2013, 2015]) ax1.set_xlabel('Season', fontsize=12) ax1.set_ylabel('Home Advantage Score', fontsize=12) ax1.set_ylim([-0.05, 0.6]) ax1.set_xlim([2004.5, 2016.5]) ax1.set_xticklabels(['05/06', '07/08', '09/10', '11/12', '13/14', '15/16']) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.show() interact(plot_func, freq = Checkbox( value=True, description='Error Bars', disabled=False)) Explanation: The goals ratio calculation is obviously much simpler and definitely more intuitive. But it doesn't allow me to reference my previous post as much (link link link) and it fails to provide any uncertainty around the headline figure. Let's plot the home advantage figure for the top 5 divisions of the English league pyramid for since 2005. You can remove those hugely informative confidence interval bars by unticking the checkbox. End of explanation country_results = [] for country in ['E0', 'SP1', 'I1', 'D1', 'F1']: year_results = [] for year in range(2005,2017): url = "http://www.football-data.co.uk/mmz4281/"+str(year)[-2:]+str(year+1)[-2:]+ "/" + country +".csv" print(url) year_results.append(np.concatenate((np.array([year, division]), get_home_team_advantage(pd.read_csv(url))))) country_results.append(np.vstack(year_results)) def plot_func(freq): fig, ax1 = plt.subplots(1, 1,figsize=(7, 5)) for div, (div_name, div_col) in enumerate(zip(['EPL', 'La Liga', 'Serie A', 'Bundesliga 1', 'Ligue 1'], ["#9b5369", "#A1D9FF", "#CA82F8", "#ED93CB", "#78B7BB"])): if freq: ax1.errorbar(country_results[div][:,0], country_results[div][:,2], yerr= (country_results[div][:,4] - country_results[div][:,2]), linestyle='-', marker='o',label=div_name, color = div_col) else: ax1.plot(country_results[div][:,0], country_results[div][:,2], linestyle='-', marker='o',label=div_name, color = div_col) ax1.set_xticks([2005, 2007, 2009, 2011, 2013, 2015]) ax1.set_ylim([-0.05, 0.6]) ax1.set_xlim([2004.5, 2016.5]) ax1.set_xlabel('Season', fontsize=12) ax1.set_ylabel('Home Advantage Score', fontsize=12) ax1.set_xticklabels(['05/06', '07/08', '09/10', '11/12', '13/14', '15/16']) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
plt.show() interact(plot_func, freq = Checkbox( value=True, description='Error Bars', disabled=False)) Explanation: It's probably more apparent without those hugely informative confidence interval bars, but it seems that the home advantage score decreases slightly as you move down the pyramid (analysis by Sky Sports produced something similar). This might make sense for two reasons. Firstly, bigger teams generally have larger stadiums and more supporters, which could strengthen the home field advantage. Secondly, as you go down the leagues, I suspect the quality gap between teams narrows. Taking it to an extreme, when I used to play Sunday league football, it didn't really matter where we played... we still lost. In that sense, one must be careful comparing the home advantage between leagues, as it will be affected by the relative team strengths within those leagues. For example, a league with a very dominant team (or teams) will record a lower home advantage score, as that dominant team will score goals home and away with little difference (Man Utd would probably beat Cork City 6-0 at Old Trafford and Turners Cross!). Having warned about the dangers of comparing different leagues with this approach, let's now compare the top five leagues in Europe over the same time period as before. End of explanation import scrapy import re # for text parsing import logging import pandas as pd class FootballSpider(scrapy.Spider): name = 'footballSpider' # page to scrape start_urls = pd.read_csv( 'https://raw.githubusercontent.com/dashee87/blogScripts/master/files/league_links.csv')['link'].tolist() + \ pd.read_csv("league_links_.csv")['link2'].dropna(axis=0, how='all').tolist() # if you want to impose a delay between sucessive scrapes # download_delay = 1.0 def parse(self, response): self.logger.info('Scraping page: %s', response.url) country_league = response.css('.list-breadcrumb__item__in::text').extract() for num, (hometeam, awayteam, match_result, date) in \ enumerate(zip(response.css('.in-match span:nth-child(1)'), response.css('.in-match span:nth-child(2)'), response.css('td.h-text-center'), response.css('.h-text-no-wrap::text').extract())): yield {'country':country_league[2], 'league': country_league[3], 'HomeTeam': hometeam.css('::text').extract_first(), 'AwayTeam':awayteam.css('::text').extract_first(), 'FTHG': re.sub(':.*', '', match_result.css('::text').extract_first()), 'FTAG': re.sub('.*:', '', match_result.css('::text').extract_first()), 'awarded': ' ' in match_result.css('::text').extract_first() or 'AWA.' in match_result.css('::text').extract_first(), 'date':date} from scrapy.crawler import CrawlerProcess process = CrawlerProcess({ 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'FEED_FORMAT': 'json', 'FEED_URI': 'all_league_goals_.json' }) # minimising the information presented on the scrapy log logging.getLogger('scrapy').setLevel(logging.WARNING) process.crawl(FootballSpider) process.start() Explanation: Honestly, there's not much going on there. With the poissble exception of the Spanish La Liga since 2010, the home field advantage enjoyed by the teams in each league is broadly similar (and that's before we bring in the idea of confidence intervals and hypothesis testing). Home Advantage Around the World To find more interesting contrasts, we must venture to crappier and more corrupt leagues. My hunch is that home advantage would be negligible in countries where the overall quality (team, infastructure, etc.) is very low. 
And by low, I mean leagues worse than the Irish Premier Division (yes, they exist). Unfortunately, the historical results for such leagues are not available on football-data.co.uk. Instead, we'll scrape the data off betexplorer. I'm extremely impressed by the breadth of this site. You can even retrieve past results for the French overseas department of Rรฉunion. Fun fact: Dimtri Payet spent the 2004 season at AS Excelsior of the Rรฉunion Premier League. We'll use Scrapy to pull the appropriate information off the website. If you've never used Scrapy before, then you should check out this post. I won't spend too long on this part, but you can find the full code on here. End of explanation all_league_goals = pd.read_json( "https://raw.githubusercontent.com/dashee87/blogScripts/master/files/all_league_goals.json") # reorder the columns to it a bit more logical all_league_goals = all_league_goals[['country', 'league', 'date', 'HomeTeam', 'AwayTeam', 'FTHG', 'FTAG', 'awarded']] all_league_goals.head() Explanation: You don't actually need to run your own spider, as I've shared the output to my GitHub account. We can import the json file in directly using pandas. End of explanation # little bit of data cleansing to remove fixtures that were abandoned/awarded/postponed all_league_goals = all_league_goals[~all_league_goals['awarded']] all_league_goals = all_league_goals[all_league_goals['FTAG']!='POSTP.'] all_league_goals = all_league_goals[all_league_goals['FTAG']!='CAN.'] all_league_goals[['FTAG', 'FTHG']] = all_league_goals[['FTAG', 'FTHG']].astype(int) Explanation: Hopefully, that's all relatively clear. You'll notice that it's very similar to the format used by football-data, which means that we can feed this dataframe into the get_home_team_advantage function. Sometimes, matches are awarded due to one team fielding an ineligible player or crowd trouble. We should probably exclude such matches from the home field advantage calculations. End of explanation home_advantage_country = pd.DataFrame(all_league_goals.assign(match_goals = all_league_goals['FTHG'] + all_league_goals['FTHG']).groupby(['country','league']).agg( {'HomeTeam':['size','nunique'], 'match_goals':'mean'}).to_records()) home_advantage_country.columns = ['country', 'league', 'num_games', 'num_teams', 'avg_goals'] temp_set = [] for i in range(home_advantage_country.shape[0]): temp_set.append(get_home_team_advantage(all_league_goals[( all_league_goals['country']==home_advantage_country['country'][i]) & ( all_league_goals['league']==home_advantage_country['league'][i])])) temp_set = pd.DataFrame(temp_set,columns= ['home_advantage_score', 'left_tail', 'right_tail']) home_advantage_country = pd.concat([home_advantage_country, temp_set], axis=1).sort_values('home_advantage_score', ascending=False).reset_index(drop=True) home_advantage_country.index = home_advantage_country.index + 1 # if you want display more/less rows than the default option pd.options.display.max_rows = 40 home_advantage_country.assign(avg_goals= pd.Series.round(home_advantage_country['avg_goals'], 3), home_advantage_score= pd.Series.round(home_advantage_country['home_advantage_score'], 3), left_tail= pd.Series.round(home_advantage_country['left_tail'], 3), right_tail= pd.Series.round(home_advantage_country['right_tail'], 3)) Explanation: We're ready to put it all together. 
I'll omit the code (though it can be found here), but we'll loop through each country and league combination (just in case you decide to include multiple leagues from the same country) and calculate the home advantage score, plus its confidence limits as well as some other information for each league (number of teams, average number of goals in each match). I've converted the pandas output to a datatables table that you can interactively filter and sort. End of explanation home_advantage_country.assign(avg_goals= pd.Series.round(home_advantage_country['avg_goals'], 3), home_advantage_score= pd.Series.round(home_advantage_country['home_advantage_score'], 3), left_tail= pd.Series.round(home_advantage_country['left_tail'], 3), right_tail= pd.Series.round(home_advantage_country['right_tail'], 3)).merge( pd.read_csv("https://raw.githubusercontent.com/dashee87/blogScripts/master/files/league_links.csv", usecols = ['country', 'countryCode'], encoding='latin-1'), on="country", how="inner").reset_index().rename(columns={"index": "home_adv_rank"}).sort_values( 'avg_goals', ascending=False).reset_index(drop=True).reset_index().rename(columns={"index": "avg_goals_rank"}).to_csv( "home_advantage.csv", encoding='utf-8', index=False) Explanation: Focusing on the home_advantage_score column, teams in Nigeria by far enjoy the greatest benefit from playing at home (score = 1.195). In other words, home teams scored 3.3 (= $e^{1.195}$) times more goals than their opponents. This isn't new information and can be attributed to a combination of corruption (e.g. bribing referees) and violent fans. In fact, my motivation for this post was to identify more football corruption hotspots. Alas, when it comes to home turf invincibility, it seems Nigeria are the World Cup winners. Fifteen leagues have a negative home_advantage_score, meaning that visiting teams actually scored more goals than their hosts- though none was statistically significant. By some distance, the Maldives records the most negative score. Luckily, I've twice researched this beautiful archipelago and I'm aware that all matches in the Dhiveli Premier League are played at the national stadium in Malรฉ (much like the Gibraltar Premier League). So it would make sense that there's no particular advantage gained by the home team. Libya is another interesting example. Owing to security issues, all matches in the Libyan Premier League are played in neutral venues with no spectators present. Quite fittingly, it returned a home advantage score just off zero. Generally speaking, the leagues with near zero home advantage come from small countries (minimal inconvenience for travelling teams) with a small number of teams and they tend to share stadiums. If you sort the avg_goals column, you'll see that Bolivia is the place to be for goals (average = 4.47). But rather than sifting through that table or explaining the results with words, the most intuitive way to illustrate this type of data is with a map of world. This might also help to clarify whether there's any geographical influence on the home advantage effect. Again, I won't go into the details (an appendix can be found in the Jupyter notebook), but I built a map using the JavaScript library, D3. And by built I mean I adapted the code from this post and this post. Though a little outdated now, I found this post quite useful too. Finally, I think this post shows off quite well what you can do with maps using D3. And here it is! The country colour represents its home_advantage_score. 
You can zoom in and out and hover over a country to reveal a nice informative overlay; use the radio buttons to switch between home advantage and goals scored. I recommend viewing it on desktop (mobile's a bit jumpy) and on Chrome (sometimes have security issues with Firefox). It's not scientifically rigorous (not in academia any more, baby!), but there's evidence for some geographical trends. For example, it appears that home advantage is stronger in Africa and South America compared to Western and Central Europe, with the unstable warzones of Libya, Somalia and Paraguay (?) being notable exceptions. As for average goals, Europe boasts stronger colours compared to Africa, though South East Asia seems to be the global hotspot for goals. North America is also quite dark, but you can debate whether Canada should be coloured grey, as the best Canadian teams belong to the American soccer system.
Conclusion
Using a previously described model and some JavaScript, this post explored the so-called home advantage in football leagues all over the world (including Réunion). I don't think it uncovered anything particularly amazing: different leagues have different properties and don't bet on the away team in the Nigerian league. You can play around with the Python code here. Thanks for reading!
Appendix
This section is intended more than anything else as a reminder to myself if I ever want to build a map with D3 again. We need to write the country league data to a csv that will then be loaded into the d3 map. To improve readability, the values are rounded to 3 decimal places. The country outlines in the map (see below) will be coloured according to their average goals or home advantage score. Rather than matching on country name (which can be fickle: is it Democratic Republic of Congo or DR Congo or even Congo-Kinshasa?), we'll append a column for the country ISO 3166-1 alpha-3 code. I'd like to say I scraped some page here, but it was mostly a manual job. After creating some new columns for the column ranking, the file is written to the local directory (alternatively, you can view it here). 
End of explanation
import json
all_nations = json.load(open('all.geojson'))
# command line: ogr2ogr -f GEOJson -where "ADM0_A3 IN ('GBR', 'FRA','NLD', 'USA')" \
#               football_subdivisions.json ne_10m_admin_0_map_units.shp
subdivisions = json.load(open('football_subdivisions.json'))

all_nations_tidy = {}
all_nations_tidy['type'] = all_nations['type']
all_nations_tidy['crs'] = all_nations['crs']
all_nations_tidy['features'] = []

for country_features in all_nations['features']:
    # skip UK, France, US, Netherlands and Antarctica, as well as minor islands and
    # territories with estimated populations below 1000 (but keep Western Sahara)
    skip_codes = ['GBR', 'FRA', 'USA', 'NLD', 'ATA']
    too_small = country_features['properties']['POP_EST'] < 1000
    if ((country_features['properties']['ADM0_A3'] in skip_codes or too_small) and
            country_features['properties']['NAME_LONG'] != 'Western Sahara'):
        continue
    all_nations_tidy['features'].append(
        {'properties': {'country': country_features['properties']['NAME_LONG'],
                        'countryCode': country_features['properties']['ADM0_A3']},
         'geometry': country_features['geometry']})

for subdiv_features in subdivisions['features']:
    all_nations_tidy['features'].append(
        {'properties': {'country': subdiv_features['properties']['NAME_LONG'],
                        'countryCode': subdiv_features['properties']['BRK_A3']},
         'geometry': subdiv_features['geometry']})

with open('countries.json', 'w') as outfile:
    json.dump(all_nations_tidy, outfile)
Explanation: Shapefiles and TopoJSON
Generating the world map required a little bit of command line, python and a whole lot of JavaScript (specifically D3). The command line was used to convert the shapefiles into geojson files (see ogr2ogr) and finally into the topojson format. The main reason for the last step is that it drastically reduces the file size, which should improve its onsite loading (though it could also affect the quality of the map). My particular map was complicated by the fact that some sovereign states are composed of several countries that organise their own national competitions. If that sounds weird, think of the United Kingdom. It's a member of the UN and a sovereign state in its own right (despite what Brexiteers may say). But there's no UK (or British) Premier League; there's the English/Welsh/Scottish/Northern Irish Premier League/Premiership. Similarly, Reunion is part of France but has its own football league. Then again, the Basque country is recognised as a nation within Spain, but has no internationally recognised national league. In summary, it's complicated. Political realities aside, we need to get the geojson file for all of the countries in the world (see all.geojson available here). We must remove the United Kingdom, France and a few others. To reduce the file size, I also removed some country information that wasn't relevant for my purposes (population, GDP, etc.). The geojson files for England, Scotland, Reunion, etc. were a little harder to track down. The shapefile containing those country subdivisions can be downloaded here, which can be converted into geojson files with ogr2ogr. Unfortunately, that file contains various subdivisions that don't correspond to actual football leagues (e.g. Belgium is split into the Flemish and Walloon regions). That means we need to append the subdivisions we do want to the higher-level geojson file, which I did by manipulating the two json files in Python.
End of explanation
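As a final sanity check (my own addition, not part of the original post), it's worth confirming that every countryCode written to home_advantage.csv has a matching feature in countries.json, since any mismatch makes the D3 join silently drop that league from the map. The sketch below assumes both files sit in the working directory and use the column and property names from the code above.
import json
import pandas as pd

scores = pd.read_csv("home_advantage.csv", encoding='utf-8')
with open('countries.json') as f:
    geo = json.load(f)

# country codes present in the map but checked against the league table
geo_codes = {feat['properties']['countryCode'] for feat in geo['features']}
missing = sorted(set(scores['countryCode']) - geo_codes)
print("League country codes with no matching map feature:", missing if missing else "none")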
9,210
Given the following text description, write Python code to implement the functionality described below step by step Description: Graph of iDigBio Specimens over Time This notebook introduces the basics of loading and analyzing iDigBio data on the GUODA infrastructure hosted by the ACIS Lab and iDigBio. This service is documented in the GUODA Jupyter service wiki on Github. As an example, we will create a graph that shows how many specimens in iDigBio were collected during each the past 200 years. The data for this graph is a random sample of 100,000 records from an export of the iDigBio. The field used to determine the year is the interpreted datecollected field that iDigBio populates based on Darwin Core terms like dwc Step1: Loading the data set Data in GUODA is stored on a clustered file system called HDFS. The Jupyter notebooks are all configured to read and write to HDFS automatically so all file paths are in the HDFS system. You can read more about working with files and how to see what data sets are availible on the Jupyter service wiki. This line will load the contents of the file that contains a 100,000 record sub-set of iDigBio into a Spark data frame. Then we can look at how many records are in the data frame to confirm that we are working with the 100k subset. Step2: Examining the data Now that the data is in memory, let's look at some of the methods availible to examine it before we move on to summarizing it. This will let you see how data is represented both in Spark and Python as well as what kind of data is availible in the iDigBio data frames. Data frame structure First we can look at the columns in the data frame. This is all of iDigBio so there are a lot of them. Also printed by Python is the data type for each column and if a column contains a nested structure (like the "data" structure which has the raw data originally sent to iDigBio) then it is indented. Step3: Next we can look at the first row of data. The (1) after head tells Python how many rows to print. Since this is all iDigBio data, the rows are pretty big so we'll only show one. Step4: Summarizing the data That's certainly more data than we need to make the graph. Since there is one row in the data frame for each specimen record, what we need to do is group the records by the year they were collected and then count the number of records in each group and associate that with the year. The data frame we want to have as a result should have two columns, one for year and one for the count of the records collected in that year. This is a common chain of operations often refered to as select, group by, and count which comes from the SQL syntax for doing this operation. Working with the year complicated by the fact that iDigBio has a datecollected field and not a yearcollected field. While we are often provided a year in the raw data, we assemble and convert all the date information from the Darwin Core fields into a date-type object and store that as datecollected. Because this object is a date type we can sort it and search for ranges. (Consider what would happen if we tried that with raw data strings like "2004-01-14" and "March 15, 2015".) We need to extract the year part of datecollected and we need to convert it to a number so we can sort on it. Step5: Let's take a look at this new data frame using some of the commands from above Step6: Now that our data is both much smaller and mostly numeric, we can use the describe() method to quickly make summary statistics. 
This method returns a data frame so we have to use show() to actually print the whole contents of the data frame. Step7: Spark data frames, Pandas data frames, and filtering The term "data frame" is a concept for how data is arranged. Different programming languages and even libraries in a single programming language have different implimentations of this idea. We have been working with a Spark data frame. Now we want to do some graphing and the Python graphing libraries know how to work with a Pandas data frame. Fortunately this is such a common conversion that there is a built-in method to do it. One thing to be aware of is that Pandas data frames are not stored on our computation cluster like the Spark data frames are. This means they need to be small and you should not do too much computation on them. Our year_summary data frame is only 2 columns and about 220 rows this isn't a problem. While converting to a Pandas data frame, we will also reduce the years to the range 1817 - 2017. From the output of describe() we could see that there were some years that didn't make sense. Step8: (Notice that the display of the first rows looks different from when we ran head() on the Spark data frame? That's because we're looking at the display generated by the Pandas library instead of the Spark library.) Making a graph The number of specimens collected in a year is discrete data so a bar graph is one appropriate way to display them.
Python Code: # The Python Spark (pyspark) libraries include functions designed to be run on columns of data # stored in Spark data frames. They need to be imported in order to use them. Here we # are going to use from pyspark.sql.functions import year # The matplotlib package is used for graphing. The next line tells Jupyter that when a # graphing function is used, it should draw the graph here inline in the notebook. import matplotlib.pyplot as plt %matplotlib inline Explanation: Graph of iDigBio Specimens over Time This notebook introduces the basics of loading and analyzing iDigBio data on the GUODA infrastructure hosted by the ACIS Lab and iDigBio. This service is documented in the GUODA Jupyter service wiki on Github. As an example, we will create a graph that shows how many specimens in iDigBio were collected during each the past 200 years. The data for this graph is a random sample of 100,000 records from an export of the iDigBio. The field used to determine the year is the interpreted datecollected field that iDigBio populates based on Darwin Core terms like dwc:year and dwc:eventDate. If you are interested in the capabilities of GUODA, you may want to scroll to the end of this notebook to view the final graph. If you are interested in doing this work yourself, please keep reading. Set up In this document, narrative describing what the code is intended to do and observations about the results is written in Markdown cells. Markdown is a simple wiki-style language that can be used for formating text. Comments about the actual code are written inside the code cells themselves and prefixed with "#" so the are not run. The code for this document is written in Python and uses the Apache Spark data analytics framework. The code written in this Jupyter notebook is actually run on servers located at the ACIS lab. All of the needed libraries and Spark configuration are already done and there is nothing to install. It is customary to import libraries used and set configuration options at the top of scripts. In the next cell, we import and set up the Python packages needed by this notebook. End of explanation df = sqlContext.read.load("/guoda/data/idigbio-20190612T171757.parquet") df.count() Explanation: Loading the data set Data in GUODA is stored on a clustered file system called HDFS. The Jupyter notebooks are all configured to read and write to HDFS automatically so all file paths are in the HDFS system. You can read more about working with files and how to see what data sets are availible on the Jupyter service wiki. This line will load the contents of the file that contains a 100,000 record sub-set of iDigBio into a Spark data frame. Then we can look at how many records are in the data frame to confirm that we are working with the 100k subset. End of explanation df.printSchema() Explanation: Examining the data Now that the data is in memory, let's look at some of the methods availible to examine it before we move on to summarizing it. This will let you see how data is represented both in Spark and Python as well as what kind of data is availible in the iDigBio data frames. Data frame structure First we can look at the columns in the data frame. This is all of iDigBio so there are a lot of them. Also printed by Python is the data type for each column and if a column contains a nested structure (like the "data" structure which has the raw data originally sent to iDigBio) then it is indented. End of explanation df.head(1) Explanation: Next we can look at the first row of data. 
The (1) after head tells Python how many rows to print. Since this is all iDigBio data, the rows are pretty big so we'll only show one. End of explanation # The outer "(" and ")" surround the chain of Python method calls to allow them to # span lines. This is a common convention and makes the data processing pipeline # easy to read and modify. # # The persist() function tells Spark to store the data frame in memory so it can be # accessed repeatedly without having to be reloaded. year_summary = (df .groupBy(year("datecollected").cast("integer").alias("yearcollected")) .count() .orderBy("yearcollected") .persist() ) Explanation: Summarizing the data That's certainly more data than we need to make the graph. Since there is one row in the data frame for each specimen record, what we need to do is group the records by the year they were collected and then count the number of records in each group and associate that with the year. The data frame we want to have as a result should have two columns, one for year and one for the count of the records collected in that year. This is a common chain of operations often refered to as select, group by, and count which comes from the SQL syntax for doing this operation. Working with the year complicated by the fact that iDigBio has a datecollected field and not a yearcollected field. While we are often provided a year in the raw data, we assemble and convert all the date information from the Darwin Core fields into a date-type object and store that as datecollected. Because this object is a date type we can sort it and search for ranges. (Consider what would happen if we tried that with raw data strings like "2004-01-14" and "March 15, 2015".) We need to extract the year part of datecollected and we need to convert it to a number so we can sort on it. End of explanation year_summary.count() year_summary.printSchema() year_summary.head(10) Explanation: Let's take a look at this new data frame using some of the commands from above: End of explanation year_summary.describe().show() Explanation: Now that our data is both much smaller and mostly numeric, we can use the describe() method to quickly make summary statistics. This method returns a data frame so we have to use show() to actually print the whole contents of the data frame. End of explanation pandas_year_summary = (year_summary .filter(year_summary.yearcollected >= 1817) .filter(year_summary.yearcollected <= 2017) .orderBy("yearcollected") .toPandas() ) pandas_year_summary.head() Explanation: Spark data frames, Pandas data frames, and filtering The term "data frame" is a concept for how data is arranged. Different programming languages and even libraries in a single programming language have different implimentations of this idea. We have been working with a Spark data frame. Now we want to do some graphing and the Python graphing libraries know how to work with a Pandas data frame. Fortunately this is such a common conversion that there is a built-in method to do it. One thing to be aware of is that Pandas data frames are not stored on our computation cluster like the Spark data frames are. This means they need to be small and you should not do too much computation on them. Our year_summary data frame is only 2 columns and about 220 rows this isn't a problem. While converting to a Pandas data frame, we will also reduce the years to the range 1817 - 2017. From the output of describe() we could see that there were some years that didn't make sense. 
End of explanation plt.bar(pandas_year_summary["yearcollected"], pandas_year_summary["count"], edgecolor='none', width=1.0 ) plt.title("Specimens in iDigBio by Collection Year and Continent") plt.ylabel("Number of Specimen Records") plt.xlabel("Year") Explanation: (Notice that the display of the first rows looks different from when we ran head() on the Spark data frame? That's because we're looking at the display generated by the Pandas library instead of the Spark library.) Making a graph The number of specimens collected in a year is discrete data so a bar graph is one appropriate way to display them. End of explanation
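One optional extension that isn't in the original notebook: because yearly specimen counts can be quite spiky, it can help to overlay a rolling average so the long-term collecting trend stands out. The sketch below assumes the pandas_year_summary Pandas data frame built above, with its yearcollected and count columns.
import matplotlib.pyplot as plt

# 10-year centered rolling mean of the yearly record counts
smoothed = pandas_year_summary["count"].rolling(window=10, center=True).mean()

plt.bar(pandas_year_summary["yearcollected"], pandas_year_summary["count"],
        edgecolor='none', width=1.0, label="Records per year")
plt.plot(pandas_year_summary["yearcollected"], smoothed,
         color='black', linewidth=2, label="10-year rolling mean")
plt.legend()
plt.xlabel("Year")
plt.ylabel("Number of Specimen Records")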
9,211
Given the following text description, write Python code to implement the functionality described.
Description: Count three
Function to count three-digit numbers having difference x with its reverse ; If x is not a multiple of 99 ; No solution exists ; Generate all possible pairs of digits [ 1 , 9 ] ; If any pair is obtained with difference x / 99 ; Increase count ; Return the count ; Driver Code
Python Code:
def Count_Number(x):
    # Count three-digit numbers having difference x with their reverse
    ans = 0
    if x % 99 != 0:
        # If x is not a multiple of 99, no solution exists
        ans = -1
    else:
        diff = x // 99
        # Generate all possible pairs of first/last digits in [1, 9]
        for i in range(1, 10):
            for j in range(1, 10):
                # If a pair has difference x / 99, the middle digit is free,
                # so this pair contributes 10 numbers
                if (i - j) == diff:
                    ans += 10
    return ans

# Driver Code
if __name__ == '__main__':
    x = 792
    print(Count_Number(x))
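A quick cross-check (added here, not part of the original solution): brute force over all three-digit numbers and compare with the formula-based count. The brute force keeps the trailing digit non-zero so the reverse is also a three-digit number, matching the digit range [1, 9] used in the pair generation above.
def brute_force_count(x):
    # Count three-digit numbers n (with three-digit reverse) where n - reverse(n) == x
    total = 0
    for n in range(100, 1000):
        if n % 10 != 0 and n - int(str(n)[::-1]) == x:
            total += 1
    return total

if __name__ == '__main__':
    x = 792
    print(Count_Number(x), brute_force_count(x))  # both give 10 for x = 792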
9,212
Given the following text description, write Python code to implement the functionality described below step by step Description: Work through the reduction of a single dataset Step1: Setup files Copy files from Dropbox to local, working folder cd 'working_folder' # Darks, if needed cp -rp ~/Dropbox/COS-LRG/darksall . # Calibs, as needed cp -rp ~/Dropbox/COS-LRG/calibfilesmast . # Subset of the raw and object calibration files mkdir LCYA01010 cd LCYA01010 cp -rp ~/Dropbox/COS-LRG/LCYA01010/*rawtag* . cp -rp ~/Dropbox/COS-LRG/LCYA01010/lcya01010_asn.fits . cp -rp ~/Dropbox/COS-LRG/LCYA01010/lcya01010_trl.fits . cp -rp ~/Dropbox/COS-LRG/LCYA01010/lcya01010_j*.fits . Step2: Customize for calcos Science frames LP2 calib file Science frames Step3: Calibs -- May not need to repeat this (i.e. if done before) Step4: Run calcos Launch pyraf cd ureka xgterm (or xterm) export lref=/home/marijana/ReductionCOS/calibfilesmast/ # Working in xgterm window ur_setup mkiraf #? pyraf Execute cd folder_with_data (e.g. LCYA01010) stsdas hst_calib hstcos # calcos [dataset prefix]_asn.fits (e.g. calcos lcya01010_asn.fits) Crashing and rerunning If you have to rerun, remove the corrtag and .tra files Coadd corrtag files Step5: Traces Step6: Ignoring Sun and Limb Get HVLEVELs Step7: Change PHA Examine PHA values near the trace of a single exposure (OPTIONAL) Step8: Edit Step9: Modify PHACORR Step10: Clean for CALCOS Step11: Run Calcos as above -- with PHA restricted Coadd new PHA frames Step12: Reduce Darks Find Darks Step13: Run calcos In new sub-folder for darks Use the script Clean unnecessary (large) files Step14: Measure Background Set background region Set chk=True to show plots Iterate as needed Step15: Loop on Exposures without PHA Will generate background spectra, one per exposure per segment Below we only do FUVA Step16: Coadd Step17: Bin to 2 pixels
Python Code: # imports import os import glob import pdb #from imp import reload #from importlib import reload from astropy.io import fits from cosredux import utils as cr_utils from cosredux import trace as cr_trace from cosredux import darks as cr_darks from cosredux import io as cr_io from cosredux import science as cr_science # python setup.py develop Explanation: Work through the reduction of a single dataset End of explanation rdx_path = '/Users/xavier/HST/COS/LRG_Redux/' #rdx_path = '/home/marijana/ReductionCOS/' science_folder = 'LCYA01010/' dark_folder = 'darksall/' #calib_folder = 'calibfilesmast/' calib_folder = 'calibs/' root_out = 'lcya01010' # Default values defaults = {} defaults['pha_mnx'] = (2,15) defaults['apert'] = 25. defaults['ndays'] = 90. Explanation: Setup files Copy files from Dropbox to local, working folder cd 'working_folder' # Darks, if needed cp -rp ~/Dropbox/COS-LRG/darksall . # Calibs, as needed cp -rp ~/Dropbox/COS-LRG/calibfilesmast . # Subset of the raw and object calibration files mkdir LCYA01010 cd LCYA01010 cp -rp ~/Dropbox/COS-LRG/LCYA01010/*rawtag* . cp -rp ~/Dropbox/COS-LRG/LCYA01010/lcya01010_asn.fits . cp -rp ~/Dropbox/COS-LRG/LCYA01010/lcya01010_trl.fits . cp -rp ~/Dropbox/COS-LRG/LCYA01010/lcya01010_j*.fits . End of explanation cr_utils.modify_rawtag_for_calcos(rdx_path+science_folder) Explanation: Customize for calcos Science frames LP2 calib file Science frames End of explanation cr_utils.modify_LP2_1dx_calib(rdx_path+calib_folder) Explanation: Calibs -- May not need to repeat this (i.e. if done before) End of explanation corrtag_files_a = glob.glob(rdx_path+science_folder + '*_corrtag_a.fits') corrtag_files_b = glob.glob(rdx_path+science_folder + '*_corrtag_b.fits') corrtag_files_a _ = cr_utils.coadd_bintables(corrtag_files_a, outfile=rdx_path+science_folder+root_out+'_coaddcorr_woPHA_a.fits') _ = cr_utils.coadd_bintables(corrtag_files_b, outfile=rdx_path+science_folder+root_out+'_coaddcorr_woPHA_b.fits') Explanation: Run calcos Launch pyraf cd ureka xgterm (or xterm) export lref=/home/marijana/ReductionCOS/calibfilesmast/ # Working in xgterm window ur_setup mkiraf #? pyraf Execute cd folder_with_data (e.g. LCYA01010) stsdas hst_calib hstcos # calcos [dataset prefix]_asn.fits (e.g. 
calcos lcya01010_asn.fits) Crashing and rerunning If you have to rerun, remove the corrtag and .tra files Coadd corrtag files End of explanation reload(cr_trace) traces_a=cr_trace.traces(rdx_path+science_folder+root_out+'_coaddcorr_woPHA_a.fits', rdx_path+calib_folder, 'FUVA', clobber=True) traces_b=cr_trace.traces(rdx_path+science_folder+root_out+'_coaddcorr_woPHA_b.fits', rdx_path+calib_folder, 'FUVB', clobber=True) Explanation: Traces End of explanation reload(cr_utils) hva_a, hvb_a = cr_utils.get_hvlevels(corrtag_files_a) hva_b, hvb_b = cr_utils.get_hvlevels(corrtag_files_b) Explanation: Ignoring Sun and Limb Get HVLEVELs End of explanation reload(cr_science) ex_region = cr_science.set_extraction_region(traces_a[0], 'FUVA', corrtag_files_a[0], check=True) reload(cr_darks) pha_values_a, _, _ = cr_darks.get_pha_values_science(ex_region, corrtag_files_a[0], background=False) from xastropy.xutils import xdebug as xdb xdb.xhist(pha_values_a) Explanation: Change PHA Examine PHA values near the trace of a single exposure (OPTIONAL) End of explanation # Reset pha_mnx above if you wish reload(cr_utils) cr_utils.change_pha(rdx_path+calib_folder, low=defaults['pha_mnx'][0], up=defaults['pha_mnx'][1]) Explanation: Edit End of explanation reload(cr_utils) cr_utils.modify_phacorr(rdx_path+science_folder) Explanation: Modify PHACORR End of explanation reload(cr_utils) cr_utils.clean_for_calcos_phafiltering(rdx_path+science_folder) Explanation: Clean for CALCOS End of explanation corrtag_files_a = glob.glob(rdx_path+science_folder + '*_corrtag_a.fits') corrtag_files_b = glob.glob(rdx_path+science_folder + '*_corrtag_b.fits') corrtag_files_a _ = cr_utils.coadd_bintables(corrtag_files_a, outfile=rdx_path+science_folder+root_out+'_coaddcorr_withPHA_a.fits') _ = cr_utils.coadd_bintables(corrtag_files_b, outfile=rdx_path+science_folder+root_out+'_coaddcorr_withPHA_b.fits') Explanation: Run Calcos as above -- with PHA restricted Coadd new PHA frames End of explanation reload(cr_darks) subf_a = cr_darks.setup_for_calcos(rdx_path+dark_folder, corrtag_files_a[0], 'FUVA') Explanation: Reduce Darks Find Darks End of explanation reload(cr_darks) cr_darks.clean_after_calcos(rdx_path+science_folder+subf_a) Explanation: Run calcos In new sub-folder for darks Use the script Clean unnecessary (large) files End of explanation # Read traces (if needed) traces_a = cr_io.read_traces(rdx_path+science_folder+root_out+'_coaddcorr_woPHA_a.fits') traces_b = cr_io.read_traces(rdx_path+science_folder+root_out+'_coaddcorr_woPHA_b.fits') reload(cr_darks) chk = True bg_region_a = cr_darks.set_background_region(traces_a[0], 'FUVA', rdx_path+science_folder+root_out+'_coaddcorr_woPHA_a.fits', check=chk) bg_region_b = cr_darks.set_background_region(traces_b[0], 'FUVB', rdx_path+science_folder+root_out+'_coaddcorr_woPHA_b.fits', check=chk) bg_region_a, bg_region_b Explanation: Measure Background Set background region Set chk=True to show plots Iterate as needed End of explanation corrtag_woPHA_a = glob.glob(rdx_path+science_folder + '*_corrtag_woPHA_a.fits') corrtag_woPHA_a reload(cr_darks) reload(cr_utils) cr_darks.dark_to_exposures(corrtag_woPHA_a, bg_region_a, traces_a[0], 'FUVA', defaults) Explanation: Loop on Exposures without PHA Will generate background spectra, one per exposure per segment Below we only do FUVA End of explanation x1d_files = glob.glob(rdx_path+science_folder + '*_x1d.fits') reload(cr_science) cr_science.coadd_exposures(x1d_files, 'FUVA', rdx_path+science_folder+'LCYA01010_coadd.fits') Explanation: Coadd End of 
explanation reload(cr_science) cr_science.coadd_exposures(x1d_files, 'FUVA', rdx_path+science_folder+'LCYA01010_coadd_bin2.fits', bin=2) Explanation: Bin to 2 pixels End of explanation
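To make the bin=2 step above a little more concrete, here is a rough stand-alone illustration of what rebinning a 1D spectrum by an integer factor means. It is only a conceptual sketch with fake data; the actual cosredux routine may combine pixels differently (e.g. weighting by exposure time or variance), so treat it as an assumption rather than the real implementation.
import numpy as np

def rebin_spectrum(wave, flux, binsize=2):
    # Drop any leftover pixels, then average wavelength and flux within each bin
    n = (len(wave) // binsize) * binsize
    wave_b = wave[:n].reshape(-1, binsize).mean(axis=1)
    flux_b = flux[:n].reshape(-1, binsize).mean(axis=1)
    return wave_b, flux_b

wave = np.linspace(1130., 1430., 16384)            # fake FUV wavelength grid
flux = np.random.normal(1., 0.1, size=wave.size)   # fake flux array
wave2, flux2 = rebin_spectrum(wave, flux, binsize=2)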
9,213
Given the following text description, write Python code to implement the functionality described below step by step Description: An Introduction to pandas Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library every, and that is only kind of true. The important thing is use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along. Now let's start coding. Hopefully you did pip install pandas before you started up this notebook. Step1: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd. You don't have to, but every other person on the planet will be doing it, so you might as well. Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete. Step2: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day. Selecting rows Now let's look at our data, since that's what data is for Step3: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling? Step4: ...but maybe we want to see more than a measly five results? Step5: But maybe we want to make a basketball joke and see the final four? Step6: So yes, head and tail work kind of like the terminal commands. That's nice, I guess. But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too. Step7: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end. Selecting columns But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age. Step8: NOTE Step9: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details. But now I'm curious about numbers Step10: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing. Step11: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some wa to manipulate our data. Manipulating data Oh wait there is, HA HA HA. Step12: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right? Step13: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer? Step14: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall. But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys! Sorting and sub-selecting Step15: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending. 
Step16: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them~ describe them! And we don't want to dunk on everyone, only the players above 7 feet tall. First, we need to check out boolean things. Step17: Drawing pictures Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay. Step18: matplotlib is a graphing library. It's the Python way to make graphs! Step19: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot. Step20: That might look better with a little more customization. So let's customize it. Step21: I want more graphics! Do tall people make more money?!?!
Python Code: # import pandas, but call it pd. Why? Because that's What People Do. Explanation: An Introduction to pandas Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library every, and that is only kind of true. The important thing is use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along. Now let's start coding. Hopefully you did pip install pandas before you started up this notebook. End of explanation # We're going to call this df, which means "data frame" # It isn't in UTF-8 (I saved it from my mac!) so we need to set the encoding Explanation: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd. You don't have to, but every other person on the planet will be doing it, so you might as well. Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete. End of explanation # Let's look at all of it Explanation: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day. Selecting rows Now let's look at our data, since that's what data is for End of explanation # Look at the first few rows Explanation: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling? End of explanation # Let's look at MORE of the first few rows Explanation: ...but maybe we want to see more than a measly five results? End of explanation # Let's look at the final few rows Explanation: But maybe we want to make a basketball joke and see the final four? End of explanation # Show the 6th through the 8th rows Explanation: So yes, head and tail work kind of like the terminal commands. That's nice, I guess. But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too. End of explanation # Get the names of the columns, just because # If we want to be "correct" we add .values on the end of it # Select only name and age # Combing that with .head() to see not-so-many rows # We can also do this all in one line, even though it starts looking ugly # (unlike the cute bears pandas looks ugly pretty often) Explanation: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end. Selecting columns But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age. End of explanation # Grab the POS column, and count the different values in it. Explanation: NOTE: That was not df['Name', 'Age'], it was df[['Name', 'Age]]. You'll definitely type it wrong all of the time. When things break with pandas it's probably because you forgot to put in a million brackets. Describing your data A powerful tool of pandas is being able to select a portion of your data, because who ordered all that data anyway. I want to know how many people are in each position. Luckily, pandas can tell me! 
End of explanation # Summary statistics for Age # That's pretty good. Does it work for everything? How about the money? Explanation: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details. But now I'm curious about numbers: how old is everyone? Maybe we could, I don't know, get some statistics about age? Some statistics to describe age? End of explanation # Doing more describing Explanation: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing. End of explanation # Take another look at our inches, but only the first few # Divide those inches by 12 # Let's divide ALL of them by 12 # Can we get statistics on those? # Let's look at our original data again Explanation: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some wa to manipulate our data. Manipulating data Oh wait there is, HA HA HA. End of explanation # Store a new column Explanation: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right? End of explanation # Can't just use .replace # Need to use this weird .str thing # Can't just immediately replace the , either # Need to use the .str thing before EVERY string method # Describe still doesn't work. # Let's convert it to an integer using .astype(int) before we describe it # Maybe we can just make them millions? # Unfortunately one is "n/a" which is going to break our code, so we can make n/a be 0 # Remove the .head() piece and save it back into the dataframe Explanation: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer? End of explanation # This is just the first few guys in the dataset. Can we order it? # Let's try to sort them Explanation: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall. But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys! Sorting and sub-selecting End of explanation # It isn't descending = True, unfortunately # We can use this to find the oldest guys in the league # Or the youngest, by taking out 'ascending=False' Explanation: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending. End of explanation # Get a big long list of True and False for every single row. # We could use value counts if we wanted # But we can also apply this to every single row to say whether YES we want it or NO we don't # Instead of putting column names inside of the brackets, we instead # put the True/False statements. It will only return the players above # seven feet tall # Or only the guards # Or only the guards who make more than 15 million # It might be easier to break down the booleans into separate variables # We can save this stuff # Maybe we can compare them to taller players? Explanation: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them~ describe them! And we don't want to dunk on everyone, only the players above 7 feet tall. First, we need to check out boolean things. End of explanation # This will scream we don't have matplotlib. 
Explanation: Drawing pictures Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay. End of explanation # this will open up a weird window that won't do anything # So instead you run this code Explanation: matplotlib is a graphing library. It's the Python way to make graphs! End of explanation # Import matplotlib # What's available? # Use ggplot # Make a histogram # Try some other styles Explanation: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot. End of explanation # Pass in all sorts of stuff! # Most from http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html # .range() is a matplotlib thing Explanation: That might look better with a little more customization. So let's customize it. End of explanation # How does experience relate with the amount of money they're making? # At least we can assume height and weight are related # At least we can assume height and weight are related # http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html # We can also use plt separately # It's SIMILAR but TOTALLY DIFFERENT Explanation: I want more graphics! Do tall people make more money?!?! End of explanation
9,214
Given the following text description, write Python code to implement the functionality described below step by step Description: Analyzing dendritic data with CaImAn This notebook shows an example on how to analyze two-photon dendritic data with CaImAn. It follows closely the other notebooks. Step1: Selecting the data We provide an example file from mouse barrel cortex, courtesy of Clay Lacefield and Randy Bruno, Columbia University. The download_demo command will automatically download the file and store it in your caiman_data folder the first time you run it. To use the demo in your own dataset you can set Step2: Viewing the data Prior to analysis you can view the data by setting display_movie = False. The example file provided here is small, with FOV size 128 x 128. To view it larger we set magnification = 4 in the play command. This will result to a movie of resolution 512 x 512. For larger files, you will want to reduce this magnification factor (for memory and display purposes). Step3: Parameter setting First we'll perform motion correction followed by CNMF. One of the main differences when analyzing dendritic data, is that dendritic segments are not localized and can traverse significant parts of the FOV. They can also be spatially non-contiguous. To capture this property, CaImAn has two methods for initializing CNMF Step4: Motion correction First we create a motion correction object and then perform the registration. The provided dataset has already been registered with a different method so we do not expect to see a lot of motion. Step5: Display registered movie next to the original Step6: Memory mapping Step7: Now run CNMF with the chosen initialization method Note that in the parameter setting we have set only_init = True so only the initialization will be run. Step8: Display components And compute the correlation image as a reference image Step9: Plot a montage array of all the components We can also plot a montage of all the identified spatial footprints. Step10: Run CNMF to sparsify the components The components found by the initialization have captured a lot of structure but they are not sparse. Using the refit command will sparsify them. Step11: Now plot the components again Step12: Component evaluation the components are evaluated in three ways Step13: Plot contours of selected and rejected components Step14: Extract DF/F values Step15: The resulting components are significantly more sparse and capture a lot of the data structure. Play a movie with the results The movie will show three panels with the original, denoised and residual movies respectively. The flag include_bck = False will remove the estimated background from the original and denoised panels. To include it set it to True. Step16: Save the object You can save the results of the analysis for future use and/or to load them on the GUI. Step17: Now try CNMF with patches As mentioned above, the file shown here is small enough to be processed at once. For larger files patches and/or downsampling can be used. We show how to use patches below. Note that unlike in somatic imaging, for dendritic imaging we want the patches to be large and have significant overlap so enough structure of the dendritic segments is captured in the overlapping region. This will help with merging the components. However, a too large patch size can result to memory issues due to large amount of data loaded concurrently onto memory. 
Step18: Change some parameters to enable patches To enable patches the rf parameter should be changed from None to some integer value. Moreover the amount of overlap controlled by stride_cnmf should also change. Finally, since the parameter K specifies components per patch it should be reduced compared to its value when operating on the whole FOV at once. It is important to experiment with different values for these parameters since results can sensitive, due to the non stereotypical shapes and activity patterns of dendrites. Step19: Refit to sparsify As before we can pass the results through the CNMF refit function that can yield more sparse components. Step20: Compare the two approaches To identify how similar components and the two approaches yield, we can compute the cross-correlation matrix between the spatial footprints derived from the two approaches. Step21: Stop the cluster when you're done
Python Code: import cv2 import glob import logging import matplotlib.pyplot as plt import numpy as np import os try: cv2.setNumThreads(0) except(): pass try: if __IPYTHON__: get_ipython().magic('load_ext autoreload') get_ipython().magic('autoreload 2') except NameError: pass import caiman as cm from caiman.motion_correction import MotionCorrect from caiman.source_extraction.cnmf import cnmf as cnmf from caiman.source_extraction.cnmf import params as params from caiman.utils.utils import download_demo from caiman.utils.visualization import nb_view_patches from skimage.util import montage import bokeh.plotting as bpl import holoviews as hv bpl.output_notebook() hv.notebook_extension('bokeh') Explanation: Analyzing dendritic data with CaImAn This notebook shows an example on how to analyze two-photon dendritic data with CaImAn. It follows closely the other notebooks. End of explanation fnames = [download_demo('data_dendritic.tif')] Explanation: Selecting the data We provide an example file from mouse barrel cortex, courtesy of Clay Lacefield and Randy Bruno, Columbia University. The download_demo command will automatically download the file and store it in your caiman_data folder the first time you run it. To use the demo in your own dataset you can set: fnames = [/path/to/file(s)]. End of explanation display_movie = False if display_movie: m_orig = cm.load_movie_chain(fnames) ds_ratio = 0.25 m_orig.resize(1, 1, ds_ratio).play( q_max=99.5, fr=30, magnification=4) Explanation: Viewing the data Prior to analysis you can view the data by setting display_movie = False. The example file provided here is small, with FOV size 128 x 128. To view it larger we set magnification = 4 in the play command. This will result to a movie of resolution 512 x 512. For larger files, you will want to reduce this magnification factor (for memory and display purposes). End of explanation # dataset dependent parameters fr = 30 # imaging rate in frames per second decay_time = 0.5 # motion correction parameters strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps) max_shifts = (6, 6) # maximum allowed rigid shifts (in pixels) max_deviation_rigid = 3 # maximum shifts deviation allowed for patch with respect to rigid shifts pw_rigid = True # flag for performing non-rigid motion correction # parameters for source extraction and deconvolution p = 0 # order of the autoregressive system gnb = 2 # number of global background components merge_thr = 0.75 # merging threshold, max correlation allowed rf = None # half-size of the patches in pixels. 
e.g., if rf=25, patches are 50x50 stride_cnmf = None # amount of overlap between the patches in pixels K = 60 # number of components per patch method_init = 'graph_nmf' # initialization method (if analyzing dendritic data using 'sparse_nmf') ssub = 1 # spatial subsampling during initialization tsub = 1 # temporal subsampling during intialization opts_dict = {'fnames': fnames, 'fr': fr, 'decay_time': decay_time, 'strides': strides, 'overlaps': overlaps, 'max_shifts': max_shifts, 'max_deviation_rigid': max_deviation_rigid, 'pw_rigid': pw_rigid, 'p': p, 'nb': gnb, 'rf': rf, 'K': K, 'stride': stride_cnmf, 'method_init': method_init, 'rolling_sum': True, 'only_init': True, 'ssub': ssub, 'tsub': tsub, 'merge_thr': merge_thr} opts = params.CNMFParams(params_dict=opts_dict) #%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened) if 'dview' in locals(): cm.stop_server(dview=dview) c, dview, n_processes = cm.cluster.setup_cluster( backend='local', n_processes=None, single_thread=False) Explanation: Parameter setting First we'll perform motion correction followed by CNMF. One of the main differences when analyzing dendritic data, is that dendritic segments are not localized and can traverse significant parts of the FOV. They can also be spatially non-contiguous. To capture this property, CaImAn has two methods for initializing CNMF: sparse_nmf where a sparse non-negative matrix factorization is deployed, and graph-nmf, where the Laplacian of the pixel affinity matrix is used as a regularizer to promote spatial components that capture pixels that have similar timecourses. These can be selected with the parameter method_init. In this demo we use the graph_nmf initialization method although sparse_nmf can also be used. Since the dataset is small we can run CNMF without splitting the FOV in patches. This can be helpful because of the non-localized nature of the dendritic segments. To do that we set rf = None. Using patches is also supported as demonstrated below. Also note, that spatial and temporal downsampling can also be used to reduce the data dimensionality by appropriately modifying the parameters ssub and tsub. Here they are set to 1 since it is not necessary. The graph NMF method was originally presented in the paper: Cai, D., He, X., Han, J., & Huang, T. S. (2010). Graph regularized nonnegative matrix factorization for data representation. IEEE transactions on pattern analysis and machine intelligence, 33(8), 1548-1560. End of explanation mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion')) %%capture #%% Run piecewise-rigid motion correction using NoRMCorre mc.motion_correct(save_movie=True) m_els = cm.load(mc.fname_tot_els) border_to_0 = 0 if mc.border_nan is 'copy' else mc.border_to_0 # maximum shift to be used for trimming against NaNs Explanation: Motion correction First we create a motion correction object and then perform the registration. The provided dataset has already been registered with a different method so we do not expect to see a lot of motion. 
End of explanation #%% compare with original movie display_movie = False if display_movie: m_orig = cm.load_movie_chain(fnames) ds_ratio = 0.2 cm.concatenate([m_orig.resize(1, 1, ds_ratio) - mc.min_mov*mc.nonneg_movie, m_els.resize(1, 1, ds_ratio)], axis=2).play(fr=60, q_max=99.5, magnification=4, offset=0) # press q to exit Explanation: Display registered movie next to the original End of explanation #%% MEMORY MAPPING # memory map the file in order 'C' fname_new = cm.save_memmap(mc.mmap_file, base_name='memmap_', order='C', border_to_0=border_to_0) # exclude borders # now load the file Yr, dims, T = cm.load_memmap(fname_new) images = np.reshape(Yr.T, [T] + list(dims), order='F') #load frames in python format (T x X x Y) #%% restart cluster to clean up memory cm.stop_server(dview=dview) c, dview, n_processes = cm.cluster.setup_cluster( backend='local', n_processes=None, single_thread=False) Explanation: Memory mapping End of explanation cnm = cnmf.CNMF(n_processes, params=opts, dview=dview) cnm = cnm.fit(images) Explanation: Now run CNMF with the chosen initialization method Note that in the parameter setting we have set only_init = True so only the initialization will be run. End of explanation CI = cm.local_correlations(images[::1].transpose(1,2,0)) CI[np.isnan(CI)] = 0 cnm.estimates.hv_view_components(img=CI) Explanation: Display components And compute the correlation image as a reference image End of explanation A = cnm.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1]) Nc = A.shape[0] grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int)) plt.figure(figsize=np.array(grid_shape[::-1])*1.5) plt.imshow(montage(A, rescale_intensity=True, grid_shape=grid_shape)) plt.title('Montage of found spatial components'); plt.axis('off'); Explanation: Plot a montage array of all the components We can also plot a montage of all the identified spatial footprints. End of explanation cnm2 = cnm.refit(images, dview=dview) Explanation: Run CNMF to sparsify the components The components found by the initialization have captured a lot of structure but they are not sparse. Using the refit command will sparsify them. 
End of explanation cnm2.estimates.hv_view_components(img=CI) A2 = cnm2.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1]) Nc = A2.shape[0] grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int)) plt.figure(figsize=np.array(grid_shape[::-1])*1.5) plt.imshow(montage(A2, rescale_intensity=True, grid_shape=grid_shape)) plt.title('Montage of found spatial components'); plt.axis('off'); Explanation: Now plot the components again End of explanation min_SNR = 4 # signal to noise ratio for accepting a component rval_thr = 0.85 # space correlation threshold for accepting a component cnn_thr = 0.99 # threshold for CNN based classifier cnn_lowest = 0.1 # neurons with cnn probability lower than this value are rejected cnm2.params.set('quality', {'decay_time': decay_time, 'min_SNR': min_SNR, 'rval_thr': rval_thr, 'use_cnn': False, # do not use cnn for dentritic data 'min_cnn_thr': cnn_thr, 'cnn_lowest': cnn_lowest}) cnm2.estimates.evaluate_components(images, cnm2.params, dview=dview) Explanation: Component evaluation the components are evaluated in three ways: a) the shape of each component must be correlated with the data b) a minimum peak SNR is required over the length of a transient c) each shape passes a CNN based classifier End of explanation # accepted components cnm2.estimates.hv_view_components(img=CI, idx=cnm2.estimates.idx_components) # rejected components cnm2.estimates.hv_view_components(img=CI, idx=cnm2.estimates.idx_components_bad) Explanation: Plot contours of selected and rejected components End of explanation cnm2.estimates.detrend_df_f(quantileMin=8, frames_window=250) Explanation: Extract DF/F values End of explanation cnm2.estimates.play_movie(images, q_max=99.9, magnification=4, include_bck=False) # press q to exit Explanation: The resulting components are significantly more sparse and capture a lot of the data structure. Play a movie with the results The movie will show three panels with the original, denoised and residual movies respectively. The flag include_bck = False will remove the estimated background from the original and denoised panels. To include it set it to True. End of explanation cnm2.estimates.Cn = CI # save the correlation image for displaying in the background cnm2.save('dendritic_analysis.hdf5') Explanation: Save the object You can save the results of the analysis for future use and/or to load them on the GUI. End of explanation #%% restart cluster to clean up memory cm.stop_server(dview=dview) c, dview, n_processes = cm.cluster.setup_cluster( backend='local', n_processes=None, single_thread=False) Explanation: Now try CNMF with patches As mentioned above, the file shown here is small enough to be processed at once. For larger files patches and/or downsampling can be used. We show how to use patches below. Note that unlike in somatic imaging, for dendritic imaging we want the patches to be large and have significant overlap so enough structure of the dendritic segments is captured in the overlapping region. This will help with merging the components. However, a too large patch size can result to memory issues due to large amount of data loaded concurrently onto memory. 
End of explanation rf = 48 # size of each patch is (2*rf, 2*rf) stride_cnmf = 16 Kp = 25 # reduce the number of component as it is now per patch opts.change_params({'rf': rf, 'stride': stride_cnmf, 'K': Kp}); cnm_patches = cnmf.CNMF(n_processes, params=opts, dview=dview) cnm_patches = cnm_patches.fit(images) cnm_patches.estimates.hv_view_components(img=CI) Ap = cnm_patches.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1]) Nc = Ap.shape[0] grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int)) plt.figure(figsize=np.array(grid_shape[::-1])*1.5) plt.imshow(montage(Ap, rescale_intensity=True, grid_shape=grid_shape)) plt.title('Montage of found spatial components'); plt.axis('off'); Explanation: Change some parameters to enable patches To enable patches the rf parameter should be changed from None to some integer value. Moreover the amount of overlap controlled by stride_cnmf should also change. Finally, since the parameter K specifies components per patch it should be reduced compared to its value when operating on the whole FOV at once. It is important to experiment with different values for these parameters since results can sensitive, due to the non stereotypical shapes and activity patterns of dendrites. End of explanation cnm_patches2 = cnm_patches.refit(images, dview=dview) Ap2 = cnm_patches2.estimates.A.toarray().reshape(opts.data['dims'] + (-1,), order='F').transpose([2, 0, 1]) Nc = Ap2.shape[0] grid_shape = (np.ceil(np.sqrt(Nc/2)).astype(int), np.ceil(np.sqrt(Nc*2)).astype(int)) plt.figure(figsize=np.array(grid_shape[::-1])*1.5) plt.imshow(montage(Ap2, rescale_intensity=True, grid_shape=grid_shape)) plt.title('Montage of found spatial components'); plt.axis('off'); cnm_patches2.estimates.play_movie(images, q_max=99.9, magnification=4, include_bck=False) # press q to exit Explanation: Refit to sparsify As before we can pass the results through the CNMF refit function that can yield more sparse components. End of explanation AA = np.corrcoef(cnm2.estimates.A.toarray(), cnm_patches2.estimates.A.toarray(), rowvar=False) plt.imshow(AA[:cnm2.estimates.A.shape[1], cnm2.estimates.A.shape[1]:]) plt.colorbar(); plt.xlabel('with patches') plt.ylabel('without patches') plt.title('Correlation coefficients'); Explanation: Compare the two approaches To identify how similar components and the two approaches yield, we can compute the cross-correlation matrix between the spatial footprints derived from the two approaches. End of explanation #%% STOP CLUSTER and clean up log files cm.stop_server(dview=dview) log_files = glob.glob('*_LOG_*') for log_file in log_files: os.remove(log_file) Explanation: Stop the cluster when you're done End of explanation
9,215
Given the following text description, write Python code to implement the functionality described below step by step Description: ROOT dataframe tutorial Step1: Create a ROOT dataframe in Python First we will create a ROOT dataframe that is connected to a dataset named Events stored in a ROOT file. The file is pulled in via XRootD from EOS public, but note how it could also be stored in your CERNBox space or in any other EOS repository accessible from SWAN (e.g. the experiment ones). The dataset Events is a TTree and has the following branches Step2: Run only on a part of the dataset The full dataset contains half a year of CMS data taking in 2012 with 61 million events. For the purpose of this example, we use the Range node to run only on a small part of the dataset. This feature also comes in handy in the development phase of your analysis. Feel free to experiment with this parameter! Step3: Filter relevant events for this analysis Physics datasets are often general purpose datasets and therefore need extensive filtering of the events for the actual analysis. Here, we implement only a simple selection based on the number of muons and the charge to cut down the dataset to events that are relevant for our study. In particular, we are applying two filters to keep Step4: Perform complex operations in Python, efficiently! Since we still want to perform complex operations in Python but plain Python code is prone to be slow and not thread-safe, you should use C++ functions as much as possible to do the work in your event loop during runtime. This mechanism uses the C++ interpreter cling shipped with ROOT, making this possible in a single line of code. Note that we are using the Define node of the computation graph with a jitted function, calling into a function available in the ROOT library. Step5: Make a histogram of the newly created column Step6: Book a Report of the dataframe filters Step7: Start data processing This is the final step of the analysis
Python Code: import ROOT Explanation: ROOT dataframe tutorial: Dimuon spectrum This tutorial shows you how to analyze datasets using RDataFrame from a Python notebook. The example analysis performs the following steps: Connect a ROOT dataframe to a dataset containing 61 million events recorded by CMS in 2012 Filter the events relevant for your analysis Compute the invariant mass of the selected dimuon candidates Plot the invariant mass spectrum showing resonances up to the Z mass This material is based on the analysis done by Stefan Wunsch, available here in CERN's Open Data portal. <center><img src="../images/dimuonSpectrum.png"></center> End of explanation treename = "Events" filename = "root://eospublic.cern.ch//eos/opendata/cms/derived-data/AOD2NanoAODOutreachTool/Run2012BC_DoubleMuParked_Muons.root" df = ROOT.RDataFrame(treename, filename) Explanation: Create a ROOT dataframe in Python First we will create a ROOT dataframe that is connected to a dataset named Events stored in a ROOT file. The file is pulled in via XRootD from EOS public, but note how it could also be stored in your CERNBox space or in any other EOS repository accessible from SWAN (e.g. the experiment ones). The dataset Events is a TTree and has the following branches: | Branch name | Data type | Description | |-------------|-----------|-------------| | nMuon | unsigned int | Number of muons in this event | | Muon_pt | float[nMuon] | Transverse momentum of the muons stored as an array of size nMuon | | Muon_eta | float[nMuon] | Pseudo-rapidity of the muons stored as an array of size nMuon | | Muon_phi | float[nMuon] | Azimuth of the muons stored as an array of size nMuon | | Muon_charge | int[nMuon] | Charge of the muons stored as an array of size nMuon and either -1 or 1 | | Muon_mass | float[nMuon] | Mass of the muons stored as an array of size nMuon | End of explanation # Take only the first 1M events df_range = df.Range(1000000) Explanation: Run only on a part of the dataset The full dataset contains half a year of CMS data taking in 2012 with 61 million events. For the purpose of this example, we use the Range node to run only on a small part of the dataset. This feature also comes in handy in the development phase of your analysis. Feel free to experiment with this parameter! End of explanation df_2mu = df_range.Filter("nMuon == 2", "Events with exactly two muons") df_oc = df_2mu.Filter("Muon_charge[0] != Muon_charge[1]", "Muons with opposite charge") Explanation: Filter relevant events for this analysis Physics datasets are often general purpose datasets and therefore need extensive filtering of the events for the actual analysis. Here, we implement only a simple selection based on the number of muons and the charge to cut down the dataset to events that are relevant for our study. In particular, we are applying two filters to keep: 1. Events with exactly two muons 2. Events with muons of opposite charge End of explanation df_mass = df_oc.Define("Dimuon_mass", "ROOT::VecOps::InvariantMass(Muon_pt, Muon_eta, Muon_phi, Muon_mass)") Explanation: Perform complex operations in Python, efficiently! Since we still want to perform complex operations in Python but plain Python code is prone to be slow and not thread-safe, you should use C++ functions as much as possible to do the work in your event loop during runtime. This mechanism uses the C++ interpreter cling shipped with ROOT, making this possible in a single line of code.
Note that we are using the Define node of the computation graph with a jitted function, calling into a function available in the ROOT library. End of explanation nbins = 30000 low = 0.25 up = 300 histo_name = "Dimuon_mass" histo_title = histo_name h = df_mass.Histo1D((histo_name, histo_title, nbins, low, up), "Dimuon_mass") Explanation: Make a histogram of the newly created column End of explanation report = df.Report() Explanation: Book a Report of the dataframe filters End of explanation %%time ROOT.gStyle.SetOptStat(0) ROOT.gStyle.SetTextFont(42) c = ROOT.TCanvas("c", "", 800, 700) c.SetLogx() c.SetLogy() h.SetTitle("") h.GetXaxis().SetTitle("m_{#mu#mu} (GeV)") h.GetXaxis().SetTitleSize(0.04) h.GetYaxis().SetTitle("N_{Events}") h.GetYaxis().SetTitleSize(0.04) h.Draw() label = ROOT.TLatex() label.SetNDC(True) label.SetTextSize(0.040) label.DrawLatex(0.100, 0.920, "#bf{CMS Open Data}") label.SetTextSize(0.030) label.DrawLatex(0.500, 0.920, "#sqrt{s} = 8 TeV, L_{int} = 11.6 fb^{-1}") %jsroot on c.Draw() report.Print() Explanation: Start data processing This is the final step of the analysis: retrieving the result. We are expecting to see a plot of the dimuon mass spectrum similar to the one shown at the beginning of this exercise (remember that we are running on fewer entries). Finally, in the last cell we should see a report of the filters applied on the dataset. End of explanation
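As a small illustrative addition (not part of the original tutorial), the same computation graph can also deliver plain numbers; Count and Mean are standard RDataFrame actions, and retrieving any one of the booked results with GetValue triggers a single event loop that fills all of them:
# Book additional lazy actions on the graph defined above
n_all = df_range.Count()
n_sel = df_oc.Count()
mean_mass = df_mass.Mean("Dimuon_mass")
# Retrieving one value runs the event loop once for every action booked above
print("Selected {} of {} events, mean dimuon mass = {:.2f} GeV".format(
    n_sel.GetValue(), n_all.GetValue(), mean_mass.GetValue()))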
9,216
Given the following text description, write Python code to implement the functionality described below step by step Description: Reading and displaying images with matplotlib importing Step1: Reading with native matplotlib and with PIL matplotlib can natively read images in the png format. When this format is read, the image is automatically mapped from the 0 to 255 range stored in the image's uint8 pixels to float values between 0 and 1 in the resulting array. If the format is different, and PIL is installed, the reading is done by PIL and in that case the pixel type is kept as uint8, from 0 to 255. See the following example Reading a grayscale image from a TIFF file Step2: Reading a color image in TIFF format When the image is in color and is not in png format, matplotlib uses PIL to read it. The array will have type uint8 and its shape is organized as (H, W, 3). Step3: Reading a color image in png format If the image is in png format, matplotlib maps the pixels from 0 to 255 to floats from 0 to 1.0 Step4: Displaying the images that were read Step5: Note that the display shows only the last imshow call
Python Code: import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np Explanation: Reading and displaying images with matplotlib importing End of explanation f = mpimg.imread('../data/cameraman.tif') print(f.dtype,f.shape,f.max(),f.min()) Explanation: Reading with native matplotlib and with PIL matplotlib can natively read images in the png format. When this format is read, the image is automatically mapped from the 0 to 255 range stored in the image's uint8 pixels to float values between 0 and 1 in the resulting array. If the format is different, and PIL is installed, the reading is done by PIL and in that case the pixel type is kept as uint8, from 0 to 255. See the following example Reading a grayscale image from a TIFF file: Since the image read is a TIFF, the resulting array has type uint8, with values from 0 to 255 End of explanation fcor = mpimg.imread('../data/boat.tif') print(fcor.dtype,fcor.shape,fcor.max(),fcor.min()) Explanation: Reading a color image in TIFF format When the image is in color and is not in png format, matplotlib uses PIL to read it. The array will have type uint8 and its shape is organized as (H, W, 3). End of explanation fcor2 = mpimg.imread('../data/boat.tif') print(fcor2.dtype, fcor2.shape, fcor2.max(), fcor2.min()) Explanation: Reading a color image in png format If the image is in png format, matplotlib maps the pixels from 0 to 255 to floats from 0 to 1.0 End of explanation %matplotlib inline plt.imshow(f, cmap='gray') plt.colorbar() plt.imshow(fcor) plt.colorbar() plt.imshow(fcor2) plt.colorbar() Explanation: Displaying the images that were read End of explanation
9,217
Given the following text description, write Python code to implement the functionality described below step by step Description: A primer on numerical differentiation In order to numerically evaluate a derivative $y'(x)=dy/dx$ at point $x_0$, we approximate it by using finite differences Step1: Why is it that the sequence does not converge? This is due to the round-off errors in the representation of the floating point numbers. To see this, we can simply type Step2: Let's try using powers of 1/2 Step3: In addition, one could consider the midpoint difference, defined as Step4: A more in-depth discussion about round-off errors in numerical differentiation can be found <a href="http Step5: Notice above that gradient() uses forward and backward differences at the two ends. Step6: More discussion about numerical differentiation, including higher order methods with error extrapolation can be found <a href="http Step7: One way to improve the roundoff errors is by simply using the decimal package
Python Code: dx = 1. x = 1. while(dx > 1.e-10): dy = (x+dx)*(x+dx)-x*x d = dy / dx print("%6.0e %20.16f %20.16f" % (dx, d, d-2.)) dx = dx / 10. Explanation: A primer on numerical differentiation In order to numerically evaluate a derivative $y'(x)=dy/dx$ at point $x_0$, we approximate it by using finite differences: Therefore we find: $$\begin{eqnarray} && dx \approx \Delta x &=&x_1-x_0, \\ && dy \approx \Delta y &=&y_1-y_0 = y(x_1)-y(x_0) = y(x_0+\Delta_x)-y(x_0),\end{eqnarray}$$ Then we re-write the derivative in terms of discrete differences as: $$\frac{dy}{dx} \approx \frac{\Delta y}{\Delta x}$$ Example Let's look at the accuracy of this approximation in terms of the interval $\Delta x$. In our first example we will evaluate the derivative of $y=x^2$ at $x=1$. End of explanation ((1.+0.0001)*(1+0.0001)-1) Explanation: Why is it that the sequence does not converge? This is due to the round-off errors in the representation of the floating point numbers. To see this, we can simply type: End of explanation dx = 1. x = 1. while(dx > 1.e-10): dy = (x+dx)*(x+dx)-x*x d = dy / dx print("%8.5e %20.16f %20.16f" % (dx, d, d-2.)) dx = dx / 2. Explanation: Let's try using powers of 1/2 End of explanation from math import sin, sqrt, pi dx = 1. while(dx > 1.e-10): x = pi/4. d1 = sin(x+dx) - sin(x); #forward d2 = sin(x+dx*0.5) - sin(x-dx*0.5); # midpoint d1 = d1 / dx; d2 = d2 / dx; print("%8.5e %20.16f %20.16f %20.16f %20.16f" % (dx, d1, d1-sqrt(2.)/2., d2, d2-sqrt(2.)/2.) ) dx = dx / 2. Explanation: In addition, one could consider the midpoint difference, defined as: $$ dy \approx \Delta y = y(x_0+\frac{\Delta_x}{2})-y(x_0-\frac{\Delta_x}{2}).$$ For a more complex function we need to import it from the math module. For instance, let's calculate the derivative of $sin(x)$ at $x=\pi/4$, including both the forward and midpoint differences. End of explanation %matplotlib inline import numpy as np from matplotlib import pyplot y = lambda x: x*x x1 = np.arange(0,10,1) x2 = np.arange(0,10,0.1) y1 = np.gradient(y(x1), 1.) print (y1) pyplot.plot(x1,np.gradient(y(x1),1.),'r--o'); pyplot.plot(x1[:x1.size-1],np.diff(y(x1))/np.diff(x1),'b--x'); Explanation: A more in-depth discussion about round-off errors in numerical differentiation can be found <a href="http://www.uio.no/studier/emner/matnat/math/MAT-INF1100/h10/kompendiet/kap11.pdf">here</a> Special functions in numpy numpy provides a simple method diff() to calculate the numerical derivatives of a dataset stored in an array by forward differences. The function gradient() will calculate the derivatives by midpoint (or central) difference, which provides a more accurate result. End of explanation pyplot.plot(x2,np.gradient(y(x2),0.1),'b--o'); Explanation: Notice above that gradient() uses forward and backward differences at the two ends. End of explanation from scipy.misc import derivative y = lambda x: x**2 dx = 1. x = 1. while(dx > 1.e-10): d = derivative(y, x, dx, n=1, order=3) print("%6.0e %20.16f %20.16f" % (dx, d, d-2.)) dx = dx / 10. Explanation: More discussion about numerical differentiation, including higher order methods with error extrapolation can be found <a href="http://young.physics.ucsc.edu/115/diff.pdf">here</a>.
The module scipy also includes methods to accurately calculate derivatives: End of explanation from decimal import Decimal dx = Decimal("1.") while(dx >= Decimal("1.e-10")): x = Decimal("1.") dy = (x+dx)*(x+dx)-x*x d = dy / dx print("%6.0e %20.16f %20.16f" % (dx, d, d-Decimal("2."))) dx = dx / Decimal("10.") Explanation: One way to improve the roundoff errors is by simply using the decimal package End of explanation
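To make the "higher order methods with error extrapolation" mentioned above concrete, here is a minimal sketch (an illustrative addition, not part of the original notebook) of one Richardson extrapolation step applied to the midpoint difference for $sin(x)$ at $x=\pi/4$: since the midpoint formula has an error of order $h^2$, combining estimates at $h$ and $h/2$ as $(4D(h/2)-D(h))/3$ cancels that leading error term.
from math import sin, sqrt, pi
def midpoint(f, x, h):
    # central difference approximation of f'(x)
    return (f(x + 0.5*h) - f(x - 0.5*h)) / h
x = pi/4.
h = 0.1
d_h = midpoint(sin, x, h)
d_h2 = midpoint(sin, x, h/2.)
d_rich = (4.*d_h2 - d_h) / 3.   # one Richardson step: error drops from O(h^2) to O(h^4)
exact = sqrt(2.)/2.
print("%20.16e %20.16e" % (d_h2 - exact, d_rich - exact))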
9,218
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-h', 'land') Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: INM Source ID: INM-CM5-H Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:04 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. 
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
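To make the fill-in pattern above concrete, a completed property cell would look something like the following; the chosen value here is only a hypothetical placeholder, not a description of any actual model.

# Hypothetical example only: how a TODO cell looks once the value is entered.
# "prognostic" is an invented placeholder drawn from the listed valid choices.
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")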
9,219
Given the following text description, write Python code to implement the functionality described below step by step Description: Oregon Curriculum Network <br /> Discovering Math with Python Chapter 6 Step1: We're going to want to see our vectors rendered in some way. Visual Python, or VPython, provides an excellent solution. However, we're going to stay with still ray tracings in this Chapter and work with POV-Ray. Our goal is to build up a data structure for representing a full-fledged polyhedron, a network of edges, connecting around in faces, floating about the origin. We'll use point vectors for the vertexes, pairs of vectors for the edges, and sets of three or more vectors for the faces. Indeed, given faces as clockwise or counterclockwise listings, we can "distill" the edges by going around each face and getting the vertex vector pairs.
Python Code: class Vector: "A point in space" pass Explanation: Oregon Curriculum Network <br /> Discovering Math with Python Chapter 6: VECTORS IN SPACE A point vector is simply an object that points, from the origin to a specific location. We usually represent such an object with an arrow, with its tail at (0,0) or whatever, and its head at (x,y). Rather than pursue Flatland geometry as an end in itself, we will start right away in spatial geometry, showing two classes of vector, one typical, the other somewhat exotic. End of explanation class Edge: "A pair of vectors" pass class Face: "A set of vectors in clockwise or counter-clockwise order" pass class Polyhedron: "A set of faces" pass Explanation: We're going to want to see our vectors rendered in some way. Visual Python, or VPython, provides an excellent solution. However, we're going to stay with still ray tracings in this Chapter and work with POV-Ray. Our goal is to build up a data structure for representing a full-fledged polyhedron, a network of edges, connecting around in faces, floating about the origin. We'll use point vectors for the vertexes, pairs of vectors for the edges, and sets of three or more vectors for the faces. Indeed, given faces as clockwise or counterclockwise listings, we can "distill" the edges by going around each face and getting the vertex vector pairs. End of explanation
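Since the four classes above are deliberately left as stubs, here is a minimal sketch of how Vector might carry coordinates and how edges could be distilled from faces. The attribute names and the frozenset-of-pairs representation are assumptions made for illustration, not the chapter's actual implementation.

class Vector:
    "A point in space, held as x, y, z coordinates"
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

def distill_edges(faces):
    "Walk each face (a sequence of vertex Vectors) and collect the unique vertex pairs"
    edges = set()
    for face in faces:
        for i in range(len(face)):
            # pair each vertex with the next one, wrapping around to close the face;
            # assumes faces share the same Vector instances for common vertices
            edges.add(frozenset((face[i], face[(i + 1) % len(face)])))
    return edges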
9,220
Given the following text description, write Python code to implement the functionality described below step by step Description: Part of Neural Network Notebook (3nb) project. Copyright (C) 2014 Eka A. Kurniawan eka.a.kurniawan(ta)gmail(tod)com This program is free software Step1: Display Settings Step2: Housekeeping Functions Following function plots different concentrations used to visualize dose-response relation. Step3: Following function plots dose-response relation in both linear (log_flag = False) and logarithmic (log_flag = True) along X axis. Step4: Dose-Response Relations $^{[1]}$ Generally, dose-response relations can be written as follow. In which the dose is represented as concentration ($c$), while the formula returns the response ($r$). $$r = \frac{F.c^{n_H}}{{c^{n_H} + EC_{50}^{n_H}}}$$ Other terms like $EC_{50}$ is the effective concentration achieved at 50% of maximum response. Normally, efficacy ($F$) is normalized to one so that it is easier to make comparison among different drugs. Furthermore, if full agonist is defined to have efficacy equal to one, anything lower than one is treated to be partial agonist. Finally, Hill coefficients ($n_H$) defines the number of drug molecules needed to activate target receptor. Drug Concentartion Both linearly and logarithmically increased concentrations are used to study dose-response relations. Linearly increased concentration (c_lin) Step5: Logarithmically increased concentration (c_log) Step6: Agonist Only To calculate dose-response relation in the case of agonist only, we use general dose-response relation equation described previously. The function is shown below. Step7: Following result shows drug response of agonist only to the linearly increased concentrations. Step8: Following result shows drug response of agonist only to the logarithmically increased concentrations. Step9: Agonist Plus Competitive Antagonist Compatitive antagonist, as the name sugest, competes with agonist molecules to sit in the same pocket. It makes the binding harder for agonist as well as to trigger the activation. Therefore, higher agonist concentration is required to reach both full and partial (like $EC_{50}$) activation. New $EC_{50}$ value, called $EC_{50}'$ ($EC_{50}$ prime) is calculated using following formula. $$EC_{50}' = EC_{50} * \left(1 + \frac{c_i}{K_i}\right)$$ It depends on inhibitor concentration ($c_i$) and dissociation constant of the inhibitor ($K_i$). Following is a new function to calculate drug response of agonist with competitive antagonist. It shows new $EC_{50}$ value (EC_50_prime) replacing agonist only $EC_{50}$ value (EC_50). Step10: Following result shows drug response of agonist with competitive antagonist to the linearly increased concentrations. Step11: Following result shows drug response of agonist with competitive antagonist to the logarithmically increased concentrations. Step12: Agonist Plus Noncompetitive Antagonist Unlike competitive antagonist, noncompetitive antagonist does not compete directly to the location where agonist binds but somewhere else in the subsequent pathway. Instead of altering effective concentration (like $EC_{50}$), noncompetitive antagonist affects efficacy. New efficacy value ($F'$) due to the existance of noncompetitive antagonist is calculated as follow. $$F' = \frac{F}{\left(1 + \frac{c_i}{K_i}\right)}$$ Following is a new function to calculate drug response of agonist with noncompetitive antagonist. It shows new efficacy value (F_prime) replacing agonist only efficacy value (F). 
Step13: Following result shows drug response of agonist with noncompetitive antagonist to the linearly increased concentrations. Step14: Following result shows drug response of agonist with noncompetitive antagonist to the logarithmically increased concentrations.
Python Code: import sys print("Python %d.%d.%d" % (sys.version_info.major, \ sys.version_info.minor, \ sys.version_info.micro)) import numpy as np print("NumPy %s" % np.__version__) # Display graph inline %matplotlib inline import matplotlib import matplotlib.pyplot as plt print("matplotlib %s" % matplotlib.__version__) Explanation: Part of Neural Network Notebook (3nb) project. Copyright (C) 2014 Eka A. Kurniawan eka.a.kurniawan(ta)gmail(tod)com This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. Tested On End of explanation # Display graph in 'retina' format for Mac with retina display. Others, use PNG or SVG format. %config InlineBackend.figure_format = 'retina' #%config InlineBackend.figure_format = 'PNG' #%config InlineBackend.figure_format = 'SVG' Explanation: Display Settings End of explanation def plot_concentration(c): fig = plt.figure() sp111 = fig.add_subplot(111) # Display grid sp111.grid(True, which = 'both') # Plot concentration len_c = len(c) sp111.plot(np.linspace(0,len_c-1,len_c), c, color = 'gray', linewidth = 2) # Label sp111.set_ylabel('Concentration (nM)') # Set X axis within different concentration plt.xlim([0, len_c-1]) plt.show() Explanation: Housekeeping Functions Following function plots different concentrations used to visualize dose-response relation. End of explanation # d : Dose # r1 : First response data # r1_label : First response label # r2 : Second response data # r2_label : Second response label # log_flag : Selection for linear or logarithmic along X axis # - False: Plot linear (default) # - True: Plot logarithmic def plot_dose_response_relation(d, r1, r1_label, r2 = None, r2_label = "", log_flag = False): fig = plt.figure() sp111 = fig.add_subplot(111) # Handle logarithmic along X axis if log_flag: sp111.set_xscale('log') # Display grid sp111.yaxis.set_ticks([0.0, 0.5, 1.0]) sp111.grid(True, which = 'both') # Plot dose-response sp111.plot(d, r1, color = 'blue', label = r1_label, linewidth = 2) if r2 is not None: sp111.plot(d, r2, color = 'red', label = r2_label, linewidth = 2) # Labels sp111.set_ylabel('Response') sp111.set_xlabel('Concentration (nM)') # Legend sp111.legend(loc='upper left') # Set Y axis in between 0 and 1 plt.ylim([0, 1]) plt.show() Explanation: Following function plots dose-response relation in both linear (log_flag = False) and logarithmic (log_flag = True) along X axis. End of explanation c_lin = np.linspace(0,100,101) # Drug concentration in nanomolar (nM) plot_concentration(c_lin) Explanation: Dose-Response Relations $^{[1]}$ Generally, dose-response relations can be written as follow. In which the dose is represented as concentration ($c$), while the formula returns the response ($r$). $$r = \frac{F.c^{n_H}}{{c^{n_H} + EC_{50}^{n_H}}}$$ Other terms like $EC_{50}$ is the effective concentration achieved at 50% of maximum response. Normally, efficacy ($F$) is normalized to one so that it is easier to make comparison among different drugs. 
Furthermore, if full agonist is defined to have efficacy equal to one, anything lower than one is treated to be partial agonist. Finally, Hill coefficients ($n_H$) defines the number of drug molecules needed to activate target receptor. Drug Concentartion Both linearly and logarithmically increased concentrations are used to study dose-response relations. Linearly increased concentration (c_lin): End of explanation c_log = np.logspace(0,5,101) # Drug concentration in nanomolar (nM) plot_concentration(c_log) Explanation: Logarithmically increased concentration (c_log): End of explanation # Calculate dose-response relation (DRR) for agonist only # c : Drug concentration(s) in nanomolar (nM) # EC_50 : 50% effective concentration in nanomolar (nM) # F : Efficacy (unitless) # n_H : Hill coefficients (unitless) def calc_drr(c, EC_50 = 20, F = 1, n_H = 1): r = (F * (c ** n_H) / ((c ** n_H) + (EC_50 ** n_H))) return r Explanation: Agonist Only To calculate dose-response relation in the case of agonist only, we use general dose-response relation equation described previously. The function is shown below. End of explanation c = c_lin # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r = calc_drr(c, EC_50, F, n_H) plot_dose_response_relation(c, r, "Agonist") Explanation: Following result shows drug response of agonist only to the linearly increased concentrations. End of explanation c = c_log # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r = calc_drr(c, EC_50, F, n_H) plot_dose_response_relation(c, r, "Agonist", log_flag = True) Explanation: Following result shows drug response of agonist only to the logarithmically increased concentrations. End of explanation # Calculate dose-response relation (DRR) for agonist plus competitive antagonist # - Agonist # c : Drug concentration(s) in nanomolar (nM) # EC_50 : 50% effective concentration in nanomolar (nM) # F : Efficacy (unitless) # n_H : Hill coefficients (unitless) # - Antagonist # K_i : Dissociation constant of inhibitor in nanomolar (nM) # c_i : Inhibitor concentration in nanomolar (nM) def calc_drr_agonist_cptv_antagonist(c, EC_50 = 20, F = 1, n_H = 1, K_i = 5, c_i = 25): EC_50_prime = EC_50 * (1 + (c_i / K_i)) r = calc_drr(c, EC_50_prime, F, n_H) return r Explanation: Agonist Plus Competitive Antagonist Compatitive antagonist, as the name sugest, competes with agonist molecules to sit in the same pocket. It makes the binding harder for agonist as well as to trigger the activation. Therefore, higher agonist concentration is required to reach both full and partial (like $EC_{50}$) activation. New $EC_{50}$ value, called $EC_{50}'$ ($EC_{50}$ prime) is calculated using following formula. $$EC_{50}' = EC_{50} * \left(1 + \frac{c_i}{K_i}\right)$$ It depends on inhibitor concentration ($c_i$) and dissociation constant of the inhibitor ($K_i$). Following is a new function to calculate drug response of agonist with competitive antagonist. It shows new $EC_{50}$ value (EC_50_prime) replacing agonist only $EC_{50}$ value (EC_50). 
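As a quick worked example with the parameter values used in the code below ($EC_{50}$ = 20 nM, $c_i$ = 25 nM, $K_i$ = 5 nM): $$EC_{50}' = 20 \times \left(1 + \frac{25}{5}\right) = 120 \text{ nM}$$ so roughly six times more agonist is needed to reach the half-maximal response, which is why the curve shifts to the right.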
End of explanation c = c_lin # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_aca = calc_drr_agonist_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_aca, "Plus Antagonist") Explanation: Following result shows drug response of agonist with competitive antagonist to the linearly increased concentrations. End of explanation c = c_log # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_aca = calc_drr_agonist_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_aca, "Plus Antagonist", log_flag = True) Explanation: Following result shows drug response of agonist with competitive antagonist to the logarithmically increased concentrations. End of explanation # Calculate dose-response relation (DRR) for agonist plus noncompetitive antagonist # - Agonist # c : Drug concentration(s) in nanomolar (nM) # EC_50 : 50% effective concentration in nanomolar (nM) # F : Efficacy (unitless) # n_H : Hill coefficients (unitless) # - Antagonist # K_i : Dissociation constant of inhibitor in nanomolar (nM) # c_i : Inhibitor concentration in nanomolar (nM) def calc_drr_agonist_non_cptv_antagonist(c, EC_50 = 20, F = 1, n_H = 1, K_i = 5, c_i = 25): F_prime = F / (1 + (c_i / K_i)) r = calc_drr(c, EC_50, F_prime, n_H) return r Explanation: Agonist Plus Noncompetitive Antagonist Unlike competitive antagonist, noncompetitive antagonist does not compete directly to the location where agonist binds but somewhere else in the subsequent pathway. Instead of altering effective concentration (like $EC_{50}$), noncompetitive antagonist affects efficacy. New efficacy value ($F'$) due to the existance of noncompetitive antagonist is calculated as follow. $$F' = \frac{F}{\left(1 + \frac{c_i}{K_i}\right)}$$ Following is a new function to calculate drug response of agonist with noncompetitive antagonist. It shows new efficacy value (F_prime) replacing agonist only efficacy value (F). End of explanation c = c_lin # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_ana = calc_drr_agonist_non_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_ana, "Plus Antagonist") Explanation: Following result shows drug response of agonist with noncompetitive antagonist to the linearly increased concentrations. 
End of explanation c = c_log # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_ana = calc_drr_agonist_non_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_ana, "Plus Antagonist", log_flag = True) Explanation: Following result shows drug response of agonist with noncompetitive antagonist to the logarithmically increased concentrations. End of explanation
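As a quick sanity check on the noncompetitive case above, using the same parameter values as the code ($F$ = 1, $c_i$ = 25 nM, $K_i$ = 5 nM): $$F' = \frac{1}{1 + \frac{25}{5}} = \frac{1}{6} \approx 0.17$$ so the maximal attainable response falls to about one sixth of the agonist-only plateau while $EC_{50}$ itself is unchanged, which is the characteristic signature of a noncompetitive antagonist in this model.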
9,221
Given the following text description, write Python code to implement the functionality described below step by step Description: TinyImageNet and Ensembles So far, we have only worked with the CIFAR-10 dataset. In this exercise we will introduce the TinyImageNet dataset. You will combine several pretrained models into an ensemble, and show that the ensemble performs better than any individual model. Step1: Introducing TinyImageNet The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images. We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B. To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_splits.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory. NOTE Step2: TinyImageNet-100-A classes Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A Step3: Visualize Examples Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images. Step4: Test human performance Run the following to test your own classification performance on the TinyImageNet-100-A dataset. You can run several times in 'training' mode to get familiar with the task; once you are ready to test yourself, switch the mode to 'val'. You won't be penalized if you don't correctly classify all the images, but you should still try your best. Step5: Download pretrained models We have provided 10 pretrained ConvNets for the TinyImageNet-100-A dataset. Each of these models is a five-layer ConvNet with the architecture [conv - relu - pool] x 3 - affine - relu - affine - softmax All convolutional layers are 3x3 with stride 1 and all pooling layers are 2x2 with stride 2. The first two convolutional layers have 32 filters each, and the third convolutional layer has 64 filters. The hidden affine layer has 512 neurons. You can run the forward and backward pass for these five layer convnets using the function five_layer_convnet in the file cs231n/classifiers/convnet.py. Each of these models was trained for 25 epochs over the TinyImageNet-100-A training data with a batch size of 50 and with dropout on the hidden affine layer. Each model was trained using slightly different values for the learning rate, regularization, and dropout probability. To download the pretrained models, go into the cs231n/datasets directory and run the get_pretrained_models.sh script. Once you have done so, run the following to load the pretrained models into memory. NOTE Step6: Run models on the validation set To benchmark the performance of each model on its own, we will use each model to make predictions on the validation set. Step8: Use a model ensemble A simple way to implement an ensemble of models is to average the predicted probabilites for each model in the ensemble. 
More concretely, suppose we have $k$ models $m_1,\ldots,m_k$ and we want to combine them into an ensemble. If $p(x=y_i \mid m_j)$ is the probability that the input $x$ is classified as $y_i$ under model $m_j$, then the ensemble predicts $$p(x=y_i \mid \{m_1,\ldots,m_k\}) = \frac1k\sum_{j=1}^kp(x=y_i\mid m_j)$$ In the cell below, implement this simple ensemble method by filling in the compute_ensemble_preds function. The ensemble of all 10 models should perform much better than the best individual model. Step9: Ensemble size vs Performance Using our 10 pretrained models, we can form many different ensembles of different sizes. More precisely, if we have $n$ models and we want to form an ensemble of $k$ models, then there are $\binom{n}{k}$ possible ensembles that we can form, where $$\binom{n}{k} = \frac{n!}{(n-k)!k!}$$ We can use these different possible ensembles to study the effect of ensemble size on ensemble performance. In the cell below, compute the validation set accuracy of all possible ensembles of our 10 pretrained models. Produce a scatter plot with "ensemble size" on the horizontal axis and "validation set accuracy" on the vertical axis. Your plot should have a total of $$\sum_{k=1}^{10} \binom{10}{k}$$ points corresponding to all possible ensembles of the 10 pretrained models. You should be able to compute the validation set predictions of these ensembles without computing any more forward passes through any of the networks.
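As a hint only, not the reference solution, one minimal way to sketch this sweep, assuming model_to_probs maps each model name to its validation-set probability array and compute_ensemble_preds averages a list of such arrays as in the accompanying code, is:

import itertools
import numpy as np
import matplotlib.pyplot as plt

sizes, accs = [], []
all_probs = list(model_to_probs.values())
for k in range(1, len(all_probs) + 1):
    # every possible ensemble of k of the 10 pretrained models
    for combo in itertools.combinations(all_probs, k):
        y_pred = compute_ensemble_preds(list(combo))
        sizes.append(k)
        accs.append(np.mean(y_pred == y_val))

plt.scatter(sizes, accs)
plt.xlabel('Ensemble size')
plt.ylabel('Validation set accuracy')
plt.show()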
Python Code: # A bit of setup import numpy as np import matplotlib.pyplot as plt from time import time %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading extenrnal modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 Explanation: TinyImageNet and Ensembles So far, we have only worked with the CIFAR-10 dataset. In this exercise we will introduce the TinyImageNet dataset. You will combine several pretrained models into an ensemble, and show that the ensemble performs better than any individual model. End of explanation from cs231n.data_utils import load_tiny_imagenet tiny_imagenet_a = 'cs231n/datasets/tiny-imagenet-100-A' class_names, X_train, y_train, X_val, y_val, X_test, y_test = load_tiny_imagenet(tiny_imagenet_a) # Zero-mean the data mean_img = np.mean(X_train, axis=0) X_train -= mean_img X_val -= mean_img X_test -= mean_img Explanation: Introducing TinyImageNet The TinyImageNet dataset is a subset of the ILSVRC-2012 classification dataset. It consists of 200 object classes, and for each object class it provides 500 training images, 50 validation images, and 50 test images. All images have been downsampled to 64x64 pixels. We have provided the labels for all training and validation images, but have withheld the labels for the test images. We have further split the full TinyImageNet dataset into two equal pieces, each with 100 object classes. We refer to these datasets as TinyImageNet-100-A and TinyImageNet-100-B. To download the data, go into the cs231n/datasets directory and run the script get_tiny_imagenet_splits.sh. Then run the following code to load the TinyImageNet-100-A dataset into memory. NOTE: The full TinyImageNet dataset will take up about 490MB of disk space, and loading the full TinyImageNet-100-A dataset into memory will use about 2.8GB of memory. End of explanation for names in class_names: print ' '.join('"%s"' % name for name in names) Explanation: TinyImageNet-100-A classes Since ImageNet is based on the WordNet ontology, each class in ImageNet (and TinyImageNet) actually has several different names. For example "pop bottle" and "soda bottle" are both valid names for the same class. Run the following to see a list of all classes in TinyImageNet-100-A: End of explanation # Visualize some examples of the training data classes_to_show = 7 examples_per_class = 5 class_idxs = np.random.choice(len(class_names), size=classes_to_show, replace=False) for i, class_idx in enumerate(class_idxs): train_idxs, = np.nonzero(y_train == class_idx) train_idxs = np.random.choice(train_idxs, size=examples_per_class, replace=False) for j, train_idx in enumerate(train_idxs): img = X_train[train_idx] + mean_img img = img.transpose(1, 2, 0).astype('uint8') plt.subplot(examples_per_class, classes_to_show, 1 + i + classes_to_show * j) if j == 0: plt.title(class_names[class_idx][0]) plt.imshow(img) plt.gca().axis('off') plt.show() Explanation: Visualize Examples Run the following to visualize some example images from random classses in TinyImageNet-100-A. It selects classes and images randomly, so you can run it several times to see different images. 
End of explanation mode = 'train' name_to_label = {n.lower(): i for i, ns in enumerate(class_names) for n in ns} if mode == 'train': X, y = X_train, y_train elif mode == 'val': X, y = X_val, y_val num_correct = 0 num_images = 10 for i in xrange(num_images): idx = np.random.randint(X.shape[0]) img = (X[idx] + mean_img).transpose(1, 2, 0).astype('uint8') plt.imshow(img) plt.gca().axis('off') plt.gcf().set_size_inches((2, 2)) plt.show() got_name = False while not got_name: name = raw_input('Guess the class for the above image (%d / %d) : ' % (i + 1, num_images)) name = name.lower() got_name = name in name_to_label if not got_name: print 'That is not a valid class name; try again' guess = name_to_label[name] if guess == y[idx]: num_correct += 1 print 'Correct!' else: print 'Incorrect; it was actually %r' % class_names[y[idx]] acc = float(num_correct) / num_images print 'You got %d / %d correct for an accuracy of %f' % (num_correct, num_images, acc) Explanation: Test human performance Run the following to test your own classification performance on the TinyImageNet-100-A dataset. You can run several times in 'training' mode to get familiar with the task; once you are ready to test yourself, switch the mode to 'val'. You won't be penalized if you don't correctly classify all the images, but you should still try your best. End of explanation from cs231n.data_utils import load_models models_dir = 'cs231n/datasets/tiny-100-A-pretrained' # models is a dictionary mappping model names to models. # Like the previous assignment, each model is a dictionary mapping parameter # names to parameter values. models = load_models(models_dir) Explanation: Download pretrained models We have provided 10 pretrained ConvNets for the TinyImageNet-100-A dataset. Each of these models is a five-layer ConvNet with the architecture [conv - relu - pool] x 3 - affine - relu - affine - softmax All convolutional layers are 3x3 with stride 1 and all pooling layers are 2x2 with stride 2. The first two convolutional layers have 32 filters each, and the third convolutional layer has 64 filters. The hidden affine layer has 512 neurons. You can run the forward and backward pass for these five layer convnets using the function five_layer_convnet in the file cs231n/classifiers/convnet.py. Each of these models was trained for 25 epochs over the TinyImageNet-100-A training data with a batch size of 50 and with dropout on the hidden affine layer. Each model was trained using slightly different values for the learning rate, regularization, and dropout probability. To download the pretrained models, go into the cs231n/datasets directory and run the get_pretrained_models.sh script. Once you have done so, run the following to load the pretrained models into memory. NOTE: The pretrained models will take about 245MB of disk space. End of explanation from cs231n.classifiers.convnet import five_layer_convnet # Dictionary mapping model names to their predicted class probabilities on the # validation set. model_to_probs[model_name] is an array of shape (N_val, 100) # where model_to_probs[model_name][i, j] = p indicates that models[model_name] # predicts that X_val[i] has class i with probability p. model_to_probs = {} ################################################################################ # TODO: Use each model to predict classification probabilities for all images # # in the validation set. Store the predicted probabilities in the # # model_to_probs dictionary as above. 
To compute forward passes and compute # # probabilities, use the function five_layer_convnet in the file # # cs231n/classifiers/convnet.py. # # # # HINT: Trying to predict on the entire validation set all at once will use a # # ton of memory, so you should break the validation set into batches and run # # each batch through each model separately. # ################################################################################ from cs231n.classifiers.convnet import five_layer_convnet import math batch_size = 100 for model_name, model in models.items(): model_to_probs[model_name] = None for i in range(int(math.ceil(X_val.shape[0] / batch_size))): for model_name, model in models.items(): y_predict = five_layer_convnet(X_val[i*batch_size: (i+1)*batch_size], model, None and y_val[i*batch_size: (i+1)*batch_size], return_probs=True) try: if model_to_probs[model_name] is None: model_to_probs[model_name] = y_predict else: model_to_probs[model_name] = np.concatenate( (model_to_probs[model_name], y_predict), axis=0) except: print(model_to_probs[model_name].shape, y_predict.shape) raise pass pass ################################################################################ # END OF YOUR CODE # ################################################################################ # Compute and print the accuracy for each model. for model_name, probs in model_to_probs.iteritems(): acc = np.mean(np.argmax(probs, axis=1) == y_val) print '%s got accuracy %f' % (model_name, acc) Explanation: Run models on the validation set To benchmark the performance of each model on its own, we will use each model to make predictions on the validation set. End of explanation def compute_ensemble_preds(probs_list): Use the predicted class probabilities from different models to implement the ensembling method described above. Inputs: - probs_list: A list of numpy arrays, where each gives the predicted class probabilities under some model. In other words, probs_list[j][i, c] = p means that the jth model in the ensemble thinks that the ith data point has class c with probability p. Returns: An array y_pred_ensemble of ensembled predictions, such that y_pred_ensemble[i] = c means that ensemble predicts that the ith data point is predicted to have class c. y_pred_ensemble = None ############################################################################ # TODO: Implement this function. Store the ensemble predictions in # # y_pred_ensemble. # ############################################################################ probs_list_ensemble = np.mean(probs_list, axis=0) y_pred_ensemble = np.argmax(probs_list_ensemble, axis=1) pass ############################################################################ # END OF YOUR CODE # ############################################################################ return y_pred_ensemble # Combine all models into an ensemble and make predictions on the validation set. # This should be significantly better than the best individual model. print np.mean(compute_ensemble_preds(model_to_probs.values()) == y_val) Explanation: Use a model ensemble A simple way to implement an ensemble of models is to average the predicted probabilites for each model in the ensemble. More concretely, suppose we have models $k$ models $m_1,\ldots,m_k$ and we want to combine them into an ensemble. 
If $p(x=y_i \mid m_j)$ is the probability that the input $x$ is classified as $y_i$ under model $m_j$, then the ensemble predicts $$p(x=y_i \mid \{m_1,\ldots,m_k\}) = \frac{1}{k}\sum_{j=1}^k p(x=y_i\mid m_j)$$ In the cell below, implement this simple ensemble method by filling in the compute_ensemble_preds function. The ensemble of all 10 models should perform much better than the best individual model. End of explanation ################################################################################ # TODO: Create a plot comparing ensemble size with ensemble performance as # # described above. # # # # HINT: Look up the function itertools.combinations. # ################################################################################ import itertools ensemble_sizes = [] val_accs = [] for i in range(1, 11): combinations = itertools.combinations(model_to_probs.values(), i) for combination in combinations: ensemble_sizes.append(i) y_pred_ensemble = compute_ensemble_preds(combination) val_accs.append(np.mean(y_pred_ensemble == y_val)) pass plt.scatter(ensemble_sizes, val_accs) plt.title('Ensemble size vs Performance') plt.xlabel('ensemble size') plt.ylabel('validation set accuracy') ################################################################################ # END OF YOUR CODE # ################################################################################ Explanation: Ensemble size vs Performance Using our 10 pretrained models, we can form many different ensembles of different sizes. More precisely, if we have $n$ models and we want to form an ensemble of $k$ models, then there are $\binom{n}{k}$ possible ensembles that we can form, where $$\binom{n}{k} = \frac{n!}{(n-k)!k!}$$ We can use these different possible ensembles to study the effect of ensemble size on ensemble performance. In the cell below, compute the validation set accuracy of all possible ensembles of our 10 pretrained models. Produce a scatter plot with "ensemble size" on the horizontal axis and "validation set accuracy" on the vertical axis. Your plot should have a total of $$\sum_{k=1}^{10} \binom{10}{k}$$ points corresponding to all possible ensembles of the 10 pretrained models. You should be able to compute the validation set predictions of these ensembles without computing any more forward passes through any of the networks. End of explanation
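As a quick sanity check on the ensemble count used above (a standalone sketch, independent of the assignment code; math.comb assumes Python 3.8+), enumerating every non-empty subset of 10 models should give $\sum_{k=1}^{10}\binom{10}{k} = 2^{10}-1 = 1023$ ensembles:

import itertools
from math import comb

n = 10
by_formula = sum(comb(n, k) for k in range(1, n + 1))
by_enumeration = sum(1 for k in range(1, n + 1)
                     for _ in itertools.combinations(range(n), k))
print(by_formula, by_enumeration)  # both print 1023 == 2**10 - 1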
9,222
Given the following text description, write Python code to implement the functionality described below step by step Description: Doc2Vec trained on recipe instructions Objectives Create word embeddings for recipes. Use word vectors for (traditional) segmentation, classification, and retrieval of recipes. Based on https Step1: Text Normalization Step2: Doc2Vec Model see http Step3: Model Training train multiple epochs with decreasing learning rate. Step4: Model results Word Embeddings Step5: Document Representation Step6: Similar Documents Step7: Compute vector for existing document Infer vector and use it as positive example (see also #https Step8: Compute vector for new document
Python Code: import re # Regular Expressions import os.path # File Operations import pandas as pd # DataFrames & Manipulation from gensim.models.doc2vec import LabeledSentence, Doc2Vec # Model training train_input = "../data/recipes.tsv.bz2" # preserve empty strings (http://pandas-docs.github.io/pandas-docs-travis/io.html#na-values) train = pd.read_csv(train_input, delimiter="\t", quoting=3, encoding="utf-8", keep_default_na=False) print "loaded %d documents." % len(train) Explanation: Doc2Vec trained on recipe instructions Objectives Create word embeddings for recipes. Use word vectors for (traditional) segmentation, classification, and retrieval of recipes. Based on https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/doc2vec-IMDB.ipynb Data Preparation End of explanation def normalize(text): norm_text = text.lower() for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']: norm_text = norm_text.replace(char, ' ' + char + ' ') return norm_text sentences = [LabeledSentence(normalize(text).split(), [i]) for i, text in enumerate(train['instructions'])] print "%d sentences in corpus" % len(sentences) Explanation: Text Normalization End of explanation dist_memory = 1 # distributed memory model vector_mean = 1 # compute mean of input word vectors num_features = 300 # word vector dimensionality min_word_count = 2 # minimum word count num_workers = 4 # number of threads to run in parallel context = 10 # context window size downsampling = 1e-3 # downsample setting for frequent words model_name = "model-d2v_dm_%dfeatures_%dminwords_%dcontext" % (num_features, min_word_count, context) # load model or create new one if os.path.isfile(model_name): model = Doc2Vec.load(model_name) do_train = False else: model = Doc2Vec(dm=1, dm_mean=1, size=num_features, min_count=min_word_count, window=context, sample=downsampling, workers=num_workers) model.build_vocab(sentences) do_train = True Explanation: Doc2Vec Model see http://radimrehurek.com/gensim/models/doc2vec.html class gensim.models.doc2vec.Doc2Vec( documents=None, # list of TaggedDocument elements dm=1, # training algorithm. dm=1: 'distributed memory' (PV-DM). otherwise, 'distributed bag of words' (PV-DBOW). dbow_words=0, # 0 (default), if 1, trains word-vectors simultaneous with DBOW doc-vector dm_mean=None, # 0 (default), if 1, use the mean of context word vectors instead of sum. dm_concat=0, # 0 (default), if 1, use concatenation of (all) context vectors (slow). 
dm_tag_count=1, # 1 (default), expected document tags per document, when using dm_concat mode docvecs=None, docvecs_mapfile=None, comment=None, trim_rule=None, **kwargs ) Model Setup End of explanation import logging from random import shuffle from datetime import datetime # configure usedful logging messages logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) def train_model(model, sentences, passes=10, alpha=0.025, min_alpha=0.001): alpha_delta = (alpha - min_alpha) / passes print("START %s" % datetime.now()) for epoch in range(passes): shuffle(sentences) # shuffling gets best results model.alpha, model.min_alpha = alpha, alpha model.train(sentences) print("finished epoch %d (alpha: %f) - %s" % (epoch + 1, alpha, datetime.now())) alpha -= alpha_delta print("END %s" % str(datetime.now())) if do_train: train_model(model, sentences, passes=30) # finalize model to save memory #model.delete_temporary_training_data(keep_doctags_vectors=True, keep_inference=True) # save model model.save(model_name) Explanation: Model Training train multiple epochs with decreasing learning rate. End of explanation model.wv.most_similar(["pasta"], topn=20) Explanation: Model results Word Embeddings End of explanation model.docvecs[1] Explanation: Document Representation End of explanation recipe_no = 42 ids = model.docvecs.most_similar(recipe_no, topn=20) ids train.loc[[recipe_no]+[id for id, score in ids]][['title','instructions']] Explanation: Similar Documents End of explanation doc = train['instructions'][recipe_no] wordvec = model.infer_vector(normalize(doc).split()) ids = model.docvecs.most_similar(positive=[wordvec], topn=20) ids train.loc[[id for id, score in ids]][['title','instructions']] Explanation: Compute vector for existing document Infer vector and use it as positive example (see also #https://groups.google.com/forum/#!msg/gensim/IH_u8HYVbpg/w9TX4yh2DgAJ) End of explanation doc = u"Wodka, Cointreau, Limettensaft, Cranberrysaft und Eis." wordvec = model.infer_vector(normalize(doc).split()) ids = model.docvecs.most_similar(positive=[wordvec], topn=20) ids train.loc[[id for id, score in ids]][['title','instructions']] Explanation: Compute vector for new document End of explanation
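The gensim API used in this notebook has since changed; the sketch below shows roughly how the same pipeline looks under gensim 4.x (an assumption about the installed version: LabeledSentence was replaced by TaggedDocument, size by vector_size, and model.docvecs by model.dv), so it is not a drop-in replacement for the code above.

from gensim.models.doc2vec import TaggedDocument, Doc2Vec

# A tiny stand-in corpus for the recipe instructions.
docs = ["boil the pasta and add salt", "mix vodka with cranberry juice and ice"]
corpus = [TaggedDocument(d.lower().split(), [i]) for i, d in enumerate(docs)]

model = Doc2Vec(dm=1, dm_mean=1, vector_size=300, min_count=1, window=10, workers=4)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=30)

vec = model.infer_vector("cook noodles in salted water".lower().split())
print(model.dv.most_similar([vec], topn=2))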
9,223
Given the following text description, write Python code to implement the functionality described below step by step Description: A Workshop Introduction to NumPy The Python language is an excellent tool for general-purpose programming, with a highly readable syntax, rich and powerful data types (strings, lists, sets, dictionaries, arbitrary length integers, etc) and a very comprehensive standard library. It was not, however, designed specifically for mathematical and scientific computing. Neither the language nor its standard library have facilities for the efficient representation of multidimensional datasets, tools for linear algebra and general matrix manipulations (essential building blocks of virtually all scientific computing). For example, Python lists are very flexible containers that can be nested arbitrarily deep and can hold any Python object in them, but they are poorly suited to represent common mathematical constructs like vectors and matrices. It is for this reason that NumPy exists. Workshop Aims The aim of this workshop is to enable you use NumPy to Step1: Documentation Here is a link to the NumPy documentation for v1.11 Step2: Exercise Part 1 Use the code cells below to explore features of this array. What is the type of the array created above? (Hint Step3: You can also index multidimensional arrays using an enhanced indexing syntax, which allows for multi-element indexing. (Sometimes this is referred to as "extended slicing".) Remember that Python uses zero-based indexing! Step4: If you only provide one index to slice a multidimensional array, then the slice will be expanded to " Step5: This is known as ellipsis. Ellipsis can be specified explicitly using "...", which automatically expands to " Step6: Boolean Indexing NumPy provides syntax to index conditionally, based on the data in the array. You can pass in an array of True and False values (a boolean array), or, more commonly, a condition that returns a boolean array. Step7: Exercise Part 1 Why do these indexing examples give the stated results? result of arr_2d[1, 0] is 4 result of arr_2d[0] is [1, 2, 3] result of arr_2d[1, 1 Step8: The result we just received points to an important piece of learning, which is that in most cases NumPy arrays behave very differently to Python lists. Let's explore the differences (and some similarities) between the two. dtype A NumPy array has a fixed data type, called dtype. This is the type of all the elements of the array. This is in contrast to Python lists, which can hold elements of different types. Exercise What happens in Python when you add an integer to a float? What happens when you put an integer into a NumPy float array? What happens when you do numerical calculations between arrays of different types? Generating 2D coordinate arrays A common requirement of NumPy arrays is to generate arrays that represent the coordinates of our data. When orthogonal 1d coordinate arrays already exist, NumPy's meshgrid function is very useful Step9: The Array Object Step10: Exercise Explore arithmetic operations between arrays Step11: Broadcasting There are times when you need to perform calculations between NumPy arrays of different sizes. NumPy allows you to do this easily using a powerful piece of functionality called broadcasting. Broadcasting is a way of treating the arrays as if they had the same dimensions and thus have elements all corresponding. It is then easy to perform the calculation, element-wise. 
It does this by matching dimensions in one array to the other where possible, and using repeated values where there is no corresponding dimension in the other array. Rules of Broadcasting Broadcasting applies these three rules Step12: Reshaping arrays to aid broadcasting NumPy allows you to change the shape of an array, so long as the total number of elements in the array does not change. For example, we could reshape a flat array with 12 elements to a 2D array with shape (2, 6), or (3, 2, 2), or even (3, 4, 1). We could not, however, reshape it to have shape (2, 5), because the total number of elements would not be kept constant. Exercise, continued For the failing example above, what reshape operation could you apply to arr2 so that it can be broadcast with arr1? Arithmetic and Broadcasting Step13: Used without any further arguments, statistical functions simply reduce the whole array to a single value. In practice, however, we very often want to calculate statistics over only some of the dimensions. The most common requirement is to calculate a statistic along a single array dimension, while leaving all the other dimensions intact. This is referred to as "collapsing" or "reducing" the chosen dimension. This is done by adding an "axis" keyword specifying the dimension to collapse Step14: Exercise What other similar statistical operations exist (see above link)? A mean value can also be calculated with &lt;array&gt;.mean(). Is that the same thing? Create a 3D array (that could be considered to describe [time, x, y]) and find the mean over all x and y at each timestep. What shape does the result have? Masked Arrays Real-world measurements processes often result in certain datapoint values being uncertain or simply "missing". This is usually indicated by additional data quality information, stored alongside the data values. In these cases we often need to make calculations that count only the valid datapoints. NumPy provides a special "masked array" type for this type of calculation. Here's a link to the documentation for NumPy masked arrays Step15: The mask is applied where the values in the mask array are True. Masked arrays are printed with a double-dash -- denoting the locations in the array where the mask has been applied. The statistics of masked data are different Step16: Note that most file formats represent missing data in a different way, using a distinct "missing data" value appearing in the data. There is special support for converting between this type of representation and NumPy masked arrays. Every masked array has a fill_value property and a filled() method to fill the masked points with the fill value. Exercise Create a masked array from the numbers 0-11, where all the values less than 5 are masked. Create a masked array of positive values, with a value of -1.0 to represent missing points. Look up the masked array creation documentation. What routines exist to produce masked arrays like the ones you've just created more efficiently? Use np.ma.filled() to create a 'plain' (i.e. unmasked) array from a masked array. How can you create a plain array from a masked array, but using a different fill-value for masked points? Try performing a mathematical operation between two masked arrays. What happens to the 'fill_value' properties when you do so? 
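For the masked-array exercise just above, one possible sketch (the routines used are standard numpy.ma helpers; the example values are made up):

import numpy as np

masked_low = np.ma.masked_less(np.arange(12), 5)    # mask every value below 5
print(masked_low)

values = np.array([1.5, -1.0, 2.25, -1.0, 0.5])
masked_missing = np.ma.masked_values(values, -1.0)  # treat -1.0 as the missing-data value
print(masked_missing.mean())                        # statistics ignore the masked points
print(masked_missing.filled(np.nan))                # back to a plain array with a chosen fill value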
Statistics and Masked Arrays Step17: What this means is that if one array (arr) is modified, the other (arr_view) will also be updated Step18: Loops and Vectorised Operations We will now explore calculation performance and consider efficiency in terms of processing time. Firstly let's look at a simple processing time tool that is provided in notebooks; %%timeit Step19: Repeat that, specifying only 100 loops and fastest time of 5 runs Step20: This gives us an easy way to evaluate performance for implementations ...
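To make the view-versus-copy behaviour mentioned in the steps above concrete before the code starts, here is a small illustrative snippet (it assumes a NumPy recent enough to provide np.shares_memory):

import numpy as np

arr = np.arange(8)
view = arr.reshape(2, 4)         # a view: shares memory with arr
copy = arr.reshape(2, 4).copy()  # an independent copy

arr[0] = 1000
print(np.shares_memory(arr, view), view[0, 0])  # True 1000 -> the view sees the change
print(np.shares_memory(arr, copy), copy[0, 0])  # False 0   -> the copy does not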
Python Code: # NumPy is generally imported as 'np'. import numpy as np print(np) print(np.__version__) Explanation: A Workshop Introduction to NumPy The Python language is an excellent tool for general-purpose programming, with a highly readable syntax, rich and powerful data types (strings, lists, sets, dictionaries, arbitrary length integers, etc) and a very comprehensive standard library. It was not, however, designed specifically for mathematical and scientific computing. Neither the language nor its standard library have facilities for the efficient representation of multidimensional datasets, tools for linear algebra and general matrix manipulations (essential building blocks of virtually all scientific computing). For example, Python lists are very flexible containers that can be nested arbitrarily deep and can hold any Python object in them, but they are poorly suited to represent common mathematical constructs like vectors and matrices. It is for this reason that NumPy exists. Workshop Aims The aim of this workshop is to enable you use NumPy to: manipulate numerical arrays, and perform efficient array calculations. Table of Contents Getting Started with NumPy Motivation: what are arrays good for? The Array Object Application: Arithmetic and Broadcasting Application: Statistics and Masked Arrays Application: Efficiency Further Reading Getting Started with NumPy <a class="anchor" id="getting-started"></a> Learning outcome: by the end of this section, you will be able to create NumPy arrays with the aid of the NumPy documentation. NumPy is the fundamental package for scientific computing with Python. Its primary purpose is to provide a powerful N-dimensional array object; the focus for this workshop. To begin with let's import NumPy, check where it is being imported from and check the version. End of explanation arr = np.ones((3, 2, 4)) print("Array shape:", arr.shape) print("Array element dtype:", arr.dtype) Explanation: Documentation Here is a link to the NumPy documentation for v1.11: https://docs.scipy.org/doc/numpy-1.11.0/reference/ There are many online forums with tips and suggestions for solving problems with NumPy, such as http://stackoverflow.com/ Explore: creating arrays NumPy provides many different ways to create arrays. These are listed in the documentation at: https://docs.scipy.org/doc/numpy-1.11.0/user/basics.creation.html#arrays-creation. Exercise Part 1 Try out some of these ways of creating NumPy arrays. See if you can produce: a NumPy array from a list of numbers, a 3-dimensional NumPy array filled with a constant value -- either 0 or 1, a NumPy array filled with a constant value -- not 0 or 1. (Hint: this can be achieved using the last array you created, or you could use np.empty and find a way of filling the array with a constant value), a NumPy array of 8 elements with a range of values starting from 0 and a spacing of 3 between each element, and a NumPy array of 10 elements that are logarithmically spaced. Part 2 How could you change the shape of the 8-element array you created previously to have shape (2, 2, 2)? Hint: this can be done without creating a new array. Motivation: what are arrays good for? <a class="anchor" id="motivation"></a> Learning outcome: by the end of this section, you will understand key benefits of using NumPy arrays for numerical processing in Python. 
Extensive Features NumPy provides routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more. Fast Calculations It is a lot faster than Python alone for numerical computing tasks. Element-by-element operations are the โ€œdefault modeโ€ when an ndarray is involved, but the element-by-element operation is speedily executed by pre-compiled C code. Clear Syntax In NumPy, python c = a * b calculates the element-wise product of a and b, at near-C speeds, but with the code simplicity we expect from the Python language. This demonstrates a core feature of NumPy, called vectorization. This removes the need for loops iterating through elements of arrays, which can make code easier to read, as well as performing fast calculations. Interfacing to other Libraries Many scientific Python libraries use NumPy as their core array representation. From plotting libraries such as matplotlib, to parallel processing libraries such as Dask, to data interoperability libraries such as Iris, NumPy arrays are at the core of how these libraries operate and communicate. The Array Object <a class="anchor" id="array-object"></a> Learning outcome: by the end of this section, you will be able to manipulate NumPy array objects through indexing and explain key features and properties of NumPy arrays. The multidimensional array object is at the core of all of NumPy's functionality. Let's explore this object some more. Array properties Let's create a NumPy array and take a look at some of its properties. End of explanation arr = np.array([1, 2, 3, 4, 5, 6]) print(arr) print("arr[2] = {}".format(arr[2])) print("arr[2:5] = {}".format(arr[2:5])) print("arr[::2] = {}".format(arr[::2])) Explanation: Exercise Part 1 Use the code cells below to explore features of this array. What is the type of the array created above? (Hint: you can find the type of an object in Python using type(&lt;object&gt;), where &lt;object&gt; is replaced with the name of the variable containing the object.) Look this type up in the NumPy documentation (see https://docs.scipy.org/doc/numpy-1.11.0/reference/generated/numpy.ndarray.html#numpy.ndarray). Part 2 Consider the following NumPy array properties: ndim, nbytes, size, T. For each of these properties: Use information from the array documentation to find out more about each array property. Apply each property to the array arr defined above. Can you explain the results you get in each case? Indexing You can index NumPy arrays in the same way as other Python objects, by using square brackets []. This means we can index to retrieve a single element, multiple consecutive elements, or a more complex sequence: End of explanation lst_2d = [[1, 2, 3], [4, 5, 6]] arr_2d = arr.reshape(2, 3) print("2D list:") print(lst_2d) print("2D array:") print(arr_2d) print("Single array element:") print(arr_2d[1, 2]) print("Single row:") print(arr_2d[1]) print("First two columns:") print(arr_2d[:, :2]) Explanation: You can also index multidimensional arrays using an enhanced indexing syntax, which allows for multi-element indexing. (Sometimes this is referred to as "extended slicing".) Remember that Python uses zero-based indexing! 
End of explanation print('Second row: {} is equivalent to {}'.format(arr_2d[1], arr_2d[1, :])) Explanation: If you only provide one index to slice a multidimensional array, then the slice will be expanded to ":" for all of the remaining dimensions: End of explanation arr1 = np.empty((4, 6, 3)) print('Original shape: ', arr1.shape) print(arr1[...].shape) print(arr1[..., 0:2].shape) print(arr1[2:4, ..., ::2].shape) print(arr1[2:4, :, ..., ::-1].shape) Explanation: This is known as ellipsis. Ellipsis can be specified explicitly using "...", which automatically expands to ":" for each dimension unspecified in the slice: End of explanation print(arr_2d) bools = arr_2d % 2 == 0 print(bools) print(arr_2d[bools]) Explanation: Boolean Indexing NumPy provides syntax to index conditionally, based on the data in the array. You can pass in an array of True and False values (a boolean array), or, more commonly, a condition that returns a boolean array. End of explanation print(lst_2d[0:2][1]) print(arr_2d[0:2, 1]) Explanation: Exercise Part 1 Why do these indexing examples give the stated results? result of arr_2d[1, 0] is 4 result of arr_2d[0] is [1, 2, 3] result of arr_2d[1, 1:] is [5, 6] result of arr_2d[0:, ::2] is [[1, 3], [4, 6]] Part 2 How would you index arr_2d to retrieve: the third value: resulting in 3 the second row: resulting in [4 5 6] the first column: resulting in [1 4] the first column, retaining the outside dimension: resulting in [[1] [4]] only values greater than or equal to 3: resulting in [3 4 5 6] Arrays are not lists Question: why do the following examples produce different results? End of explanation x = np.linspace(0, 9, 3) y = np.linspace(4, 8, 3) x2d, y2d = np.meshgrid(x, y) print(x2d) print(y2d) Explanation: The result we just received points to an important piece of learning, which is that in most cases NumPy arrays behave very differently to Python lists. Let's explore the differences (and some similarities) between the two. dtype A NumPy array has a fixed data type, called dtype. This is the type of all the elements of the array. This is in contrast to Python lists, which can hold elements of different types. Exercise What happens in Python when you add an integer to a float? What happens when you put an integer into a NumPy float array? What happens when you do numerical calculations between arrays of different types? Generating 2D coordinate arrays A common requirement of NumPy arrays is to generate arrays that represent the coordinates of our data. When orthogonal 1d coordinate arrays already exist, NumPy's meshgrid function is very useful: End of explanation arr1 = np.arange(4) arr2 = np.arange(4) print('{} + {} = {}'.format(arr1, arr2, arr1 + arr2)) Explanation: The Array Object: Summary of key points properties : shape, dtype. arrays are homogeneous; all elements have the same type: dtype. indexing arrays to produce further arrays: views on the original arrays. multi-dimensional indexing (slicing) and boolean indexing. combinations to form 2D arrays: meshgrid Application: Arithmetic and Broadcasting<a class="anchor" id="app-calc"></a> Learning outcome: by the end of this section you will be able to explain how broadcasting allows mathematical operations between NumPy arrays. Elementwise Arithmetic <a class="anchor" id="arithmetic_and_broadcasting"></a> You can use NumPy to perform arithmetic operations between two arrays in an element-by-element fashion. 
End of explanation arr = np.arange(4) const = 5 print("Original array: {}".format(arr)) print("") print("Array + const: {}".format(arr + const)) Explanation: Exercise Explore arithmetic operations between arrays: Create a number of arrays of different shapes and dtypes. Make sure some of them include values of 0. Make use of all the basic Python mathematical operators (i.e. +, -, *, /, //, %). What operations work? What operations do not work? Can you explain why operations do or do not work? It makes intrinsic sense that you should be able to add a constant to all values in an array: End of explanation arr1 = np.ones((2, 3)) arr2 = np.ones((2, 1)) # (arr1 + arr2).shape arr1 = np.ones((2, 3)) arr2 = np.ones(3) # (arr1 + arr2).shape arr1 = np.ones((1, 3)) arr2 = np.ones((2, 1)) # (arr1 + arr2).shape arr1 = np.ones((1, 3)) arr2 = np.ones((1, 2)) # (arr1 + arr2).shape Explanation: Broadcasting There are times when you need to perform calculations between NumPy arrays of different sizes. NumPy allows you to do this easily using a powerful piece of functionality called broadcasting. Broadcasting is a way of treating the arrays as if they had the same dimensions and thus have elements all corresponding. It is then easy to perform the calculation, element-wise. It does this by matching dimensions in one array to the other where possible, and using repeated values where there is no corresponding dimension in the other array. Rules of Broadcasting Broadcasting applies these three rules: If the two arrays differ in their number of dimensions, the shape of the array with fewer dimensions is padded with ones on its leading (left) side. If the shape of the two arrays does not match in any dimension, either array with shape equal to 1 in a given dimension is stretched to match the other shape. If in any dimension the sizes disagree and neither has shape equal to 1, an error is raised. Note that all of this happens without ever actually creating the expanded arrays in memory! This broadcasting behavior is in practice enormously powerful, especially given that when NumPy broadcasts to create new dimensions or to 'stretch' existing ones, it doesn't actually duplicate the data. In the example above the operation is carried out as if the scalar 1.5 was a 1D array with 1.5 in all of its entries, but no actual array is ever created. This can save lots of memory in cases when the arrays in question are large. As such this can have significant performance implications. (image source) Exercise For the following cases: What will be the result of adding arr1 to arr2? What will be the shape of the resulting array? What rules of broadcasting are being used? End of explanation a = np.arange(12).reshape((3, 4)) mean = np.mean(a) print(a) print(mean) Explanation: Reshaping arrays to aid broadcasting NumPy allows you to change the shape of an array, so long as the total number of elements in the array does not change. For example, we could reshape a flat array with 12 elements to a 2D array with shape (2, 6), or (3, 2, 2), or even (3, 4, 1). We could not, however, reshape it to have shape (2, 5), because the total number of elements would not be kept constant. Exercise, continued For the failing example above, what reshape operation could you apply to arr2 so that it can be broadcast with arr1? 
Arithmetic and Broadcasting: Summary of key points arithmetic operations are performed in an element-by-element fashion, operations can be performed between arrays of different shapes, the arrays' dimensions are aligned according to fixed rules; where one input lacks a given dimension, values are repeated, reshaping can be used to get arrays to combine as required. Application: Statistics and Masked Arrays <a class="anchor" id="statistics"></a> Learning outcome: By the end of this section, you will be able to apply statistical operations and masked arrays to real-world problems. Statistics Numpy arrays support many common statistical calculations. For a list of common operations, see: https://docs.scipy.org/doc/numpy/reference/routines.statistics.html. The simplest operations consist of calculating a single statistical value from an array of numbers -- such as a mean value, a variance or a minimum. For example: End of explanation print(np.mean(a, axis=1)) Explanation: Used without any further arguments, statistical functions simply reduce the whole array to a single value. In practice, however, we very often want to calculate statistics over only some of the dimensions. The most common requirement is to calculate a statistic along a single array dimension, while leaving all the other dimensions intact. This is referred to as "collapsing" or "reducing" the chosen dimension. This is done by adding an "axis" keyword specifying the dimension to collapse: End of explanation data = np.arange(4) mask = np.array([0, 0, 1, 0]) print('Data: {}'.format(data)) print('Mask: {}'.format(mask)) masked_data = np.ma.masked_array(data, mask) print('Masked data: {}'.format(masked_data)) Explanation: Exercise What other similar statistical operations exist (see above link)? A mean value can also be calculated with &lt;array&gt;.mean(). Is that the same thing? Create a 3D array (that could be considered to describe [time, x, y]) and find the mean over all x and y at each timestep. What shape does the result have? Masked Arrays Real-world measurements processes often result in certain datapoint values being uncertain or simply "missing". This is usually indicated by additional data quality information, stored alongside the data values. In these cases we often need to make calculations that count only the valid datapoints. NumPy provides a special "masked array" type for this type of calculation. Here's a link to the documentation for NumPy masked arrays: https://docs.scipy.org/doc/numpy-1.11.0/reference/maskedarray.generic.html#maskedarray-generic-constructing. To construct a masked array we need some data and a mask. The data can be any kind of NumPy array, but the mask array must contain a boolean-type values only (either True and False or 1 and 0). Let's make each of these and convert them together into a NumPy masked array: End of explanation print('Unmasked average: {}'.format(np.mean(data))) print('Masked average: {}'.format(np.ma.mean(masked_data))) Explanation: The mask is applied where the values in the mask array are True. Masked arrays are printed with a double-dash -- denoting the locations in the array where the mask has been applied. The statistics of masked data are different: End of explanation arr = np.arange(8) arr_view = arr.reshape(2, 4) # Print the "view" array from reshape. print('Before\n{}'.format(arr_view)) # Update the first element of the original array. arr[0] = 1000 # Print the "view" array from reshape again, # noticing the first value has changed. 
print('After\n{}'.format(arr_view)) Explanation: Note that most file formats represent missing data in a different way, using a distinct "missing data" value appearing in the data. There is special support for converting between this type of representation and NumPy masked arrays. Every masked array has a fill_value property and a filled() method to fill the masked points with the fill value. Exercise Create a masked array from the numbers 0-11, where all the values less than 5 are masked. Create a masked array of positive values, with a value of -1.0 to represent missing points. Look up the masked array creation documentation. What routines exist to produce masked arrays like the ones you've just created more efficiently? Use np.ma.filled() to create a 'plain' (i.e. unmasked) array from a masked array. How can you create a plain array from a masked array, but using a different fill-value for masked points? Try performing a mathematical operation between two masked arrays. What happens to the 'fill_value' properties when you do so? Statistics and Masked Arrays: Summary of key points most statistical functions are available in two different forms, as in array.mean() and also np.mean(array), the choice being mostly a question of style. statistical operations operate over, and remove (or "collapse") the array dimensions that they are applied to. an "axis" keyword specifies operation over dimensions : this can be one; multiple; or all. NOTE: not all operations permit operation over specifically selected dimensions Statistical operations are not really part of NumPy itself, but are defined by the higher-level Scipy project. Missing datapoints can be represented using "masked arrays" these are useful for calculation, but usually require converting to another form for data storage Application: Efficiency <a class="anchor" id="efficiency"></a> Learning outcome: by the end of this section, you will be able to apply concepts that allow you to perform fast and efficient calculations on NumPy arrays. Views on Arrays NumPy attempts to not make copies of arrays. Many NumPy operations will produce a reference to an existing array, known as a "view", instead of making a whole new array. For example, indexing and reshaping provide a view of the same memory wherever possible. End of explanation arr = np.arange(8) arr_view = arr.reshape(2, 4).copy() # Print the "view" array from reshape. print('Before\n{}'.format(arr_view)) # Update the first element of the original array. arr[0] = 1000 # Print the "view" array from reshape again, # noticing the first value has changed. print('After\n{}'.format(arr_view)) Explanation: What this means is that if one array (arr) is modified, the other (arr_view) will also be updated : the same memory is being shared. This is a valuable tool which enables the system memory overhead to be managed, which is particularly useful when handling lots of large arrays. The lack of copying allows for very efficient vectorized operations. Remember, this behaviour is automatic in most of NumPy, so it requires some consideration in your code, it can lead to some bugs that are hard to track down. For example, if you are changing some elements of an array that you are using elsewhere, you may want to explicitly copy that array before making changes. If in doubt, you can always copy the data to a different block of memory with the copy() method. 
For example: End of explanation %%timeit x = range(500) Explanation: Loops and Vectorised Operations We will now explore calculation performance and consider efficiency in terms of processing time. Firstly let's look at a simple processing time tool that is provided in notebooks; %%timeit : End of explanation %%timeit -n 100 -r 5 x = range(500) Explanation: Repeat that, specifying only 100 loops and fastest time of 5 runs End of explanation rands = np.random.random(1000000).reshape(100, 100, 100) %%timeit -n 10 -r 5 overPointEightLoop = 0 for i in range(100): for j in range(100): for k in range(100): if rands[i, j, k] > 0.8: overPointEightLoop +=1 %%timeit -n 10 -r 5 overPointEightWhere = rands[rands > 0.8].size Explanation: This gives us an easy way to evaluate performance for implementations ... End of explanation
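The same loop-versus-vectorised comparison can be reproduced outside the notebook; this is a minimal sketch using the standard-library timeit module instead of the %%timeit magic (absolute timings will vary by machine):

import timeit
import numpy as np

rands = np.random.random(1_000_000).reshape(100, 100, 100)

def count_loop():
    total = 0
    for i in range(100):
        for j in range(100):
            for k in range(100):
                if rands[i, j, k] > 0.8:
                    total += 1
    return total

def count_vectorised():
    return np.count_nonzero(rands > 0.8)

print(timeit.timeit(count_loop, number=3))        # slow: a pure-Python triple loop
print(timeit.timeit(count_vectorised, number=3))  # fast: one boolean comparison done in C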
9,224
Given the following text description, write Python code to implement the functionality described below step by step Description: Blind Source Separation with the Shogun Machine Learning Toolbox By Kevin Hughes This notebook illustrates <a href="http Step1: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook. Step2: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here Step3: Now let's load a second audio clip Step4: and a third audio clip Step5: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together! First another nuance - what if the audio clips aren't the same length? The solution I came up with for this was to simply resize them all to the length of the longest signal; the extra length will just be filled with zeros so it won't affect the sound. The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$. Afterwards I plot the mixed signals and create the wavPlayers, have a listen! Step6: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy! Step7: Now let's unmix those signals! In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Approximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper Step8: That's all there is to it! Check out how nicely those signals have been separated and have a listen!
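Before the notebook code below, here is a tiny self-contained sketch of the mixing step described above, using synthetic sources in place of the Starcraft clips (which are assumed not to be at hand): the sources are stacked as the rows of $S$ and mixed with $X = AS$.

import numpy as np

t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 440 * t)         # a pure tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))  # a square-ish wave
S = np.vstack([s1, s2])                  # one source per row

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])               # mixing matrix
X = A.dot(S)                             # each row of X is one "microphone" recording
print(X.shape)                           # (2, 8000): as many mixtures as sources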
Python Code: import numpy as np import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import wavfile from scipy.signal import resample import shogun as sg def load_wav(filename,samplerate=44100): # load file rate, data = wavfile.read(filename) # convert stereo to mono if len(data.shape) > 1: data = data[:,0]/2 + data[:,1]/2 # re-interpolate samplerate ratio = float(samplerate) / float(rate) data = resample(data, int(len(data) * ratio)) return samplerate, data.astype(np.int16) Explanation: Blind Source Separation with the Shogun Machine Learning Toolbox By Kevin Hughes This notebook illustrates <a href="http://en.wikipedia.org/wiki/Blind_signal_separation">Blind Source Seperation</a>(BSS) on audio signals using <a href="http://en.wikipedia.org/wiki/Independent_component_analysis">Independent Component Analysis</a> (ICA) in Shogun. We generate a mixed signal and try to seperate it out using Shogun's implementation of ICA & BSS called <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1Jade.html">JADE</a>. My favorite example of this problem is known as the cocktail party problem where a number of people are talking simultaneously and we want to separate each persons speech so we can listen to it separately. Now the caveat with this type of approach is that we need as many mixtures as we have source signals or in terms of the cocktail party problem we need as many microphones as people talking in the room. Let's get started, this example is going to be in python and the first thing we are going to need to do is load some audio files. To make things a bit easier further on in this example I'm going to wrap the basic scipy wav file reader and add some additional functionality. First I added a case to handle converting stereo wav files back into mono wav files and secondly this loader takes a desired sample rate and resamples the input to match. This is important because when we mix the two audio signals they need to have the same sample rate. End of explanation from IPython.display import Audio from IPython.display import display def wavPlayer(data, rate): display(Audio(data, rate=rate)) Explanation: Next we're going to need a way to play the audio files we're working with (otherwise this wouldn't be very exciting at all would it?). In the next bit of code I've defined a wavPlayer class that takes the signal and the sample rate and then creates a nice HTML5 webplayer right inline with the notebook. End of explanation # change to the shogun-data directory import os os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica')) %matplotlib inline import matplotlib.pyplot as plt # load fs1,s1 = load_wav('tbawht02.wav') # Terran Battlecruiser - "Good day, commander." # plot plt.figure(figsize=(6.75,2)) plt.plot(s1) plt.title('Signal 1') plt.show() # player wavPlayer(s1, fs1) Explanation: Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: http://wavs.unclebubby.com/computer/starcraft/ among other places on the web or from your Starcraft install directory (come on I know its still there). Another good source of data (although lets be honest less cool) is ICA central and various other more academic data sets: http://perso.telecom-paristech.fr/~cardoso/icacentral/base_multi.html. 
Note that for lots of these data sets the data will be mixed already so you'll be able to skip the next few steps. Okay lets load up an audio file. I chose the Terran Battlecruiser saying "Good Day Commander". In addition to the creating a wavPlayer I also plotted the data using Matplotlib (and tried my best to have the graph length match the HTML player length). Have a listen! End of explanation # load fs2,s2 = load_wav('TMaRdy00.wav') # Terran Marine - "You want a piece of me, boy?" # plot plt.figure(figsize=(6.75,2)) plt.plot(s2) plt.title('Signal 2') plt.show() # player wavPlayer(s2, fs2) Explanation: Now let's load a second audio clip: End of explanation # load fs3,s3 = load_wav('PZeRdy00.wav') # Protoss Zealot - "My life for Aiur!" # plot plt.figure(figsize=(6.75,2)) plt.plot(s3) plt.title('Signal 3') plt.show() # player wavPlayer(s3, fs3) Explanation: and a third audio clip: End of explanation # Adjust for different clip lengths fs = fs1 length = max([len(s1), len(s2), len(s3)]) s1 = np.resize(s1, (length,1)) s2 = np.resize(s2, (length,1)) s3 = np.resize(s3, (length,1)) S = (np.c_[s1, s2, s3]).T # Mixing Matrix #A = np.random.uniform(size=(3,3)) #A = A / A.sum(axis=0) A = np.array([[1, 0.5, 0.5], [0.5, 1, 0.5], [0.5, 0.5, 1]]) print('Mixing Matrix:') print(A.round(2)) # Mix Signals X = np.dot(A,S) # Mixed Signal i for i in range(X.shape[0]): plt.figure(figsize=(6.75,2)) plt.plot((X[i]).astype(np.int16)) plt.title('Mixed Signal %d' % (i+1)) plt.show() wavPlayer((X[i]).astype(np.int16), fs) Explanation: Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together! First another nuance - what if the audio clips aren't the same lenth? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be filled with zeros so it won't affect the sound. The signals are mixed by creating a mixing matrix $A$ and taking the dot product of $A$ with the signals $S$. Afterwards I plot the mixed signals and create the wavPlayers, have a listen! End of explanation # Convert to features for shogun mixed_signals = sg.create_features((X).astype(np.float64)) Explanation: Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy! End of explanation # Separating with JADE jade = sg.create_transformer('Jade') jade.fit(mixed_signals) signals = jade.transform(mixed_signals) S_ = signals.get('feature_matrix') A_ = jade.get('mixing_matrix') A_ = A_ / A_.sum(axis=0) print('Estimated Mixing Matrix:') print(A_) Explanation: Now lets unmix those signals! In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Aproximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail on how JADE works behind the scenes but here is the reference for the original paper: Cardoso, J. F., & Souloumiac, A. (1993). Blind beamforming for non-Gaussian signals. In IEE Proceedings F (Radar and Signal Processing) (Vol. 140, No. 6, pp. 362-370). IET Digital Library. Shogun also has several other ICA algorithms including the Second Order Blind Identification (SOBI) algorithm, FFSep, JediSep, UWedgeSep and FastICA. 
All of the algorithms inherit from the ICAConverter base class and share some common methods for setting an initial guess for the mixing matrix, retrieving the final mixing matrix and getting/setting the number of iterations to run and the desired convergence tolerance. Some of the algorithms have additional getters for intermediate calculations, for example Jade has a method for returning the 4th order cumulant tensor while the "Sep" algorithms have a getter for the time lagged covariance matrices. Check out the source code on GitHub (https://github.com/shogun-toolbox/shogun) or the Shogun docs (http://www.shogun-toolbox.org/doc/en/latest/annotated.html) for more details! End of explanation # Show separation results # Separated Signal i gain = 4000 for i in range(S_.shape[0]): plt.figure(figsize=(6.75,2)) plt.plot((gain*S_[i]).astype(np.int16)) plt.title('Separated Signal %d' % (i+1)) plt.show() wavPlayer((gain*S_[i]).astype(np.int16), fs) Explanation: That's all there is to it! Check out how nicely those signals have been separated and have a listen! End of explanation
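If Shogun is not installed, a comparable separation can be sketched with scikit-learn's FastICA (an assumed substitute; it is a different ICA algorithm than JADE, so the recovered sources and mixing matrix will not match exactly and may come back scaled or permuted):

import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for the mixed Starcraft signals: X holds one mixture per row.
t = np.linspace(0, 1, 8000)
S = np.vstack([np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 11 * t)])
X = np.array([[1.0, 0.5], [0.5, 1.0]]).dot(S)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X.T).T  # scikit-learn wants samples in rows, hence the transposes
A_est = ica.mixing_               # estimated mixing matrix
print(S_est.shape, A_est.shape)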
9,225
Given the following text description, write Python code to implement the functionality described below step by step Description: Integration Exercise 1 Imports Step2: Trapezoidal rule The trapezoidal rule generates a numerical approximation to the 1d integral Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy import integrate Explanation: Integration Exercise 1 Imports End of explanation integrate.quad? def trapz(f, a, b, N): Integrate the function f(x) over the range [a,b] with N points. h=(b-a)/N integral=0 while a<b: integral=f(a)*h+(f(a+h)-f(a))*h/2+integral a=a+h return integral f = lambda x: x**2 g = lambda x: np.sin(x) %%timeit trapz(f, 0, 1, 1000) I = trapz(f, 0, 1, 1000) assert np.allclose(I, 0.33333349999999995) J = trapz(g, 0, np.pi, 1000) assert np.allclose(J, 1.9999983550656628) Explanation: Trapezoidal rule The trapezoidal rule generates a numerical approximation to the 1d integral: $$ I(a,b) = \int_a^b f(x) dx $$ by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$: $$ h = (b-a)/N $$ Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points. Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points). End of explanation integ1, err1 = integrate.quad(f,0.0,1.0) print(integ1) print(err1) integ2, err2=integrate.quad(g,0.0,np.pi) print(integ2) print(err2) assert True # leave this cell to grade the previous one Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. End of explanation
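A vectorised variant of the same rule is a useful cross-check on the loop above (a sketch; it evaluates f once on all N+1 points and halves the endpoint weights, so it should land close to the assert values in the cell above):

import numpy as np

def trapz_vec(f, a, b, N):
    # Evaluate f at the N+1 equally spaced points and weight the two endpoints by 1/2.
    x = np.linspace(a, b, N + 1)
    y = f(x)
    h = (b - a) / N
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

f = lambda x: x**2
g = lambda x: np.sin(x)
print(trapz_vec(f, 0, 1, 1000))      # ~0.3333335
print(trapz_vec(g, 0, np.pi, 1000))  # ~1.9999984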
9,226
Given the following text description, write Python code to implement the functionality described below step by step Description: plotly plotly has recently become open source; it provides a large gallery of JavaScript graphs. plotly also offers to host dashboards built with plotly. The first script usually returns an exception Step1: pandas and plotly Step2: javascript version Step3: javascript with custom button Step4: offline example
Python Code: from jyquickhelper import add_notebook_menu add_notebook_menu() Explanation: plotly plotly has recently become open source, it proposes a large gallery of javascript graphs. plotly also offers to host dashboards built with plotly. The first script usually returns an exception: But there exists an offline mode. documentation source installation tutorial gallerie Autres liens : styles de texte en python ou styles de text en javascript End of explanation import cufflinks # don't forget that from plotly.offline import init_notebook_mode init_notebook_mode(connected=True) from sklearn.datasets import load_iris import pandas data = load_iris() df = pandas.DataFrame(data["data"]) df.head() # df.iplot() # issue with PlotlyLocalCredentialsError Explanation: pandas and plotly End of explanation %%javascript require.config({ paths: { plotly: 'https://cdn.plot.ly/plotly-latest.min.js' } }); %%html <div id="myDiv"></div> %%javascript var x = []; for (var i = 0; i < 500; i ++) { x[i] = Math.random(); } var data = [ { x: x, type: 'histogram' } ]; Plotly.newPlot('myDiv', data); Explanation: version javacript End of explanation %%html <div id="myDiv2" style="width:800px;height:400px;"></div> <div class="hideshow" id="top" style="margin-left:80px;"> <button style=";background:fuchsia;">Toggle Fuchsia</button> </div> <div class="hideshow" id="bottom" style="margin-left:80px;"> <button style="background:#FFD966;">Toggle Yellow</button </div> %%javascript var d3 = Plotly.d3, y1 = d3.range(100).map(d3.random.normal(6)), y2 = d3.range(100).map(d3.random.normal(9)), layout = { title:'Click buttons to toggle traces', showlegend:false }, data = [{ y:y1, marker: {color: 'fuchsia'} }, { y:y2, marker: {color: '#FFD966'} }]; Plotly.plot("myDiv2", data, layout); $('.hideshow button').click(function(){ var btn_id = this.parentNode.id, data_index = ( btn_id === 'top' ) ? 0 : 1, myDiv2 = document.getElementById("myDiv2"), visible = myDiv2.data[data_index].visible; if( visible === undefined ) visible = true; Plotly.restyle("myDiv2", 'visible', !visible, data_index); }); Explanation: javascript with custom button End of explanation from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot # connected=True pour une utilisation ultรฉrieure sous forme de HTML # connected=False pour une utiliation uniquement locale init_notebook_mode(connected=True) from plotly.graph_objs import Box from plotly.offline import iplot import numpy as np iplot([Box(y = np.random.randn(50), showlegend=False) for i in range(45)], show_link=False) Explanation: exemple offline End of explanation
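More recent plotly releases also bundle plotly.express, which can draw a comparable figure with less boilerplate (a sketch, assuming plotly 4+ where offline rendering is the default and no account is needed):

import numpy as np
import plotly.express as px

fig = px.histogram(x=np.random.random(500), nbins=30, title="random values")
fig.show()  # renders inline in the notebook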
9,227
Given the following text description, write Python code to implement the functionality described below step by step Description: Is there a relationship between GDP per capita and PISA scores? July 2015 Written by Susan Chen at NYU Stern with help from Professor David Backus Contact Step1: Creating the Dataset PISA 2012 scores are downloaded as an Excel file from the statlink on page 21 of the published PISA key findings. I deleted the explanatory text surrounding the table. I kept only the "Mean Score in PISA 2012" column for each subject and then saved the file as a CSV. Then, I read the file into pandas and renamed the columns. Step2: Excluding Outliers I initially plotted the data and ran the regression without excluding any outliers. The resulting r-squared values for reading, math, and science were 0.29, 0.32, and 0.27, respectively. Looking at the scatter plot, there seem to be two obvious outliers, Qatar and Vietnam. I decided to exclude the data for these two countries because the remaining countries do seem to form a trend. I found upon excluding them that the correlation between GDP per capita and scores was much higher. Qatar is an outlier as it placed relatively low, 63rd out of the 65 countries, with a relatively high GDP per capita of about $131,000. Qatar has a high GDP per capita for a country of just 1.8 million people, only 13% of whom are Qatari nationals. Qatar is a high-income economy, as it contains one of the world's largest natural gas and oil reserves. Vietnam is an outlier because it placed relatively high, 17th out of the 65 countries, with a relatively low GDP per capita of about $4,900. Vietnam's high score may be due to the government's investment in education and the uniformity of classroom professionalism and discipline found across the country. At the same time, rote learning is much more emphasized than creative thinking, and it is important to note that many disadvantaged students are forced to drop out; these factors may also help account for the high score. Step3: Plotting the Data I use the log of GDP per capita to plot against each component of the PISA on a scatter plot. Step4: Regression Analysis The OLS regression results indicate an R-squared of 0.57 between reading scores and GDP per capita, 0.63 between math scores and GDP per capita, and 0.57 between science scores and GDP per capita.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np import statsmodels.formula.api as smf from pandas.io import wb Explanation: Is there a relationship between GDP per capita and PISA scores? July 2015 Written by Susan Chen at NYU Stern with help from Professor David Backus Contact: jiachen2017@u.northwestern.edu About PISA Since 2000, the Programme for International Student Assessment (PISA) has been administered every three years to evaluate education systems around the world. It also gathers family and education background information through surveys. The test, which assesses 15-year-old students in reading, math, and science, is administered to a total of around 510,000 students in 65 countries. The duration of the test is two hours, and it contains a mix of open-ended and multiple-choice questions. Learn more about the test here. I am interested in seeing if there is a correlation between a nation's wealth and its PISA scores. Do wealthier countries generally attain higher scores, and if so, to what extent? I am using GDP per capita as the economic measure of wealth because raw GDP is sensitive to population size; GDP per capita should, in theory, allow us to compare larger countries (in terms of geography or population) with smaller ones. Abstract In terms of the correlation between GDP per capita and each component of the PISA, the r-squared values for an OLS regression model, which usually reflect how well the model fits the data, are 0.57, 0.63, and 0.57 for reading, math, and science, respectively. Qatar and Vietnam, outliers, are excluded from the model. Packages Imported I use matplotlib.pyplot to plot scatter plots. I use pandas, a Python package that allows for fast data manipulation and analysis, to organize my dataset. I access World Bank data through the remote data access API for pandas, pandas.io. I also use numpy, a Python package for scientific computing, for the mathematical calculations that were needed to fit the data more appropriately. Lastly, I use statsmodels.formula.api, a Python module used for a variety of statistical computations, for running an OLS linear regression. End of explanation file1 = '/users/susan/desktop/PISA/PISA2012clean.csv' # file location df1 = pd.read_csv(file1) #pandas remote data access API for World Bank GDP per capita data df2 = wb.download(indicator='NY.GDP.PCAP.PP.KD', country='all', start=2012, end=2012) df1 #drop multilevel index df2.index = df2.index.droplevel('year') df1.columns = ['Country','Math','Reading','Science'] df2.columns = ['GDPpc'] #combine PISA and GDP datasets based on country column df3 = pd.merge(df1, df2, how='left', left_on = 'Country', right_index = True) df3.columns = ['Country','Math','Reading','Science','GDPpc'] #drop rows with missing GDP per capita values df3 = df3[pd.notnull(df3['GDPpc'])] print(df3) Explanation: Creating the Dataset PISA 2012 scores are downloaded as an Excel file from the statlink on page 21 of the published PISA key findings. I deleted the explanatory text surrounding the table. I kept only the "Mean Score in PISA 2012" column for each subject and then saved the file as a CSV. Then, I read the file into pandas and renamed the columns.
End of explanation df3.index = df3.Country #set country column as the index df3 = df3.drop(['Qatar', 'Vietnam']) # drop the two outliers Explanation: Excluding Outliers I initially plotted the data and ran the regression without excluding any outliers. The resulting r-squared values for reading, math, and science were 0.29, 0.32, and 0.27, respectively. Looking at the scatter plot, there seem to be two obvious outliers, Qatar and Vietnam. I decided to exclude the data for these two countries because the remaining countries do seem to form a trend. I found upon excluding them that the correlation between GDP per capita and scores was much higher. Qatar is an outlier as it placed relatively low, 63rd out of the 65 countries, with a relatively high GDP per capita of about $131,000. Qatar has a high GDP per capita for a country of just 1.8 million people, only 13% of whom are Qatari nationals. Qatar is a high-income economy, as it contains one of the world's largest natural gas and oil reserves. Vietnam is an outlier because it placed relatively high, 17th out of the 65 countries, with a relatively low GDP per capita of about $4,900. Vietnam's high score may be due to the government's investment in education and the uniformity of classroom professionalism and discipline found across the country. At the same time, rote learning is much more emphasized than creative thinking, and it is important to note that many disadvantaged students are forced to drop out; these factors may also help account for the high score. End of explanation Reading = df3.Reading Science = df3.Science Math = df3.Math GDP = np.log(df3.GDPpc) #PISA reading vs GDP per capita plt.scatter(x = GDP, y = Reading, color = 'r') plt.title('PISA 2012 Reading scores vs. GDP per capita') plt.xlabel('GDP per capita (log)') plt.ylabel('PISA Reading Score') plt.show() #PISA math vs GDP per capita plt.scatter(x = GDP, y = Math, color = 'b') plt.title('PISA 2012 Math scores vs. GDP per capita') plt.xlabel('GDP per capita (log)') plt.ylabel('PISA Math Score') plt.show() #PISA science vs GDP per capita plt.scatter(x = GDP, y = Science, color = 'g') plt.title('PISA 2012 Science scores vs. GDP per capita') plt.xlabel('GDP per capita (log)') plt.ylabel('PISA Science Score') plt.show() Explanation: Plotting the Data I use the log of GDP per capita to plot against each component of the PISA on a scatter plot. End of explanation lm = smf.ols(formula='Reading ~ GDP', data=df3).fit() lm.params lm.summary() lm2 = smf.ols(formula='Math ~ GDP', data=df3).fit() lm2.params lm2.summary() lm3 = smf.ols(formula='Science ~ GDP', data=df3).fit() lm3.params lm3.summary() Explanation: Regression Analysis The OLS regression results indicate an R-squared of 0.57 between reading scores and GDP per capita, 0.63 between math scores and GDP per capita, and 0.57 between science scores and GDP per capita. End of explanation
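If one wanted the three R-squared values programmatically instead of reading them off the summaries, a small helper along the following lines would do it. This is only a sketch: it assumes the df3 frame built earlier in this notebook, and the helper name rsquared_by_subject is my own, not the author's.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def rsquared_by_subject(df3, subjects=('Reading', 'Math', 'Science')):
    # Regress each PISA subject on log GDP per capita and collect the R-squared values.
    data = df3.assign(logGDP=np.log(df3['GDPpc']))
    out = {}
    for subject in subjects:
        fit = smf.ols(formula='{} ~ logGDP'.format(subject), data=data).fit()
        out[subject] = fit.rsquared
    return pd.Series(out)

# Usage (requires df3 from the cells above):
# print(rsquared_by_subject(df3))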
9,228
Given the following text description, write Python code to implement the functionality described below step by step Description: Notes Step1: Here the curve shows the Poisson mean as a function of $M$. Clearly, the data don't sit on the curve, nor should they. But it would be nice to represent the width of the sampling distribution somehow, so we can see how compatible the data are with the model. Among other possibilities, we might do that by also showing curves reflecting the 16th and 84th percentiles of the sampling distribution, in addition to the mean, as a function of $M$ (so that the probability between the curves, at fixed $M$, is 68%). That would look like this Step2: You can see that about $2/3$ of the data lie within these limits. The jaggedness of the lines is due to the fact that, of course, we can only get integers out of the Poisson distribution. Let's now take this to a regime where there would be many more counts in each measurement. Step3: We're now well into the limit where the Poisson (mean $\mu$) distribution begins to resemble the Gaussian distribution (mean $\mu$ and standard deviation $\sqrt \mu$). As we're looking at many more counts, the discreteness of the dashed lines is also harder to see. Now, we might think about displaying this with $F$ rather than the actual measurement, $N$, on the $y$ axis. Of course, we know that the plotted points don't actually correspond to the true flux of each star, only the naive extimate we would make by plugging through the equations at the top of the notebook and ignoring any uncertainty in the model. So let's call it "estimated flux", $\hat{F}$, instead. We'll use the fact that we're now in the Gaussian limit to simply compute the dashed limits on the model prediction as the mean $\pm$ the standard deviation. Step4: In this simple example, apart from the tiny change to the dashed lines, this just corresponds to a rescaling of the $y$ axis. Finally, we could (if we wanted to) remember the form of the Gaussian distribution, $\mathrm{Normal}(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left[ -\frac{(x-\mu)^2}{2\sigma^2} \right]$. Because this density depends on $x$ and $\mu$ only through the distance between them, if we're using $\sigma$ (or some other scale) to represent model uncertainty, we could choose to position that visual cue with the data points rather than with the model. The distance between the data and model in units of that shown uncertainty would, of course, be the same either way. Note that the same statement could not be made when we were in the small-$N$ regime, or at least not made as simply.
Python Code: # get a bunch of imports out of the way import matplotlib.pyplot as plt plt.rc('text', usetex=True) plt.rcParams['xtick.labelsize'] = 'x-large' plt.rcParams['ytick.labelsize'] = 'x-large' import numpy as np import scipy.stats as st %matplotlib inline M = st.uniform.rvs(1.0, 100.0, size=10) F = np.sqrt(M) mu = 2*F N = st.poisson.rvs(mu) Mgrid = np.linspace(M.min(), M.max()) mu_of_Mgrid = 2*np.sqrt(Mgrid) plt.plot(Mgrid, mu_of_Mgrid, label=r'$\mu(M)$'); plt.plot(M, N, 'o', label='data'); plt.xlabel(r'$M$', fontsize='x-large'); plt.ylabel(r'$N$', fontsize='x-large'); plt.legend(fontsize='x-large'); Explanation: Notes: So, about those error bars... In which we will make sense of the common delusion of error bars. The Poisson example we've used so far when thinking about generative models and Bayes' Law is one where the idea of an error bar doesn't really apply. We measure a certain number of counts, period. That's it. It doesn't make sense to say that we measured, as the case may be, 5 $\pm$ something (even more so if that something is not an integer), because literally what we measured was the number 5. But error bars are enough of a thing that even the most pedantic among us will talk about them fairly often. So what's the deal? Let's look at a contrived example to sketch it out. Specifically, let's expand on the previous Poisson example by imagining we (a) measure a number of counts (with the same telescope and exposure time) from many stars with different masses, and (b) live in magical universe where stars have name tags that tell us their precise masses. We're interested in how luminosity, which is directly proportional to the expectation value for number of counts, relates to this magically known mass. In generative terms, * masses $M_i$ are known somehow, * fluxes $F_i \Leftarrow$ $M_i$, * expected counts $\mu_i \Leftarrow F_i$, * measured counts $N_i \sim \mathrm{Poisson}(\mu_i)$. For simplicity, let's implement the details as * Choose $M_i$, * Let $F_i=M_i^{0.5}$, * Let $\mu_i = 2F_i$. Here's a made-up data set. End of explanation Nlower = st.poisson.ppf(0.16, mu_of_Mgrid) Nupper = st.poisson.ppf(0.84, mu_of_Mgrid) plt.plot(Mgrid, mu_of_Mgrid, label=r'$\mu(M)$'); plt.plot(M, N, 'o', label='data'); plt.plot(Mgrid, Nlower, '--', color='C0'); plt.plot(Mgrid, Nupper, '--', color='C0'); plt.xlabel(r'$M$', fontsize='x-large'); plt.ylabel(r'$N$', fontsize='x-large'); plt.legend(fontsize='x-large'); Explanation: Here the curve shows the Poisson mean as a function of $M$. Clearly, the data don't sit on the curve, nor should they. But it would be nice to represent the width of the sampling distribution somehow, so we can see how compatible the data are with the model. Among other possibilities, we might do that by also showing curves reflecting the 16th and 84th percentiles of the sampling distribution, in addition to the mean, as a function of $M$ (so that the probability between the curves, at fixed $M$, is 68%). 
That would look like this: End of explanation M = st.uniform.rvs(1000.0, 10000.0, size=10) F = np.sqrt(M) mu = 2*F N = st.poisson.rvs(mu) Mgrid = np.linspace(M.min(), M.max()) mu_of_Mgrid = 2*np.sqrt(Mgrid) Nlower = st.poisson.ppf(0.16, mu_of_Mgrid) Nupper = st.poisson.ppf(0.84, mu_of_Mgrid) plt.plot(Mgrid, mu_of_Mgrid, label=r'$\mu(M)$'); plt.plot(M, N, 'o', label='data'); plt.plot(Mgrid, Nlower, '--', color='C0'); plt.plot(Mgrid, Nupper, '--', color='C0'); plt.xlabel(r'$M$', fontsize='x-large'); plt.ylabel(r'$N$', fontsize='x-large'); plt.legend(fontsize='x-large'); Explanation: You can see that about $2/3$ of the data lie within these limits. The jaggedness of the lines is due to the fact that, of course, we can only get integers out of the Poisson distribution. Let's now take this to a regime where there would be many more counts in each measurement. End of explanation F_of_Mgrid = mu_of_Mgrid / 2. Flower = (mu_of_Mgrid - np.sqrt(mu_of_Mgrid)) / 2. Fupper = (mu_of_Mgrid + np.sqrt(mu_of_Mgrid)) / 2. Fhat = N / 2. plt.plot(Mgrid, F_of_Mgrid, label=r'$\hat{F}(M)$'); plt.plot(M, Fhat, 'o', label='data'); plt.plot(Mgrid, Flower, '--', color='C0'); plt.plot(Mgrid, Fupper, '--', color='C0'); plt.xlabel(r'$M$', fontsize='x-large'); plt.ylabel(r'$\hat{F}$', fontsize='x-large'); plt.legend(fontsize='x-large'); Explanation: We're now well into the limit where the Poisson (mean $\mu$) distribution begins to resemble the Gaussian distribution (mean $\mu$ and standard deviation $\sqrt \mu$). As we're looking at many more counts, the discreteness of the dashed lines is also harder to see. Now, we might think about displaying this with $F$ rather than the actual measurement, $N$, on the $y$ axis. Of course, we know that the plotted points don't actually correspond to the true flux of each star, only the naive extimate we would make by plugging through the equations at the top of the notebook and ignoring any uncertainty in the model. So let's call it "estimated flux", $\hat{F}$, instead. We'll use the fact that we're now in the Gaussian limit to simply compute the dashed limits on the model prediction as the mean $\pm$ the standard deviation. End of explanation Fhat_err = np.sqrt(mu) / 2. plt.plot(Mgrid, F_of_Mgrid, label=r'$\hat{F}(M)$'); plt.errorbar(M, Fhat, yerr=Fhat_err, fmt='o', label='data'); plt.xlabel(r'$M$', fontsize='x-large'); plt.ylabel(r'$\hat{F}$', fontsize='x-large'); plt.legend(fontsize='x-large'); Explanation: In this simple example, apart from the tiny change to the dashed lines, this just corresponds to a rescaling of the $y$ axis. Finally, we could (if we wanted to) remember the form of the Gaussian distribution, $\mathrm{Normal}(x|\mu,\sigma) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left[ -\frac{(x-\mu)^2}{2\sigma^2} \right]$. Because this density depends on $x$ and $\mu$ only through the distance between them, if we're using $\sigma$ (or some other scale) to represent model uncertainty, we could choose to position that visual cue with the data points rather than with the model. The distance between the data and model in units of that shown uncertainty would, of course, be the same either way. Note that the same statement could not be made when we were in the small-$N$ regime, or at least not made as simply. End of explanation
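A quick numerical check of the claim made above, that for large counts the Poisson 16th/84th percentiles approach the Gaussian band of mean plus or minus sqrt(mean); the chosen mean values are arbitrary, and only the scipy.stats calls already imported in this notebook are used.

import numpy as np
import scipy.stats as st

for mu in (5.0, 50.0, 500.0):
    p_lo, p_hi = st.poisson.ppf([0.16, 0.84], mu)
    g_lo, g_hi = mu - np.sqrt(mu), mu + np.sqrt(mu)
    print("mu=%6.1f  Poisson 16th-84th: [%6.1f, %6.1f]  Gaussian mu+/-sqrt(mu): [%6.1f, %6.1f]"
          % (mu, p_lo, p_hi, g_lo, g_hi))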
9,229
Given the following text description, write Python code to implement the functionality described below step by step Description: DICS for power mapping In this tutorial, we'll simulate two signals originating from two locations on the cortex. These signals will be sinusoids, so we'll be looking at oscillatory activity (as opposed to evoked activity). We'll use dynamic imaging of coherent sources (DICS) Step1: Setup We first import the required packages to run this tutorial and define a list of filenames for various things we'll be using. Step3: Data simulation The following function generates a timeseries that contains an oscillator, whose frequency fluctuates a little over time, but stays close to 10 Hz. We'll use this function to generate our two signals. Step4: Let's simulate two timeseries and plot some basic information about them. Step5: Now we put the signals at two locations on the cortex. We construct a Step6: Before we simulate the sensor-level data, let's define a signal-to-noise ratio. You are encouraged to play with this parameter and see the effect of noise on our results. Step7: Now we run the signal through the forward model to obtain simulated sensor data. To save computation time, we'll only simulate gradiometer data. You can try simulating other types of sensors as well. Some noise is added based on the baseline noise covariance matrix from the sample dataset, scaled to implement the desired SNR. Step8: We create an Step9: Power mapping With our simulated dataset ready, we can now pretend to be researchers that have just recorded this from a real subject and are going to study what parts of the brain communicate with each other. First, we'll create a source estimate of the MEG data. We'll use both a straightforward MNE-dSPM inverse solution for this, and the DICS beamformer which is specifically designed to work with oscillatory data. Computing the inverse using MNE-dSPM Step10: We will now compute the cortical power map at 10 Hz. using a DICS beamformer. A beamformer will construct for each vertex a spatial filter that aims to pass activity originating from the vertex, while dampening activity from other sources as much as possible. The Step12: Plot the DICS power maps for both approaches, starting with the first Step13: Now the second
Python Code: # Author: Marijn van Vliet <[email protected]> # # License: BSD (3-clause) Explanation: DICS for power mapping In this tutorial, we'll simulate two signals originating from two locations on the cortex. These signals will be sinusoids, so we'll be looking at oscillatory activity (as opposed to evoked activity). We'll use dynamic imaging of coherent sources (DICS) :footcite:GrossEtAl2001 to map out spectral power along the cortex. Let's see if we can find our two simulated sources. End of explanation import os.path as op import numpy as np from scipy.signal import welch, coherence, unit_impulse from matplotlib import pyplot as plt import mne from mne.simulation import simulate_raw, add_noise from mne.datasets import sample from mne.minimum_norm import make_inverse_operator, apply_inverse from mne.time_frequency import csd_morlet from mne.beamformer import make_dics, apply_dics_csd # We use the MEG and MRI setup from the MNE-sample dataset data_path = sample.data_path(download=False) subjects_dir = op.join(data_path, 'subjects') # Filenames for various files we'll be using meg_path = op.join(data_path, 'MEG', 'sample') raw_fname = op.join(meg_path, 'sample_audvis_raw.fif') fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif') cov_fname = op.join(meg_path, 'sample_audvis-cov.fif') fwd = mne.read_forward_solution(fwd_fname) # Seed for the random number generator rand = np.random.RandomState(42) Explanation: Setup We first import the required packages to run this tutorial and define a list of filenames for various things we'll be using. End of explanation sfreq = 50. # Sampling frequency of the generated signal n_samp = int(round(10. * sfreq)) times = np.arange(n_samp) / sfreq # 10 seconds of signal n_times = len(times) def coh_signal_gen(): Generate an oscillating signal. Returns ------- signal : ndarray The generated signal. t_rand = 0.001 # Variation in the instantaneous frequency of the signal std = 0.1 # Std-dev of the random fluctuations added to the signal base_freq = 10. # Base frequency of the oscillators in Hertz n_times = len(times) # Generate an oscillator with varying frequency and phase lag. signal = np.sin(2.0 * np.pi * (base_freq * np.arange(n_times) / sfreq + np.cumsum(t_rand * rand.randn(n_times)))) # Add some random fluctuations to the signal. signal += std * rand.randn(n_times) # Scale the signal to be in the right order of magnitude (~100 nAm) # for MEG data. signal *= 100e-9 return signal Explanation: Data simulation The following function generates a timeseries that contains an oscillator, whose frequency fluctuates a little over time, but stays close to 10 Hz. We'll use this function to generate our two signals. End of explanation signal1 = coh_signal_gen() signal2 = coh_signal_gen() fig, axes = plt.subplots(2, 2, figsize=(8, 4)) # Plot the timeseries ax = axes[0][0] ax.plot(times, 1e9 * signal1, lw=0.5) ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)', title='Signal 1') ax = axes[0][1] ax.plot(times, 1e9 * signal2, lw=0.5) ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2') # Power spectrum of the first timeseries f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256) ax = axes[1][0] # Only plot the first 100 frequencies ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.) 
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]], ylabel='Power (dB)', title='Power spectrum of signal 1') # Compute the coherence between the two timeseries f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64) ax = axes[1][1] ax.plot(f[:50], coh[:50], lw=1.) ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence', title='Coherence between the timeseries') fig.tight_layout() Explanation: Let's simulate two timeseries and plot some basic information about them. End of explanation # The locations on the cortex where the signal will originate from. These # locations are indicated as vertex numbers. vertices = [[146374], [33830]] # Construct SourceEstimates that describe the signals at the cortical level. data = np.vstack((signal1, signal2)) stc_signal = mne.SourceEstimate( data, vertices, tmin=0, tstep=1. / sfreq, subject='sample') stc_noise = stc_signal * 0. Explanation: Now we put the signals at two locations on the cortex. We construct a :class:mne.SourceEstimate object to store them in. The timeseries will have a part where the signal is active and a part where it is not. The techniques we'll be using in this tutorial depend on being able to contrast data that contains the signal of interest versus data that does not (i.e. it contains only noise). End of explanation snr = 1. # Signal-to-noise ratio. Decrease to add more noise. Explanation: Before we simulate the sensor-level data, let's define a signal-to-noise ratio. You are encouraged to play with this parameter and see the effect of noise on our results. End of explanation # Read the info from the sample dataset. This defines the location of the # sensors and such. info = mne.io.read_info(raw_fname) info.update(sfreq=sfreq, bads=[]) # Only use gradiometers picks = mne.pick_types(info, meg='grad', stim=True, exclude=()) mne.pick_info(info, picks, copy=False) # Define a covariance matrix for the simulated noise. In this tutorial, we use # a simple diagonal matrix. cov = mne.cov.make_ad_hoc_cov(info) cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR # Simulate the raw data, with a lowpass filter on the noise stcs = [(stc_signal, unit_impulse(n_samp, dtype=int) * 1), (stc_noise, unit_impulse(n_samp, dtype=int) * 2)] # stacked in time duration = (len(stc_signal.times) * 2) / sfreq raw = simulate_raw(info, stcs, forward=fwd) add_noise(raw, cov, iir_filter=[4, -4, 0.8], random_state=rand) Explanation: Now we run the signal through the forward model to obtain simulated sensor data. To save computation time, we'll only simulate gradiometer data. You can try simulating other types of sensors as well. Some noise is added based on the baseline noise covariance matrix from the sample dataset, scaled to implement the desired SNR. End of explanation events = mne.find_events(raw, initial_event=True) tmax = (len(stc_signal.times) - 1) / sfreq epochs = mne.Epochs(raw, events, event_id=dict(signal=1, noise=2), tmin=0, tmax=tmax, baseline=None, preload=True) assert len(epochs) == 2 # ensure that we got the two expected events # Plot some of the channels of the simulated data that are situated above one # of our simulated sources. 
picks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal')) epochs.plot(picks=picks) Explanation: We create an :class:mne.Epochs object containing two trials: one with both noise and signal and one with just noise End of explanation # Compute the inverse operator fwd = mne.read_forward_solution(fwd_fname) inv = make_inverse_operator(epochs.info, fwd, cov) # Apply the inverse model to the trial that also contains the signal. s = apply_inverse(epochs['signal'].average(), inv) # Take the root-mean square along the time dimension and plot the result. s_rms = np.sqrt((s ** 2).mean()) title = 'MNE-dSPM inverse (RMS)' brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1, size=600, time_label=title, title=title) # Indicate the true locations of the source activity on the plot. brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh') brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh') # Rotate the view and add a title. brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550, 'focalpoint': [0, 0, 0]}) Explanation: Power mapping With our simulated dataset ready, we can now pretend to be researchers that have just recorded this from a real subject and are going to study what parts of the brain communicate with each other. First, we'll create a source estimate of the MEG data. We'll use both a straightforward MNE-dSPM inverse solution for this, and the DICS beamformer which is specifically designed to work with oscillatory data. Computing the inverse using MNE-dSPM: End of explanation # Estimate the cross-spectral density (CSD) matrix on the trial containing the # signal. csd_signal = csd_morlet(epochs['signal'], frequencies=[10]) # Compute the spatial filters for each vertex, using two approaches. filters_approach1 = make_dics( info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=1., inversion='single', weight_norm=None) print(filters_approach1) filters_approach2 = make_dics( info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=None, inversion='matrix', weight_norm='unit-noise-gain') print(filters_approach2) # You can save these to disk with: # filters_approach1.save('filters_1-dics.h5') # Compute the DICS power map by applying the spatial filters to the CSD matrix. power_approach1, f = apply_dics_csd(csd_signal, filters_approach1) power_approach2, f = apply_dics_csd(csd_signal, filters_approach2) Explanation: We will now compute the cortical power map at 10 Hz. using a DICS beamformer. A beamformer will construct for each vertex a spatial filter that aims to pass activity originating from the vertex, while dampening activity from other sources as much as possible. The :func:mne.beamformer.make_dics function has many switches that offer precise control over the way the filter weights are computed. Currently, there is no clear consensus regarding the best approach. This is why we will demonstrate two approaches here: The approach as described in :footcite:vanVlietEtAl2018, which first normalizes the forward solution and computes a vector beamformer. The scalar beamforming approach based on :footcite:SekiharaNagarajan2008, which uses weight normalization instead of normalizing the forward solution. End of explanation def plot_approach(power, n): Plot the results on a brain. title = 'DICS power map, approach %d' % n brain = power_approach1.plot( 'sample', subjects_dir=subjects_dir, hemi='both', size=600, time_label=title, title=title) # Indicate the true locations of the source activity on the plot. 
brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh', color='b') brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh', color='b') # Rotate the view and add a title. brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550, 'focalpoint': [0, 0, 0]}) return brain brain1 = plot_approach(power_approach1, 1) Explanation: Plot the DICS power maps for both approaches, starting with the first: End of explanation brain2 = plot_approach(power_approach2, 2) Explanation: Now the second: End of explanation
9,230
Given the following text description, write Python code to implement the functionality described below step by step Description: Spatial Model fitting in GLS In this exercise we will fit a linear model using a Spatial structure as covariance matrix. We will use GLS to get better estimators. As always we will need to load the necessary libraries. Step1: Use this to automate the process. Be carefull it can overwrite current results run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35 Importing data We will use the FIA dataset and for exemplary purposes we will take a subsample of this data. Also important. The empirical variogram has been calculated for the entire data set using the residuals of an OLS model. We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS. You can inspect the functions using the ?? symbol. Step2: Now we will obtain the data from the calculated empirical variogram. Step3: restricted w/ all data spatial correlation parameters Log-Likelihood Step4: Instantiating the variogram object Step5: Instantiating theoretical variogram model
Python Code: # Load Biospytial modules and etc. %matplotlib inline import sys sys.path.append('/apps') sys.path.append('..') sys.path.append('../spystats') import django django.setup() import pandas as pd import matplotlib.pyplot as plt import numpy as np ## Use the ggplot style plt.style.use('ggplot') import tools Explanation: Spatial Model fitting in GLS In this exercise we will fit a linear model using a Spatial structure as covariance matrix. We will use GLS to get better estimators. As always we will need to load the necessary libraries. End of explanation from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust section = initAnalysis("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv", "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv", -130,-60,30,40) #section = initAnalysis("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv", # "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv", # -85,-80,30,35) # IN HEC #section = initAnalysis("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv","/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",-85,-80,30,35) section.shape Explanation: Use this to automate the process. Be carefull it can overwrite current results run ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35 Importing data We will use the FIA dataset and for exemplary purposes we will take a subsample of this data. Also important. The empirical variogram has been calculated for the entire data set using the residuals of an OLS model. We will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS. You can inspect the functions using the ?? symbol. End of explanation gvg,tt = loadVariogramFromData("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",section) gvg.plot(refresh=False,with_envelope=True) resum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=1,distance_threshold=1000000) resum.as_text Explanation: Now we will obtain the data from the calculated empirical variogram. End of explanation plt.plot(resultspd.rsq) plt.title("GLS feedback algorithm") plt.xlabel("Number of iterations") plt.ylabel("R-sq fitness estimator") resultspd.columns a = map(lambda x : x.to_dict(), resultspd['params']) paramsd = pd.DataFrame(a) paramsd plt.plot(paramsd.Intercept.loc[1:]) plt.get_yaxis().get_major_formatter().set_useOffset(False) fig = plt.figure(figsize=(10,10)) plt.plot(paramsd.logSppN.iloc[1:]) variogram_data_path = "/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv" thrs_dist = 100000 emp_var_log_log = pd.read_csv(variogram_data_path) Explanation: restricted w/ all data spatial correlation parameters Log-Likelihood: -16607 AIC: 3.322e+04 restricted w/ restricted spatial correlation parameters Log-Likelihood: -16502. 
AIC: 3.301e+04 End of explanation gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist) gvg.envelope = emp_var_log_log gvg.empirical = emp_var_log_log.variogram gvg.lags = emp_var_log_log.lags #emp_var_log_log = emp_var_log_log.dropna() #vdata = gvg.envelope.dropna() Explanation: Instantiating the variogram object End of explanation matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4) whittle_model = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.0,alpha=3) exp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33) gaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33) spherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33) gvg.model = whittle_model #gvg.model = matern_model #models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model]) gvg.fitVariogramModel(whittle_model) import numpy as np xx = np.linspace(0,1000000,1000) gvg.plot(refresh=False,with_envelope=True) plt.plot(xx,whittle_model.f(xx),lw=2.0,c='k') plt.title("Empirical Variogram with fitted Whittle Model") def randomSelection(n,p): idxs = np.random.choice(n,p,replace=False) random_sample = new_data.iloc[idxs] return random_sample ################# n = len(new_data) p = 3000 # The amount of samples taken (let's do it without replacement) random_sample = randomSelection(n,100) Explanation: Instantiating theoretical variogram model End of explanation
9,231
Given the following text description, write Python code to implement the functionality described below step by step Description: T81-558 Step1: Training with a Validation Set and Early Stopping Overfitting occurs when a neural network is trained to the point that it begins to memorize rather than generalize. It is important to segment the original dataset into several datasets Step2: Calculate Classification Accuracy Accuracy is the number of rows where the neural network correctly predicted the target class. Accuracy is only used for classification, not regression. $ accuracy = \frac{\textit{#} \ correct}{N} $ Where $N$ is the size of the evaluted set (training or validation). Higher accuracy numbers are desired. Step3: Calculate Classification Log Loss Accuracy is like a final exam with no partial credit. However, neural networks can predict a probability of each of the target classes. Neural networks will give high probabilities to predictions that are more likely. Log loss is an error metric that penalizes confidence in wrong answers. Lower log loss values are desired. For any scikit-learn model there are two ways to get a prediction Step4: Log loss is calculated as follows Step5: Evaluating Regression Results Regression results are evaluated differently than classification. Consider the following code that trains a neural network for the MPG dataset. Step6: Mean Square Error The mean square error is the sum of the squared differences between the prediction ($\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired. $ \text{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $ Step7: Root Mean Square Error The root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired. $ \text{MSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $ Step8: Training with Cross Validation Cross validation uses a number of folds, and multiple models, to generate out of sample predictions on the entire dataset. It is important to note that there will be one model (neural network) for each fold. Each model contributes part of the final out-of-sample prediction. For new data, which is data not present in the training set, predictions from the fold models can be handled in several ways. Choose the model that had the highest validation score as the final model. Preset new data to the 5 models and average the result (this is an enesmble). Retrain a new model (using the same settings as the crossvalidation) on the entire dataset. Train for as many steps, and with the same hidden layer structure. The following code trains the MPG dataset using a 5-fold cross validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. Step9: Training with Cross Validation and a Holdout Set If you have a considerable amount of data, it is always valuable to set aside a holdout set before you crossvalidate. This hold out set will be the final evaluation before you make use of your model for its real-world use. The following program makes use of a hodlout set, and then still cross validates. Step10: How Kaggle Competitions are Scored Kaggle is a platform for competitive data science. 
Competitions are posted onto Kaggle by companies seeking the best model for their data. Competing in a Kaggle competition is quite a bit of work, I've competed in one Kaggle competition. Kaggle awards "tiers", such as Step11: Grid Search Finding the right set of hyperparameters can be a large task. Often computational power is thrown at this job. The scikit-learn grid search makes use of your computer's CPU cores to try every one of a defined number of hyperparameters to see which gets the best score. The following code shows how many CPU cores are available to Python Step12: The following code performs a grid search. Your system is queried for the number of cores available they are used to scan through the combinations of hyperparameters that you specify. Step13: The best combination of hyperparameters are displayed. Random Search It is also possable to conduct a random search. The random search is similar to the grid search, except that the entire search space is not used. Rather, random points in the search space are tried. For a random search you must specify the number of hyperparameter iterations (n_iter) to try.
Python Code: from sklearn import preprocessing import matplotlib.pyplot as plt import numpy as np import pandas as pd # Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue) def encode_text_dummy(df,name): dummies = pd.get_dummies(df[name]) for x in dummies.columns: dummy_name = "{}-{}".format(name,x) df[dummy_name] = dummies[x] df.drop(name, axis=1, inplace=True) # Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue). def encode_text_index(df,name): le = preprocessing.LabelEncoder() df[name] = le.fit_transform(df[name]) return le.classes_ # Encode a numeric column as zscores def encode_numeric_zscore(df,name,mean=None,sd=None): if mean is None: mean = df[name].mean() if sd is None: sd = df[name].std() df[name] = (df[name]-mean)/sd # Convert all missing values in the specified column to the median def missing_median(df, name): med = df[name].median() df[name] = df[name].fillna(med) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df,target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type print(target_type) # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. if target_type in (np.int64, np.int32): # Classification return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32) else: # Regression return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32) # Nicely formatted time string def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) # Regression chart, we will see more of this chart in the next class. def chart_regression(pred,y): t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y_test.flatten()}) t.sort_values(by=['y'],inplace=True) a = plt.plot(t['y'].tolist(),label='expected') b = plt.plot(t['pred'].tolist(),label='prediction') plt.ylabel('output') plt.legend() plt.show() Explanation: T81-558: Applications of Deep Neural Networks Class 3: Training a Neural Network * Instructor: Jeff Heaton, School of Engineering and Applied Science, Washington University in St. Louis * For more information visit the class website. Building the Feature Vector Neural networks require their input to be a fixed number of columns. This is very similar to spreadsheet data. This input must be completely numeric. It is important to represent the data in a way that the neural network can train from it. In class 6, we will see even more ways to preprocess data. For now, we will look at several of the most basic ways to transform data for a neural network. Before we look at specific ways to preprocess data, it is important to consider four basic types of data, as defined by Stanley Smith Stevens. These are commonly referred to as the levels of measure: Character Data (strings) Nominal - Individual discrete items, no order. For example: color, zip code, shape. Ordinal - Individual discrete items that can be ordered. For example: grade level, job title, Starbucks(tm) coffee size (tall, vente, grande) Numeric Data Interval - Numeric values, no defined start. For example, temperature. You would never say "yesterday was twice as hot as today". Ratio - Numeric values, clearly defined start. For example, speed. 
You would say that "The first car is going twice as fast as the second." The following code contains several useful functions to encode the feature vector for various types of data. Encoding data: encode_text_dummy - Encode text fields, such as the iris species as a single field for each class. Three classes would become "0,0,1" "0,1,0" and "1,0,0". Encode non-target predictors this way. Good for nominal. encode_text_index - Encode text fields, such as the iris species as a single numeric field as "0" "1" and "2". Encode the target field for a classification this way. Good for nominal. encode_numeric_zscore - Encode numeric values as a z-score. Neural networks deal well with "centered" fields, zscore is usually a good starting point for interval/ratio. Ordinal values can be encoded as dummy or index. Later we will see a more advanced means of encoding Dealing with missing data: missing_median - Fill all missing values with the median value. Creating the final feature vector: to_xy - Once all fields are numeric, this function can provide the x and y matrixes that are used to fit the neural network. Other utility functions: hms_string - Print out an elapsed time string. chart_regression - Display a chart to show how well a regression performs. End of explanation import os import pandas as pd from sklearn.cross_validation import train_test_split import tensorflow.contrib.learn as skflow import numpy as np path = "./data/" filename = os.path.join(path,"iris.csv") df = pd.read_csv(filename,na_values=['NA','?']) # Encode feature vector encode_numeric_zscore(df,'petal_w') encode_numeric_zscore(df,'petal_l') encode_numeric_zscore(df,'sepal_w') encode_numeric_zscore(df,'sepal_l') species = encode_text_index(df,"species") num_classes = len(species) # Create x & y for training # Create the x-side (feature vectors) of the training x, y = to_xy(df,'species') # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=45) # as much as I would like to use 42, it gives a perfect result, and a boring confusion matrix! # Create a deep neural network with 3 hidden layers of 10, 20, 10 classifier = skflow.TensorFlowDNNClassifier(hidden_units=[20, 10, 5], n_classes=num_classes, steps=10000) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50, n_classes=num_classes) # Fit/train neural network classifier.fit(x_train, y_train, monitor=early_stop) Explanation: Training with a Validation Set and Early Stopping Overfitting occurs when a neural network is trained to the point that it begins to memorize rather than generalize. It is important to segment the original dataset into several datasets: Training Set Validation Set Holdout Set There are several different ways that these sets can be constructed. The following programs demonstrate some of these. The first method is a training and validation set. The training data are used to train the neural network until the validation set no longer improves. This attempts to stop at a near optimal training point. This method will only give accurate "out of sample" predictions for the validation set, this is usually 20% or so of the data. The predictions for the training data will be overly optimistic, as these were the data that the neural network was trained on. 
End of explanation from sklearn import metrics # Evaluate success using accuracy pred = classifier.predict(x_test) score = metrics.accuracy_score(y_test, pred) print("Accuracy score: {}".format(score)) Explanation: Calculate Classification Accuracy Accuracy is the number of rows where the neural network correctly predicted the target class. Accuracy is only used for classification, not regression. $ accuracy = \frac{\textit{#} \ correct}{N} $ Where $N$ is the size of the evaluted set (training or validation). Higher accuracy numbers are desired. End of explanation pred = classifier.predict_proba(x_test) np.set_printoptions(precision=4) print("Numpy array of predictions") print(pred[0:5]) print("As percent probability") (pred[0:5]*100).astype(int) score = metrics.log_loss(y_test, pred) print("Log loss score: {}".format(score)) Explanation: Calculate Classification Log Loss Accuracy is like a final exam with no partial credit. However, neural networks can predict a probability of each of the target classes. Neural networks will give high probabilities to predictions that are more likely. Log loss is an error metric that penalizes confidence in wrong answers. Lower log loss values are desired. For any scikit-learn model there are two ways to get a prediction: predict - In the case of classification output the numeric id of the predicted class. For regression, this is simply the prediction. predict_proba - In the case of classification output the probability of each of the classes. Not used for regression. The following code shows the output of predict_proba: End of explanation %matplotlib inline from matplotlib.pyplot import figure, show from numpy import arange, sin, pi t = arange(0.0, 5.0, 0.00001) #t = arange(1.0, 5.0, 0.00001) # computer scientists #t = arange(0.0, 1.0, 0.00001) # data scientists fig = figure(1,figsize=(12, 10)) ax1 = fig.add_subplot(211) ax1.plot(t, np.log(t)) ax1.grid(True) ax1.set_ylim((-8, 1.5)) ax1.set_xlim((-0.1, 2)) ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_title('log(x)') show() Explanation: Log loss is calculated as follows: $ \text{log loss} = -\frac{1}{N}\sum_{i=1}^N {( {y}_i\log(\hat{y}_i) + (1 - {y}_i)\log(1 - \hat{y}_i))} $ The log function is useful to penalizing wrong answers. 
The following code demonstrates the utility of the log function: End of explanation import tensorflow.contrib.learn as skflow from sklearn.cross_validation import train_test_split import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) # create feature vector missing_median(df, 'horsepower') df.drop('name',1,inplace=True) encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') encode_text_dummy(df, 'origin') # Encode to a 2D matrix for training x,y = to_xy(df,['mpg']) # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.20, random_state=42) # Create a deep neural network with 3 hidden layers of 50, 25, 10 regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50) # Fit/train neural network regressor.fit(x_train, y_train, monitor=early_stop) Explanation: Evaluating Regression Results Regression results are evaluated differently than classification. Consider the following code that trains a neural network for the MPG dataset. End of explanation pred = regressor.predict(x_test) # Measure MSE error. score = metrics.mean_squared_error(pred,y_test) print("Final score (MSE): {}".format(score)) Explanation: Mean Square Error The mean square error is the sum of the squared differences between the prediction ($\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired. $ \text{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $ End of explanation # Measure RMSE error. RMSE is common for regression. score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Final score (RMSE): {}".format(score)) Explanation: Root Mean Square Error The root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired. 
$ \text{MSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $ End of explanation import tensorflow.contrib.learn as skflow import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore from sklearn.cross_validation import KFold path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") filename_write = os.path.join(path,"auto-mpg-out-of-sample.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) # create feature vector missing_median(df, 'horsepower') df.drop('name',1,inplace=True) encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') encode_text_dummy(df, 'origin') # Shuffle np.random.seed(42) df = df.reindex(np.random.permutation(df.index)) df.reset_index(inplace=True, drop=True) # Encode to a 2D matrix for training x,y = to_xy(df,['mpg']) # Cross validate kf = KFold(len(x), n_folds=5) oos_y = [] oos_pred = [] fold = 1 for train, test in kf: print("Fold #{}".format(fold)) fold+=1 x_train = x[train] y_train = y[train] x_test = x[test] y_test = y[test] # Create a deep neural network with 3 hidden layers of 10, 20, 10 regressor = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=500) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50) # Fit/train neural network regressor.fit(x_train, y_train, monitor=early_stop) # Add the predictions to the oos prediction list pred = regressor.predict(x_test) oos_y.append(y_test) oos_pred.append(pred) # Measure accuracy score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Fold score (RMSE): {}".format(score)) # Build the oos prediction list and calculate the error. oos_y = np.concatenate(oos_y) oos_pred = np.concatenate(oos_pred) score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y)) print("Final, out of sample score (RMSE): {}".format(score)) # Write the cross-validated prediction oos_y = pd.DataFrame(oos_y) oos_pred = pd.DataFrame(oos_pred) oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 ) oosDF.to_csv(filename_write,index=False) Explanation: Training with Cross Validation Cross validation uses a number of folds, and multiple models, to generate out of sample predictions on the entire dataset. It is important to note that there will be one model (neural network) for each fold. Each model contributes part of the final out-of-sample prediction. For new data, which is data not present in the training set, predictions from the fold models can be handled in several ways. Choose the model that had the highest validation score as the final model. Preset new data to the 5 models and average the result (this is an enesmble). Retrain a new model (using the same settings as the crossvalidation) on the entire dataset. Train for as many steps, and with the same hidden layer structure. The following code trains the MPG dataset using a 5-fold cross validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. 
End of explanation import tensorflow.contrib.learn as skflow from sklearn.cross_validation import train_test_split import pandas as pd import os import numpy as np from sklearn import metrics from scipy.stats import zscore from sklearn.cross_validation import KFold path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") filename_write = os.path.join(path,"auto-mpg-holdout.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) # create feature vector missing_median(df, 'horsepower') df.drop('name',1,inplace=True) encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') encode_text_dummy(df, 'origin') # Shuffle np.random.seed(42) df = df.reindex(np.random.permutation(df.index)) df.reset_index(inplace=True, drop=True) # Encode to a 2D matrix for training x,y = to_xy(df,['mpg']) # Keep a 10% holdout x_main, x_holdout, y_main, y_holdout = train_test_split( x, y, test_size=0.10) # Cross validate kf = KFold(len(x_main), n_folds=5) oos_y = [] oos_pred = [] fold = 1 for train, test in kf: print("Fold #{}".format(fold)) fold+=1 x_train = x_main[train] y_train = y_main[train] x_test = x_main[test] y_test = y_main[test] # Create a deep neural network with 3 hidden layers of 10, 20, 10 regressor = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=500) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50) # Fit/train neural network regressor.fit(x_train, y_train, monitor=early_stop) # Add the predictions to the OOS prediction list pred = regressor.predict(x_test) oos_y.append(y_test) oos_pred.append(pred) # Measure accuracy score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Fold score (RMSE): {}".format(score)) # Build the oos prediction list and calculate the error. oos_y = np.concatenate(oos_y) oos_pred = np.concatenate(oos_pred) score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y)) print() print("Cross-validated score (RMSE): {}".format(score)) # Write the cross-validated prediction holdout_pred = regressor.predict(x_holdout) score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout)) print("Holdout score (RMSE): {}".format(score)) Explanation: Training with Cross Validation and a Holdout Set If you have a considerable amount of data, it is always valuable to set aside a holdout set before you crossvalidate. This hold out set will be the final evaluation before you make use of your model for its real-world use. The following program makes use of a hodlout set, and then still cross validates. 
End of explanation %matplotlib inline from matplotlib.pyplot import figure, show from numpy import arange import tensorflow.contrib.learn as skflow import pandas as pd import os import numpy as np import tensorflow as tf from sklearn import metrics from scipy.stats import zscore import matplotlib.pyplot as plt path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) # create feature vector missing_median(df, 'horsepower') df.drop('name',1,inplace=True) encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') encode_text_dummy(df, 'origin') # Encode to a 2D matrix for training x,y = to_xy(df,['mpg']) # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # Create a deep neural network with 3 hidden layers of 50, 25, 10 regressor = skflow.TensorFlowDNNRegressor( hidden_units=[50, 25, 10], batch_size = 32, optimizer='SGD', learning_rate=0.01, steps=5000) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50) # Fit/train neural network regressor.fit(x_train, y_train, monitor=early_stop) # Measure RMSE error. RMSE is common for regression. pred = regressor.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Final score (RMSE): {}".format(score)) # Plot the chart chart_regression(pred,y_test) Explanation: How Kaggle Competitions are Scored Kaggle is a platform for competitive data science. Competitions are posted onto Kaggle by companies seeking the best model for their data. Competing in a Kaggle competition is quite a bit of work, I've competed in one Kaggle competition. Kaggle awards "tiers", such as: Kaggle Grandmaster Kaggle Master Kaggle Expert Your tier is based on your performance in past competitions. To compete in Kaggle you simply provide predictions for a dataset that they post. You do not need to submit any code. Your prediction output will place you onto the leaderboard of a competition. An original dataset is sent to Kaggle by the company. From this dataset, Kaggle posts public data that includes "train" and "test. For the "train" data, the outcomes (y) are provided. For the test data, no outcomes are provided. Your submission file contains your predictions for the "test data". When you submit your results, Kaggle will calculate a score on part of your prediction data. They do not publish want part of the submission data are used for the public and private leaderboard scores (this is a secret to prevent overfitting). While the competition is still running, Kaggle publishes the public leaderboard ranks. Once the competition ends, the private leaderboard is revealed to designate the true winners. Due to overfitting, there is sometimes an upset in positions when the final private leaderboard is revealed. Managing Hyperparameters There are many different settings that you can use for a neural network. These can affect performance. The following code changes some of these, beyond their default values: End of explanation import multiprocessing print("Your system has {} cores.".format(multiprocessing.cpu_count())) Explanation: Grid Search Finding the right set of hyperparameters can be a large task. Often computational power is thrown at this job. 
The scikit-learn grid search makes use of your computer's CPU cores to try every one of a defined number of hyperparameters to see which gets the best score. The following code shows how many CPU cores are available to Python: End of explanation %matplotlib inline from matplotlib.pyplot import figure, show from numpy import arange import tensorflow.contrib.learn as skflow import pandas as pd import os import numpy as np import tensorflow as tf from sklearn import metrics from scipy.stats import zscore from sklearn.grid_search import GridSearchCV import multiprocessing import time from sklearn.cross_validation import train_test_split import matplotlib.pyplot as plt def main(): path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) start_time = time.time() # create feature vector missing_median(df, 'horsepower') df.drop('name',1,inplace=True) encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') encode_text_dummy(df, 'origin') # Encode to a 2D matrix for training x,y = to_xy(df,['mpg']) # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # The hyperparameters specified here will be searched. Every combination. param_grid = { 'learning_rate': [0.1, 0.01, 0.001], 'batch_size': [8, 16, 32] } # Create a deep neural network. The hyperparameters specified here remain fixed. model = skflow.TensorFlowDNNRegressor( hidden_units=[50, 25, 10], batch_size = 32, optimizer='SGD', steps=5000) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50) # Startup grid search threads = 1 #multiprocessing.cpu_count() print("Using {} cores.".format(threads)) regressor = GridSearchCV(model, verbose=True, n_jobs=threads, param_grid=param_grid,fit_params={'monitor':early_stop}) # Fit/train neural network regressor.fit(x_train, y_train) # Measure RMSE error. RMSE is common for regression. pred = regressor.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Final score (RMSE): {}".format(score)) print("Final options: {}".format(regressor.best_params_)) # Plot the chart chart_regression(pred,y_test) elapsed_time = time.time() - start_time print("Elapsed time: {}".format(hms_string(elapsed_time))) # Allow windows to multi-thread (unneeded on advanced OS's) # See: https://docs.python.org/2/library/multiprocessing.html if __name__ == '__main__': main() Explanation: The following code performs a grid search. Your system is queried for the number of cores available they are used to scan through the combinations of hyperparameters that you specify. 
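For orientation, the size of the search above is easy to work out from the grid itself; the fold count below is an assumption about GridSearchCV's default cross validation in the scikit-learn version used here:

# Rough bookkeeping for the grid search: every combination is fitted once
# per cross-validation fold.
import itertools

param_grid = {'learning_rate': [0.1, 0.01, 0.001], 'batch_size': [8, 16, 32]}
combinations = list(itertools.product(*param_grid.values()))
print("Combinations: {}, fits with 3-fold CV: {}".format(
    len(combinations), len(combinations) * 3))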
End of explanation %matplotlib inline from matplotlib.pyplot import figure, show from numpy import arange import tensorflow.contrib.learn as skflow import pandas as pd import os import numpy as np import tensorflow as tf from sklearn import metrics from scipy.stats import zscore from scipy.stats import randint as sp_randint from sklearn.grid_search import RandomizedSearchCV import multiprocessing import time from sklearn.cross_validation import train_test_split import matplotlib.pyplot as plt def main(): path = "./data/" filename_read = os.path.join(path,"auto-mpg.csv") df = pd.read_csv(filename_read,na_values=['NA','?']) start_time = time.time() # create feature vector missing_median(df, 'horsepower') df.drop('name',1,inplace=True) encode_numeric_zscore(df, 'horsepower') encode_numeric_zscore(df, 'weight') encode_numeric_zscore(df, 'cylinders') encode_numeric_zscore(df, 'displacement') encode_numeric_zscore(df, 'acceleration') encode_text_dummy(df, 'origin') # Encode to a 2D matrix for training x,y = to_xy(df,['mpg']) # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) # The hyperparameters specified here will be searched. A random sample will be searched. param_dist = { 'learning_rate': [0.1, 0.01, 0.001], 'batch_size': sp_randint(4, 32), } model = skflow.TensorFlowDNNRegressor( hidden_units=[50, 25, 10], batch_size = 32, optimizer='SGD', steps=5000) # Early stopping early_stop = skflow.monitors.ValidationMonitor(x_test, y_test, early_stopping_rounds=200, print_steps=50) # Random search threads = 1 #multiprocessing.cpu_count() print("Using {} cores.".format(threads)) regressor = RandomizedSearchCV(model, verbose=True, n_iter = 10, n_jobs=threads, param_distributions=param_dist, fit_params={'monitor':early_stop}) # Fit/train neural network regressor.fit(x_train, y_train) # Measure RMSE error. RMSE is common for regression. pred = regressor.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Final score (RMSE): {}".format(score)) print("Final options: {}".format(regressor.best_params_)) # Plot the chart chart_regression(pred,y_test) elapsed_time = time.time() - start_time print("Elapsed time: {}".format(hms_string(elapsed_time))) # Allow windows to multi-thread (unneeded on advanced OS's) # See: https://docs.python.org/2/library/multiprocessing.html if __name__ == '__main__': main() Explanation: The best combination of hyperparameters are displayed. Random Search It is also possable to conduct a random search. The random search is similar to the grid search, except that the entire search space is not used. Rather, random points in the search space are tried. For a random search you must specify the number of hyperparameter iterations (n_iter) to try. End of explanation
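As a small illustrative sketch of what the random search above draws from — the distribution object comes straight from the param_dist definition, and only n_iter of these draws are actually fitted:

# learning_rate is picked from the listed values; batch_size is sampled
# from sp_randint(4, 32), i.e. integers in [4, 32).
from scipy.stats import randint as sp_randint

rv = sp_randint(4, 32)
print(rv.rvs(size=10, random_state=42))  # ten example batch-size draws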
9,232
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Computing for Mathematics - 2020/2021 individual coursework Important Do not delete the cells containing Step3: b. $1/2$ Available marks Step5: c. $3/4$ Available marks Step7: d. $1$ Available marks Step8: Question 2 (Hint Step9: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation. Available marks Step10: Question 3 (Hint Step11: b. Create a variable equation that has value the equation $f'(0)=0$. Available marks Step12: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$. Available marks Step14: Question 4 (Hint Step15: b. Given that $c=2$ output $\frac{df}{dx}$ where Step16: c. Given that $c=2$ output $\int f(x)dx$ Available marks
Python Code: import random def sample_experiment(): ### BEGIN SOLUTION Returns true if a random number is less than 0 return random.random() < 0 number_of_experiments = 1000 sum( sample_experiment() for repetition in range(number_of_experiments) ) / number_of_experiments ### END SOLUTION Explanation: Computing for Mathematics - 2020/2021 individual coursework Important Do not delete the cells containing: ``` BEGIN SOLUTION END SOLUTION ``` write your solution attempts in those cells. To submit this notebook: Change the name of the notebook from main to: &lt;student_number&gt;. For example, if your student number is c1234567 then change the name of the notebook to c1234567. Write all your solution attempts in the correct locations; Do not delete any code that is already in the cells; Save the notebook (File&gt;Save As); Follow the instructions given in class/email to submit. Question 1 (Hint: This question is similar to the first exercise of the Probability chapter of Python for mathematics.) For each of the following, write a function sample_experiment, and repeatedly use it to simulate the probability of an event occurring with the following chances. For each chance output the simulated probability. a. $0$ Available marks: 2 End of explanation def sample_experiment(): ### BEGIN SOLUTION Returns true if a random number is less than 1 / 2 return random.random() < 1 / 2 number_of_experiments = 1000 sum( sample_experiment() for repetition in range(number_of_experiments) ) / number_of_experiments ### END SOLUTION Explanation: b. $1/2$ Available marks: 2 End of explanation def sample_experiment(): ### BEGIN SOLUTION Returns true if a random number is less than 3 / 4 return random.random() < 3 / 4 number_of_experiments = 1000 sum( sample_experiment() for repetition in range(number_of_experiments) ) / number_of_experiments ### END SOLUTION Explanation: c. $3/4$ Available marks: 2 End of explanation def sample_experiment(): ### BEGIN SOLUTION Returns true if a random number is less than 1 return random.random() < 1 number_of_experiments = 1000 sum( sample_experiment() for repetition in range(number_of_experiments) ) / number_of_experiments ### END SOLUTION Explanation: d. $1$ Available marks: 2 End of explanation import itertools pets = ("cat", "dog", "fish", "lizard", "hamster") ### BEGIN SOLUTION permutations = tuple(itertools.permutations(pets, 4)) number_of_permutations = len(permutations) ### END SOLUTION Explanation: Question 2 (Hint: This question is similar to the second exercise of the Combinatorics chapter of Python for mathematics.) a. Create a variable number_of_permutations that gives the number of permutations of pets = ("cat", "dog", "fish", "lizard", "hamster) of size 4. Do this by generating and counting them. Available marks: 2 End of explanation import scipy.special ### BEGIN SOLUTION direct_number_of_permutations = scipy.special.perm(5, 4) ### END SOLUTION Explanation: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation. Available marks: 1 End of explanation import sympy as sym x = sym.Symbol("x") c1 = sym.Symbol("c1") ### BEGIN SOLUTION second_derivative = 4 * x + sym.cos(x) derivative = sym.integrate(second_derivative, x) + c1 ### END SOLUTION Explanation: Question 3 (Hint: This question uses concepts from the Algebra and Calculus chapters of Python for mathematics.) Consider the second derivative $f''(x)=4 x + \cos(x)$. a. 
Create a variable derivative which has value $f'(x)$ (use the variables x and c1 if necessary): Available marks: 3 End of explanation ### BEGIN SOLUTION equation = sym.Eq(derivative.subs({x:0}), 0) ### END SOLUTION Explanation: b. Create a variable equation that has value the equation $f'(0)=0$. Available marks: 4 End of explanation ### BEGIN SOLUTION particular_derivative = derivative.subs({c1: 0}) function = sym.integrate(particular_derivative) + c1 sym.integrate(function, (x, 0, 5 * sym.pi)) ### END SOLUTION Explanation: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$. Available marks: 4 End of explanation c = sym.Symbol("c") ### BEGIN SOLUTION def get_sequence_a(n): Return the sequence a. if n == 1: return c return 3 * get_sequence_a(n - 1) + c / n sum(get_sequence_a(n) for n in range(1, 16)) ### END SOLUTION Explanation: Question 4 (Hint: This question uses concepts from the Calculus and Sequences chapters of Python for mathematics.) Consider this recursive definition for the sequence $a_n$: $$ a_n = \begin{cases} c & \text{ if n = 1}\ 3a_{n - 1} + \frac{c}{n} \end{cases} $$ a. Output the sum of the 15 terms. Available marks: 5 End of explanation ### BEGIN SOLUTION f = (get_sequence_a(n=1) + get_sequence_a(n=2) * x + get_sequence_a(n=3) * x ** 2 + + get_sequence_a(n=4) * x ** 3).subs({c: 2}) sym.diff(f, x) ### END SOLUTION Explanation: b. Given that $c=2$ output $\frac{df}{dx}$ where: $$ f(x) = a_1 + a_2 x + a_3 x ^ 2 + a_4 x ^ 3 $$ Available marks: 4 End of explanation ### BEGIN SOLUTION sym.integrate(f, x) ### END SOLUTION Explanation: c. Given that $c=2$ output $\int f(x)dx$ Available marks: 4 End of explanation
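As an optional cross-check for Question 4 (not required by the coursework), the same sequence can be built iteratively instead of recursively; the printed sum should agree with the recursive solution above:

import sympy as sym

c = sym.Symbol("c")

def sequence_terms(number_of_terms):
    terms = [c]                              # a_1 = c
    for n in range(2, number_of_terms + 1):
        terms.append(3 * terms[-1] + c / n)  # a_n = 3*a_{n-1} + c/n
    return terms

print(sym.simplify(sum(sequence_terms(15))))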
9,233
Given the following text description, write Python code to implement the functionality described below step by step Description: DeepLearning MNIST Dataset using DeepWater and Custom MXNet Model The MNIST database is a well-known academic dataset used to benchmark classification performance. The data consists of 60,000 training images and 10,000 test images. Each image is a standardized $28^2$ pixel greyscale image of a single handwritten digit. An example of the scanned handwritten digits is shown Step1: Specify the response and predictor columns Step2: Convert the number to a class Step3: Train Deep Learning model and validate on test set LeNET 1989 In this demo you will learn how to build a simple LeNET Model usix MXNET. Step4: Here we instantiate our lenet model using 10 classes Step5: To import the model inside the DeepWater training engine we need to save the model to a file Step6: The model is just the structure of the network expressed as a json dict Step7: Importing the LeNET model architecture for training in H2O We have defined the model and saved the structure to a file. We are ready to start the training procedure. Step8: A More powerful Architecture the beauty of deeplearning is that we can compose a new model with even more "capacity" to try to get a higher accuracy. Step9: Visualizing the results
Python Code: import h2o h2o.init() import os.path PATH = os.path.expanduser("~/h2o-3/") test_df = h2o.import_file(PATH + "bigdata/laptop/mnist/test.csv.gz") train_df = h2o.import_file(PATH + "/bigdata/laptop/mnist/train.csv.gz") Explanation: DeepLearning MNIST Dataset using DeepWater and Custom MXNet Model The MNIST database is a well-known academic dataset used to benchmark classification performance. The data consists of 60,000 training images and 10,000 test images. Each image is a standardized $28^2$ pixel greyscale image of a single handwritten digit. An example of the scanned handwritten digits is shown End of explanation y = "C785" x = train_df.names[0:784] Explanation: Specify the response and predictor columns End of explanation train_df[y] = train_df[y].asfactor() test_df[y] = test_df[y].asfactor() Explanation: Convert the number to a class End of explanation def lenet(num_classes): import mxnet as mx data = mx.symbol.Variable('data') # first conv conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20) tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh") pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2)) # second conv conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50) tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh") pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2)) # first fullc flatten = mx.symbol.Flatten(data=pool2) fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500) tanh3 = mx.symbol.Activation(data=fc1, act_type="tanh") # second fullc fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=num_classes) # loss lenet = mx.symbol.SoftmaxOutput(data=fc2, name='softmax') return lenet nclasses = 10 Explanation: Train Deep Learning model and validate on test set LeNET 1989 In this demo you will learn how to build a simple LeNET Model usix MXNET. End of explanation mxnet_model = lenet(nclasses) Explanation: Here we instantiate our lenet model using 10 classes End of explanation model_filename="/tmp/symbol_lenet-py.json" mxnet_model.save(model_filename) # pip install graphviz # sudo apt-get install graphviz import mxnet as mx import graphviz mx.viz.plot_network(mxnet_model, shape={"data":(1, 1, 28, 28)}, node_attrs={"shape":'rect',"fixedsize":'false'}) Explanation: To import the model inside the DeepWater training engine we need to save the model to a file: End of explanation !head -n 20 $model_filename Explanation: The model is just the structure of the network expressed as a json dict End of explanation from h2o.estimators.deepwater import H2ODeepWaterEstimator lenet_model = H2ODeepWaterEstimator( epochs=10, learning_rate=1e-3, mini_batch_size=64, network_definition_file=model_filename, # network='lenet', ## equivalent pre-configured model image_shape=[28,28], problem_type='dataset', ## Not 'image' since we're not passing paths to image files, but raw numbers ignore_const_cols=False, ## We need to keep all 28x28=784 pixel values, even if some are always 0 channels=1 ) lenet_model.train(x=train_df.names, y=y, training_frame=train_df, validation_frame=test_df) error = lenet_model.model_performance(valid=True).mean_per_class_error() print "model error:", error Explanation: Importing the LeNET model architecture for training in H2O We have defined the model and saved the structure to a file. We are ready to start the training procedure. 
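Before handing the file to H2O, a quick optional check — a sketch that assumes the standard MXNet symbol API of this period — is to re-load the saved JSON and confirm it parses as a symbol graph:

import mxnet as mx

reloaded = mx.symbol.load(model_filename)   # model_filename was saved above
print "first arguments:", reloaded.list_arguments()[:5]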
End of explanation def cnn(num_classes): import mxnet as mx data = mx.symbol.Variable('data') inputdropout = mx.symbol.Dropout(data=data, p=0.1) # first convolution conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=50) tanh1 = mx.symbol.Activation(data=conv1, act_type="relu") pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max", pad=(1,1), kernel=(3,3), stride=(2,2)) # second convolution conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=100) tanh2 = mx.symbol.Activation(data=conv2, act_type="relu") pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max", pad=(1,1), kernel=(3,3), stride=(2,2)) # first fully connected layer flatten = mx.symbol.Flatten(data=pool2) fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=1024) relu3 = mx.symbol.Activation(data=fc1, act_type="relu") inputdropout = mx.symbol.Dropout(data=fc1, p=0.5) # second fully connected layer flatten = mx.symbol.Flatten(data=relu3) fc2 = mx.symbol.FullyConnected(data=flatten, num_hidden=1024) relu4 = mx.symbol.Activation(data=fc2, act_type="relu") inputdropout = mx.symbol.Dropout(data=fc2, p=0.5) # third fully connected layer fc3 = mx.symbol.FullyConnected(data=relu4, num_hidden=num_classes) # loss cnn = mx.symbol.SoftmaxOutput(data=fc3, name='softmax') return cnn nclasses = 10 mxnet_model = cnn(nclasses) model_filename="/tmp/symbol_cnn-py.json" mxnet_model.save(model_filename) from h2o.estimators.deepwater import H2ODeepWaterEstimator print("Importing the lenet model architecture for training in H2O") model = H2ODeepWaterEstimator( epochs=20, learning_rate=1e-3, mini_batch_size=64, network_definition_file=model_filename, image_shape=[28,28], channels=1, ignore_const_cols=False ## We need to keep all 28x28=784 pixel values, even if some are always 0 ) model.train(x=train_df.names, y=y, training_frame=train_df, validation_frame=test_df) error = model.model_performance(valid=True).mean_per_class_error() print "model error:", error Explanation: A More powerful Architecture the beauty of deeplearning is that we can compose a new model with even more "capacity" to try to get a higher accuracy. End of explanation %matplotlib inline import matplotlib import numpy as np import scipy.io import matplotlib.pyplot as plt from IPython.display import Image, display import warnings warnings.filterwarnings("ignore") df = test_df.as_data_frame() import numpy as np image = df.T[int(np.random.random()*784)] image.shape plt.imshow(image[:-1].reshape(28, 28), plt.cm.gray); print image[-1] image_hf = h2o.H2OFrame.from_python(image.to_dict()) prediction = model.predict(image_hf) prediction['predict'] Explanation: Visualizing the results End of explanation
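As a hedged follow-up sketch, the validation metrics object can report more than the single mean per-class error number; the method names below assume the H2O Python API of the DeepWater era:

# Fuller look at validation performance for the deeper CNN model.
perf = model.model_performance(valid=True)
print perf.confusion_matrix()
print perf.hit_ratio_table()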
9,234
Given the following text description, write Python code to implement the functionality described below step by step Description: Generarea si vizualizarea curbelor 1. Curbe plane O curba plana diferentiabila, data parametric, este imaginea, $im(r)$, a unei aplicatii diferentiabile $r Step1: Pentru a intelege definitia acestei functii, o apelam mai intai pentru un $N$ mic si afisam tipul coordonatelor tuple-ului returnat Step2: Deci functia Curba returneaza un tuple 2D, $(x,y)$, in care fiecare coordonata este un array 1D (un vector). Mai precis, daca notam cu $(X(t), Y(t))$ coordonatele parametrizarii curbei, $$X(t)=\cos(t)+t\sin(t), \quad Y(t)=\sin(t)-t\cos(t)$$ atunci vectorii $x$ si $y$, coordonate ale tuple-ului returnat sunt Step3: Astroida Step4: Curba Lissajous are parametrizarea Step5: Pentru a genera vectorii viteza/acceleratie in cateva momente, $t_i$, asociati traiectoriei parametrizate de $r(t)=(x(t), y(t))$, se evalueaza vectorul $\dot{\vec{r}}(t)$, respectiv $\ddot{\vec{r}}(t)$ in $t_i$ si se apeleaza functia plt.quiver(X, Y, U, V) unde $X$, respectiv $Y$, este array-ul 1D de coordonate $x(t_i)$, respectiv $y(t_i)$ , iar $U$ si $V$ au respectiv coordonatele $x'(t_i)$ si $y'(t_i)$, in cazul vitezei, si $x''(t_i)$, $y''(t_i)$, in cazul acceleratiei. De exemplu sa trasam curba parametrizata de Step6: Observam atat din valorile normelor, cat si din imaginile vectorilor viteza/acceleratie ca in puncte diferite au valori diferite. Mai mult in punctele curbei cu curbura mai mare viteza are norma mai mica si acceleratia mai mare (vezi Cursul 13, curbura curbelor). Curbe in coordonate polare In afara de sistemul ortogonal de axe, $xOy$, $\mathbb{R}^2$ se poate raporta si la un sistem polar de coordonate. Un reper polar consta dintr-o pereche $(O;v)$, unde $O$ este un punct fixat, numit pol si $v\in\mathbb{R}^2$, un vector nenul ce defineste axa polara, $Ox$, care este axa de origine $O$, directie si sens $v$. Pozitia unui punct $M$ relativ la acest reper se indica prin coordonatele $(r, \theta)$, numite coordonate polare. $r$ este distanta de la $M$ la $O$, $r=||\vec{OM}||$, iar $\theta$ este masura in radiani a unghiului dintre $v$ (deci $Ox$) si $\vec{OM}$. Step7: Ducand o perpendiculara in $O$ pe axa polara construim sistemul ortogonal drept $xOy$, asociat celui polar. Step8: Astfel punctul de coordonate polare $(r, \theta)$ are coordonatele carteziene $(x,y)$, unde $$x=r\cos(\theta), \quad y=r\sin(\theta)$$ O curba in coordonate polare are ecuatia $r=f(\theta)$, $\theta\in[\alpha, \beta]$. Cu alte cuvinte, curba este constituita din multimea punctelor din plan ce au coordonatele polare $(r=f(\theta), \theta)$. Exemplu de curba in coordonate polare este cardioida, de ecuatie $r=a(1+\cos(\theta))$, $a>0$, $\theta\in[0, 2\pi]$. Pentru a genera si vizualiza o curba in coordonate polare procedam astfel Step9: Remarcam ca fara nici o comanda speciala, o data cu apelul functiei plt.polar sunt generate cercuri concentrice cu originea in pol si in lungul cercului exterior sunt marcate valorile in grade (nu radiani) ale unghiului polar $\theta$. Sunt afisate si razele cercurilor concentrice. Cercul cu centrul in origine si de raza $a$, $x^2+y^2=a^2$, are in coordonate polare ecuatia Step10: Trifoiul cu 4 foi are ecuatia $r=\sin(4\theta)$, $\theta\in[0,2\pi]$. Step11: Curba $r=\sin(n\theta)$ este un trifoi cu $n$ foi. La fel si $r=\cos(n\theta)$, $\theta\in[0,2\pi]$. Experimentati! 
Curbe B&eacute;zier Curbele date parametric sau in coordonate polare se pot genera doar daca se da efectiv expresia analitica a parametrizarii, respectiv a functiei in coordonate polare. Curbele B&eacute;zier sunt curbe definite procedural, adica pornind de la un numar finit de puncte si un parametru $t\in[0,1]$ se genereaza algoritmic un punct pe o curba. Curbele B&eacute;zier s-au nascut in laboratoarele de la Renault si Citro&euml;n in incercarea de a genera capote pentru automobile, mai deosebite. Ideea de baza consta in designul unui algoritm care sa genereze profilul unei capote ce imita un poligon de control. Step12: Poligonul de control este succesiunea de segmente ce unesc cate doua puncte consecutive, numite puncte de control. B&eacute;zier a definit curbele care-i poarta numele, definind curbe polinomiale, adica curbe de forma Step13: In figura de mai sus multimea din stanga este convexa, iar cea din dreapta neconvexa, pt ca desi punctele marcate cu albastru apartin multimii, segmentul ce le uneste nu este in intregime inclus in multime. O combinatie convexa a punctelor $A_0, A_1, \ldots, A_n$ din $\mathbb{R}^2$ sau $\mathbb{R}^3$ este de forma Step14: Infasuratoarea convexa a $n$ puncte din plan este cel mai mic poligon convex ce contine toate punctele. In figura de mai sus infasuratoarea convexa apunctelor $C_0, C_1, C_2, C_3, C_4$ este poligonul plin, de varfuri $C_0, C_1, C_4, C_2$. Daca $A, B$ sunt doua puncte in plan sau spatiu, o combinatie convexa a lor este un punct $M=\alpha_1 A+\alpha_2 B$, $\alpha_i\in[0,1]$, $\alpha_1+\alpha_2=1$. Notand $\alpha_2=t$, rezulta ca $\alpha_1=1-t$ si deci punctul $M$ se exprima ca o combinatie convexa a lui $A$ si $B$ astfel Step15: In etapa 1 se determina pe fiecare segment determinat de doua puncte de control consecutive, punctul ce imparte segmentul respectiv in acelasi raport $\displaystyle\frac{t}{1-t}$ in care $t$ imparte pe $[0,1]$ Step16: Generalizand la cazul unei curbe definite de un numar arbitrar de puncte de control, $ {\bf b}{0}, {\bf b}{1}, \ldots, {\bf b}_{n}$, $n\geq 1$, dupa $n$ etape a schemei de Casteljau aplicata dupa acelasi principiu, folosind un parametru $t\in[0,1]$, se obtine un punct pe curba B&eacute;zier corespunzator acestui parametru. Punctele de control calculate in etapele intermediare se pot afisa, teoretic, intr-o matrice triunghiulara de puncte. Si anume punctele de control sunt puncte initiale, date, deci corespund etapei 0 a procedurii recursive si le adaugam indicele 0 in pozitia de sus Step17: Proprietati ale curbelor Bezier Parametrizarea in baza Bernstein a unei curbe B&eacute;zier, $b(t)=B^n_0(t) {\bf b}_0+B^n_1(t) {\bf b}_1+\cdots+B^n_n(t) {\bf b}_n$, $t\in[0,1]$, este o combinatie convexa a punctelor de control, deoarece pentru fiecare $t\in[0,1]$, polinoamele Bernstein, $B^n_k(t)=C_n^k t^k(1-t)^{n-k}\geq 0$ si suma lor este egala cu 1. Intradevar Step18: Gradul unei curbe B&eacute;zier este mai mic cu o unitate decat numarul punctelor sale de control. Astfel o curba B&eacute;zier de grad 1 este generata de doua puncte de control ${\bf b}_0, {\bf b}_1$, $b(t)={\bf b}_0 B_0^1(t)+{\bf b}_1B_1^1(t)$ $=(1-t){\bf b}_0+t{\bf b}_1= {\bf b}_0 t\vec{{\bf b}_0{\bf b}_1}$, si este segmentul determinat de cele doua puncte. 
O curba B&eacute;zier generata de trei puncte de control este un arc de parabola Step19: O curba B&eacute;zier interpoleaza extremitatile poligonului sau de control (adica trece sigur prin ${\bf b}_0$ si ${\bf b}_n$, deoarece daca $b$ este parametrizarea Bernstein, atunci $b(0)={\bf b}_0$ si $b(1)={\bf b}_n$. O curba B&eacute;zier nu trece insa prin punctele de control intermediare. O curba B&eacute;zier imita forma poligonului de control. Step20: Tangentele in extremitatile arcului de curba B&eacute;zier au directiile $\overrightarrow{{\bf b}0{\bf b}_1}$, respectiv $\overrightarrow{{\bf b}{n-1}{\bf b}{n}}$ Step21: In aceasta figura se observa ca segmentul de extremitati ${\bf b}^2_0, {\bf b}^2_1$ este tangent la curba in ${\bf b}^3_0$. Functia urmatoare calculeaza directia (vectorul director) al tangentei (vitezei) in ${\bf b}^n_0(t)$ Step22: In ultimul bloc de cod puteti inlocui pe $t$ cu diverse valori in $[0,1]$ si vedeti directia vitezei. Curbele Bezier se genereaza deobicei interactiv, alegand punctele de control cu mouse-ul si apoi trasand curba corespunzatoare. Pentru generarea interactiva veti primi un script ce se ruleaza in Spyder.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np def Curba(a, b, N): h=(b-a)/N t=np.arange(a,b, h) return (np.cos(t)+t*np.sin(t), np.sin(t)-t*np.cos(t))# functia returneaza tuple (x(t), y(t)) Explanation: Generarea si vizualizarea curbelor 1. Curbe plane O curba plana diferentiabila, data parametric, este imaginea, $im(r)$, a unei aplicatii diferentiabile $r:[a,b]\to\mathbb{R}^2$, $r(t)=(x(t), y(t))$. Astfel curba este submultimea din plan: $$\Gamma={(x(t), y(t))\in\mathbb{R}^2\:|\: t\in[a,b]}$$ Interpretand $t\in[a,b]$ ca fiind timpul, curba data parametric are o generare cinematica: ea este traiectoria unui punct mobil, a carui miscare este monitorizata in intervalul de timp $[a,b]$. Punctul de coordonate $(x(t), y(t))$ da pozitia punctului mobil la momentul $t$. Daca aplicatia parametrizare este de clasa $C^2$ (exista derivatele $x'(t), y'(t), x''(t), y''(t)$ si acestea sunt continue), atunci fiecarui punct $(x(t), y(t))$ al traiectoriei i se asociaza vectorul viteza la momentul $t$: $$\dot{\vec{r}}(t)=(x'(t), y'(t))^T,$$ respectiv acceleratia la momentul $t$: $$\ddot{\vec{r}}(t)=(x''(t), y''(t))^T$$ Discretizarea si vizualizarea unui curbe plane Pentru a genera o curba plana folosind pachetul matplotlib, divizam intervalul de timp $[a,b]$ prin puncte echidistante, $t_i$, cu pasul $h=(b-a)/N$, unde $N$ este numarul de intervale de diviziune: $t_i=a+i*h$, $i=\overline{0,N}$. In fiecare moment de timp $t_i$, se evalueaza functiile $x(t), y(t)$ ce definesc parametrizarea curbei si se obtin $N+1$ puncte de pe curba, $(x(t_i), y(t_i))$, $\overline{0,N}$. Apelul functiei plt.plot (x,y) conduce la generarea unei aproximatii a curbei, aproximatie ce consta din segmentele ce unesc cate doua puncte consecutive $(x(t_i), y(t_i))$, $(x(t_{i+1}), y(t_{i+1}))$, $i=\overline{0,N}$. Cu cat pasul de diviziune $h$ este mai mic (sau echivalent, $N$ este mai mare), cu atat aproximatia curbei este mai buna. End of explanation (x,y)=Curba(0, 3*np.pi, 15) plt.plot(x, y) plt.axis('equal') print type(x), type(y) print x.round(3) print y.round(3) Explanation: Pentru a intelege definitia acestei functii, o apelam mai intai pentru un $N$ mic si afisam tipul coordonatelor tuple-ului returnat: End of explanation (x,y)=Curba(0, 3*np.pi, 1500) plt.plot(x, y) plt.xlabel('x(t)') plt.ylabel('y(t)') plt.axis('equal') Explanation: Deci functia Curba returneaza un tuple 2D, $(x,y)$, in care fiecare coordonata este un array 1D (un vector). Mai precis, daca notam cu $(X(t), Y(t))$ coordonatele parametrizarii curbei, $$X(t)=\cos(t)+t\sin(t), \quad Y(t)=\sin(t)-t\cos(t)$$ atunci vectorii $x$ si $y$, coordonate ale tuple-ului returnat sunt: $$x=[X(t[0]), X(t[1]), \ldots, X(t[N])]$$ $$y=[Y(t[0]), Y(t[1]), \ldots, Y(t[N])]$$ Apeland acum functia cu un numar mai mare de puncte de diviziune obtinem: End of explanation def astroida(t, A=1):#(x(t),y(t)=(A*cos^3(t), A*sin^3(t)) return (A*np.power(np.cos(t), 3), A*np.power(np.sin(t), 3)) t=np.linspace(0,2*np.pi, 1000) (x,y)=astroida(t) plt.plot(x,y, 'g') plt.title('Astroida') plt.xlabel('x(t)') plt.ylabel('y(t)') plt.axis('equal')# setarea pe 'equal' are ca efect alegerea aceleasi unitati de masura pe Ox si Oy # in caz contrar (cazul implicit) figura este deformata. 
Explanation: Astroida: End of explanation def Lissajous(t,a,b, A=1, B=1, d=np.pi/2): return (A*np.sin(a*t+d), B*np.sin(b*t)) t=np.arange(0,2*np.pi, 0.01) pab=np.array([[1,2],[3,2], [3,4], [5,4]],float) for i in range(1,5): plt.subplot(2,2,i) plt.axis([-1.5,1.5, -1.5, 1.5]) x,y=Lissajous(t, pab[i-1, 0], pab[i-1, 1]) plt.plot(x,y, 'r') Explanation: Curba Lissajous are parametrizarea: $$\left{\begin{array}{lll} x(t)&=&A\sin( at+\delta)\ y(t)&=&B\sin(bt)\end{array}\right.\quad t\in[0, 2\pi]$$ End of explanation def r(t): return(t*np.sin(t), 1.5*np.cos(t)) def rprim(t): return (np.sin(t)+t*np.cos(t), -1.5*np.sin(t)) def rsecund(t): return (2*np.cos(t)-t*np.sin(t), -1.5*np.cos(t)) plt.axis([-5,2, -1.75, 1.75]) t=np.arange(0, 2*np.pi, 0.01) x,y=r(t) plt.plot(x,y)#traseaza curba T=np.linspace(0,2*np.pi, 10) X,Y=r(T) U,V=rprim(T) plt.quiver(X,Y,U,V, color='r', units='x', scale=5) normv=np.sqrt(U**2+V**2) print 'normele vectorilor viteza:\n', normv.round(3) W,Z=rsecund(T) plt.quiver(X,Y,W,Z, color='g', units='x', scale=5) norma=np.sqrt(W**2+Z**2) print 'normele vectorilor acceleratie:\n', norma.round(3) Explanation: Pentru a genera vectorii viteza/acceleratie in cateva momente, $t_i$, asociati traiectoriei parametrizate de $r(t)=(x(t), y(t))$, se evalueaza vectorul $\dot{\vec{r}}(t)$, respectiv $\ddot{\vec{r}}(t)$ in $t_i$ si se apeleaza functia plt.quiver(X, Y, U, V) unde $X$, respectiv $Y$, este array-ul 1D de coordonate $x(t_i)$, respectiv $y(t_i)$ , iar $U$ si $V$ au respectiv coordonatele $x'(t_i)$ si $y'(t_i)$, in cazul vitezei, si $x''(t_i)$, $y''(t_i)$, in cazul acceleratiei. De exemplu sa trasam curba parametrizata de: $$x(t)=t\sin(t), \quad y(t)=\cos(t), \quad t\in[0, 2\pi]$$ si vectorii viteza, respectiv acceleratie, in 8 puncte ale traiectoriei, atinse in miscarea punctului mobil in momentele $T[i]$, unde T=np.linspace(0,2*np.pi, 8): End of explanation from IPython.display import Image Image(filename='Imag/polar.png') Explanation: Observam atat din valorile normelor, cat si din imaginile vectorilor viteza/acceleratie ca in puncte diferite au valori diferite. Mai mult in punctele curbei cu curbura mai mare viteza are norma mai mica si acceleratia mai mare (vezi Cursul 13, curbura curbelor). Curbe in coordonate polare In afara de sistemul ortogonal de axe, $xOy$, $\mathbb{R}^2$ se poate raporta si la un sistem polar de coordonate. Un reper polar consta dintr-o pereche $(O;v)$, unde $O$ este un punct fixat, numit pol si $v\in\mathbb{R}^2$, un vector nenul ce defineste axa polara, $Ox$, care este axa de origine $O$, directie si sens $v$. Pozitia unui punct $M$ relativ la acest reper se indica prin coordonatele $(r, \theta)$, numite coordonate polare. $r$ este distanta de la $M$ la $O$, $r=||\vec{OM}||$, iar $\theta$ este masura in radiani a unghiului dintre $v$ (deci $Ox$) si $\vec{OM}$. End of explanation Image(filename='Imag/ortpolar.png') Explanation: Ducand o perpendiculara in $O$ pe axa polara construim sistemul ortogonal drept $xOy$, asociat celui polar. End of explanation #Generarea Cardioidei a=1; theta=np.arange(0, 2*np.pi, 0.01) r=a*(1+np.cos(theta)) plt.polar(theta, r, 'r') Explanation: Astfel punctul de coordonate polare $(r, \theta)$ are coordonatele carteziene $(x,y)$, unde $$x=r\cos(\theta), \quad y=r\sin(\theta)$$ O curba in coordonate polare are ecuatia $r=f(\theta)$, $\theta\in[\alpha, \beta]$. Cu alte cuvinte, curba este constituita din multimea punctelor din plan ce au coordonatele polare $(r=f(\theta), \theta)$. 
Exemplu de curba in coordonate polare este cardioida, de ecuatie $r=a(1+\cos(\theta))$, $a>0$, $\theta\in[0, 2\pi]$. Pentru a genera si vizualiza o curba in coordonate polare procedam astfel: se divizeaza intervalul $[\alpha, \beta]$ prin puncte echidistante $\theta_i$ si se apeleaza functia plt.polar(theta, r), unde vectorul theta are coordonatele $\theta_i$, iar vectorul r are coordonatele $f(\theta_i)$: End of explanation #cercul r=2 theta=np.arange(0, 2*np.pi, 0.01) r=2*np.ones(theta.size) plt.subplot(1,2,1, polar='true') plt.plot(theta, r, 'r', lw=2) #semidreapta theta= 2 pi/3 r=np.arange(0,2, 0.01) theta=(2*np.pi/3)*np.ones(r.size) plt.subplot(1,2,2, polar='true') plt.plot(theta, r, 'r', lw=2) plt.tight_layout(2)# aceasta functie include 2 spatii intre figurile din subplots Explanation: Remarcam ca fara nici o comanda speciala, o data cu apelul functiei plt.polar sunt generate cercuri concentrice cu originea in pol si in lungul cercului exterior sunt marcate valorile in grade (nu radiani) ale unghiului polar $\theta$. Sunt afisate si razele cercurilor concentrice. Cercul cu centrul in origine si de raza $a$, $x^2+y^2=a^2$, are in coordonate polare ecuatia: $$r=a,$$ iar semidreapta ce porneste din origine si formeaza cu axa polara unghiul $\theta_0$ are ecuatia $$\theta=\theta_0$$ Cu alte cuvinte, cercul $r=a$ este locul geometric al punctelor din plan ce au aceeasi distanta polara la $O$, distanta egala cu $a$. Semidreapta $\theta=\theta_0$ este locul geometric al punctelor din plan care au aceeasi coordonata polara $\theta_0$. End of explanation theta=np.arange(0,2*np.pi, 0.01) plt.polar(theta, np.sin(4*theta), 'r') plt.axis('off')# efectul acestui apel este suspendarea cercurilor si semidreptelor reperului polar Explanation: Trifoiul cu 4 foi are ecuatia $r=\sin(4\theta)$, $\theta\in[0,2\pi]$. End of explanation Image(filename='Imag/mimicauto.png') Explanation: Curba $r=\sin(n\theta)$ este un trifoi cu $n$ foi. La fel si $r=\cos(n\theta)$, $\theta\in[0,2\pi]$. Experimentati! Curbe B&eacute;zier Curbele date parametric sau in coordonate polare se pot genera doar daca se da efectiv expresia analitica a parametrizarii, respectiv a functiei in coordonate polare. Curbele B&eacute;zier sunt curbe definite procedural, adica pornind de la un numar finit de puncte si un parametru $t\in[0,1]$ se genereaza algoritmic un punct pe o curba. Curbele B&eacute;zier s-au nascut in laboratoarele de la Renault si Citro&euml;n in incercarea de a genera capote pentru automobile, mai deosebite. Ideea de baza consta in designul unui algoritm care sa genereze profilul unei capote ce imita un poligon de control. End of explanation Image(filename='Imag/convNonconv.png') Explanation: Poligonul de control este succesiunea de segmente ce unesc cate doua puncte consecutive, numite puncte de control. B&eacute;zier a definit curbele care-i poarta numele, definind curbe polinomiale, adica curbe de forma: $$\left{\begin{array}{lll}x(t)&=&a_0+a_1 t+\cdots+ a_n t^n\ y(t)&=&c_0+c_1t+\cdots+c_n t^n\end{array}\right.\quad t\in[a,b]$$ dar nu in acest fel, adica nu exprimand polinoamele in baza canonica $1,t, \ldots, t^n$, ci in baza Bernstein, $B^n_0(t), B^n_1(t), \ldots, B_n^n(t)$, unde $B^n_k(t)=C_n^k t^k (1-t)^{n-k}$, $k=\overline{0,n}$. O curba B&eacute;zier de puncte de control ${\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n$, este o curba avand parametrizarea $b(t)=B^n_0(t) {\bf b}_0+B^n_1(t) {\bf b}_1+\cdots+B^n_n(t) {\bf b}_n$, $t\in[0,1]$. 
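A minimal sketch of the suggested experiment, reusing the polar-plot conventions from the cells above:

# Draw the curves r = sin(n*theta) for a few values of n.
import numpy as np
import matplotlib.pyplot as plt

theta = np.arange(0, 2 * np.pi, 0.01)
for i, n in enumerate([3, 5, 6, 8], start=1):
    plt.subplot(2, 2, i, polar=True)
    plt.plot(theta, np.sin(n * theta), 'r')
    plt.axis('off')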
Daca $b(t)=(x(t), y(t))$, atunci coordonatele parametrizarii sunt: $$ \left{\begin{array}{lll} x(t)&=&x({\bf b}_0)B^n_0(t)+x({\bf b}_1)B^n_1(t)+\cdots+x({\bf b}_n)B^n_n(t)\ y(t)&=&y({\bf b}_0)B^n_0(t)+y({\bf b}_1)B^n_1(t)+\cdots+y({\bf b}_n)B^n_n(t)\end{array}\right.$$ unde $x({\bf b}_k), y({\bf b}_k)$ semnifica coordonata $x$, respectiv $y$, a punctului ${\bf b}_k$, $k=\overline{0,n}$. Curbele B&eacute;zier folosite in grafica, in modelarea geometrica a diverse produse sau in designul fonturilor sunt curbele B&eacute;zier de grad 3, adica curbele B&eacute;zier definite de 4 puncte de control ${\bf b}_0$, ${\bf b}_1$, ${\bf b}_2$, ${\bf b}_3$. De exemplu curba B&eacute;zier definita de punctele de control: $${\bf b}_0=(2,-1),{\bf b}_1 =(4,5), {\bf b}_2=(7, 6), {\bf b}_3 =(9,1)$$ are parametrizarea $$b(t)=(x(t), y(t)), \:\:\left{\begin{array}{lll} x(t)&=& 2B^3_0(t)+4B^3_1(t)+7B^3_2(t)+9B^3_3(t)\ y(t)&=& -1B^3_0(t)+5B^3_1(t)+6B^3_2(t)+B^3_3(t)\end{array}\right.\quad t\in[0,1]$$ Spre deosebire de B&eacute;zier care a definit curbele ce-i poarta numele printr-o parametrizare, de Casteljau a dat o definitie procedurala a acestor curbe. Folosind definitia lui B&eacute;zier, adica parametrizarea $b(t)=\sum_{k=0}^n {\bf b}_{k} B^n_k(t)$ a curbei, pentru a calcula un punct pe curba corespunzator parametrului $t$, trebuie evaluata aplicatia parametrizare in $t$. Definitia procedurala da un algoritm de calculul recursiv a unui punct corespunzator unui parametru $t\in[0,1]$, pe curba de puncte de control ${\bf b}_0, {\bf b}_1, \ldots, {\bf b}_n $. Pentru a descrie algoritmul de Casteljau, numit si schema de Casteljau, fixam cateva notiuni pe care se bazeaza definitia procedurala. O multime din plan este convexa daca o data cu doua puncte $A,B$, contine si segmentul ce le uneste. Exemple de multimi convexe: o dreapta, un segment de dreapta, un compact triunghiular. End of explanation Image(filename='Imag/infasConvexa.png') Explanation: In figura de mai sus multimea din stanga este convexa, iar cea din dreapta neconvexa, pt ca desi punctele marcate cu albastru apartin multimii, segmentul ce le uneste nu este in intregime inclus in multime. O combinatie convexa a punctelor $A_0, A_1, \ldots, A_n$ din $\mathbb{R}^2$ sau $\mathbb{R}^3$ este de forma: $$ \alpha_0A_0+\alpha_1A_1+\cdots+\alpha_n A_n, \quad \mbox{unde}\,\, \alpha_i\in [0,1],\: \mbox{si}\,\, \alpha_0+\alpha_1+\cdots+\alpha_n=1.$$ Sa aratam ca o combinatie convexa reprezinta un punct. Intr-adevar, din relatia $\alpha_0+\alpha_1+\cdots+\alpha_n=1$ exprimam pe $\alpha_0=1-(\alpha_1+\alpha_2+\cdots+\alpha_n)$, il inlocuim in relatia ce defineste combinatia convexa si obtinem: $$\begin{array}{l}A_0+\alpha_1(A_1-A_0)+\alpha_2(A_2-A_0)+\cdots+\alpha_n(A_n-A_0)=\ \underbrace{A_0}{punct}+\underbrace{\alpha_1\overrightarrow{A_0A_1}+\alpha_2\overrightarrow{A_0A_2}+\cdots+\alpha_n\overrightarrow{A_0A_n}}{vector}:=\underbrace{P}_{punct}\end{array}$$ Infasuratoare convexa a punctelor $A_0, A_1, \ldots, A_n$ este multimea tuturor combinatiilor convexe ale acestor puncte. Infasuratoarea convexa a doua puncte, $A, B$, este segmentul de extremitati $A, B$, adica $[A,B]\stackrel{def}{=}{P\,|\, P=A+s\vec{AB}, s\in[0,1]}$. Infasuratoarea a trei puncte necoliniare, $A_0, A_1, A_2$ este compactul triunghiular $\triangle{A_0A_1A_2}$. End of explanation Image(filename='Imag/cbezier0C.png') Explanation: Infasuratoarea convexa a $n$ puncte din plan este cel mai mic poligon convex ce contine toate punctele. 
In figura de mai sus infasuratoarea convexa apunctelor $C_0, C_1, C_2, C_3, C_4$ este poligonul plin, de varfuri $C_0, C_1, C_4, C_2$. Daca $A, B$ sunt doua puncte in plan sau spatiu, o combinatie convexa a lor este un punct $M=\alpha_1 A+\alpha_2 B$, $\alpha_i\in[0,1]$, $\alpha_1+\alpha_2=1$. Notand $\alpha_2=t$, rezulta ca $\alpha_1=1-t$ si deci punctul $M$ se exprima ca o combinatie convexa a lui $A$ si $B$ astfel: $$M=(1-t)A+tB, \,\, t\in[0,1]$$ Punctul $M$ astfel definit apartine segmentului $[A,B]$. Daca $\vec{AM}=r\vec{MB}$, spunem ca punctul $M\neq B$, imparte segmentul $[A,B]$ in raportul $r$. Un punct $M\neq B$ al segmentului $[A,B]$, $M=(1-t)A+tB$, imparte segmentul in raportul $\displaystyle\frac{t}{1-t}$ si reciproc, daca $M$ imparte segmentul $[A,B]$ in raportul $\displaystyle\frac{t}{1-t}$, $t\neq 1$, atunci $M=(1-t)A+tB$. Intr-adevar: $$\begin{array}{l}M=(1-t)A+tB\,\, \Leftrightarrow\,\, M=A+t\vec{AB}\,\,\Leftrightarrow\,\, \vec{AM}=t(\vec{AM}+\vec{MB})\,\,\Leftrightarrow\ \Leftrightarrow\,\, (1-t)\vec{AM}=t\vec{MB}\,\,\Leftrightarrow \vec{AM}=\displaystyle\frac{t}{1-t}\vec{MB}\end{array}$$ Pornind de la aceasta proprietate explicam mai intai schema lui de Casteljau pentru cazul curbei definita de 3 puncte de control ${\bf b}_0, {\bf b}_1, {\bf b}_2$. Acestea fiind punctele initiale (date), se renoteaza ${\bf b}_0^0, {\bf b}^0_1, {\bf b}^0_2$, indicele $0$ din pozitia de sus indicand etapa 0 a procedurii. Fixand un parametru $t\in[0,1)$, acesta imparte segmentul $[0,1]$ in raportul $\displaystyle\frac{t}{1-t}$. End of explanation Image(filename='Imag/Decast4p.png') Explanation: In etapa 1 se determina pe fiecare segment determinat de doua puncte de control consecutive, punctul ce imparte segmentul respectiv in acelasi raport $\displaystyle\frac{t}{1-t}$ in care $t$ imparte pe $[0,1]$: $$\begin{array}{lll}{\bf b}^1_0&=&(1-t) {\bf b}_0^0+t{\bf b}^0_1\ {\bf b}^1_1&=&(1-t){\bf b}_1^0+t{\bf b}^0_2\end{array}$$ In etapa a doua, pe segmentul determinat de cele doua puncte calculate in etapa precedenta se determina punctul ${\bf b}^2_0$ ce imparte segmentul $[{\bf b}^1_0, {\bf b}^1_1]$ in raportul $\displaystyle\frac{t}{1-t}$: $${\bf b}^2_0=(1-t){\bf b}^1_0+t{\bf b}^1_1$$ Inlocuind punctele calculate in etapa 1 in exprimarea lui ${\bf b}0^2$, calculat in etapa 2, obtinem: $$\begin{array}{lll}{\bf b}^2_0&=&(1-t)[(1-t){\bf b}_0^0+t{\bf b}^0_1]+t[(1-t){\bf b}_1^0+t{\bf b}^0_2]=\&=&\underbrace{(1-t)^2}{B^2_0(t)}{\bf b}0^0+\underbrace{2t(1-t)}{B^2_1(t)}{\bf b}1^0+\underbrace{t^2}{B^2_2(t)}{\bf b}^0_2=b(t)\end{array},$$ adica punctul $b^2_0$ reprezinta parametrizarea curbei data de B&eacute;zier, in baza Bernstein, evaluata in parametrul $t$. Remarcam ca in fiecare etapa a schemei de Casteljau, numarul punctelor calculate se reduce cu o unitate fata de etapa precedenta si in ultima etapa rezulta punctul de pe curba B&eacute;zier, corespunzator parametrului $t$. In figura urmatoare ilustram algoritmul de Casteljau pentru o curba definita de 4 puncte de control: End of explanation def deCasteljau(t,b): #punctele de control b_0, b_1, ..., b_n, sunt date intr-un array 2D, #de n+1 linii si 2 coloane. 
Pe linia i avem coordonatele punctului b_i a=np.copy(b) # se copiaza array-ul b in array-ul a N=a.shape[0] # interogam cat este numarul de linii ale lui a (deci si ale lui b); n=N-1 for r in range(1,N): # echivalentul in C a lui for(r=1;r<N, r++); deci 1<=r<=n for i in range(N-r):# in C for(i=0; i<N-r;i++) a[i,:]=(1-t)*a[i,:]+t*a[i+1,:]# punctul i din etapa r este combinatia convexa # a punctelor i si i+1 din etapa r-1 return a[0,:]# a[0,:] contine coordonatele pctului de pe curba coresp lui t def dreptunghiDesen(b): cmin=np.zeros(2) # cmin[i] va stoca coordonata i, minima, a punctelor de control cmax=np.zeros(2)# cmax[i] va stoca coordonata i, maxima, a punctelor de control for i in range(2): cmin[i]=np.amin(b[:,i])-0.5 cmax[i]=np.amax(b[:,i])+0.5 return (cmin, cmax) def curbaBezier(b, nr=100):#nr=nr de puncte ce se calculeaza pe curba h=1.0/nr pcteC=[] for k in range(nr): t=k*h# 0.01 este pasul de divizare a intervalului [0,1], al parametrului t P=deCasteljau(t,b)# P punct pe curba Bezier, corespunzator lui t pcteC.append(P) return pcteC# functia returneaza lista punctelor de pe curba calculate def DrawBezier(b): cmin, cmax=dreptunghiDesen(b) plt.axis([cmin[0], cmax[0], cmin[1], cmax[1]]) plt.plot(b[:, 0], b[:,1], 'bo', b[:, 0], b[:,1])#marcheaza punctele de control si #traseaza poligonul de control pcteC=curbaBezier(b)# pcteC este lista punctelor calculate pe curba Xpcte, Ypcte=zip(*pcteC)# functia zip() extrage din lista pcteC, lista x-silor, si y-cilor plt.plot(Xpcte, Ypcte, 'r') ################ # Pentru a a genera o curba Bezier este suficient sa declaram array-ul punctelor b=np.array([[2,-1],[1.75,2.75], [5,6], [8,0]], float)# 4 puncte de control DrawBezier(b) Explanation: Generalizand la cazul unei curbe definite de un numar arbitrar de puncte de control, $ {\bf b}{0}, {\bf b}{1}, \ldots, {\bf b}_{n}$, $n\geq 1$, dupa $n$ etape a schemei de Casteljau aplicata dupa acelasi principiu, folosind un parametru $t\in[0,1]$, se obtine un punct pe curba B&eacute;zier corespunzator acestui parametru. Punctele de control calculate in etapele intermediare se pot afisa, teoretic, intr-o matrice triunghiulara de puncte. Si anume punctele de control sunt puncte initiale, date, deci corespund etapei 0 a procedurii recursive si le adaugam indicele 0 in pozitia de sus: $$ \begin{array}{llllll} {\bf b}0^0 & {\bf b}{0}^1 & {\bf b}{0}^2& \cdots & {\bf b}_0^{n-1}& {\bf b}^n_0\ {\bf b}_1^0 & {\bf b}{1}^1 & {\bf b}{1}^2& \cdots & {\bf b}_1^{n-1}& \ {\bf b}_2^0 & {\bf b}{2}^1 & {\bf b}{2}^2 & \cdots & & \ \vdots & \vdots &\vdots & & & \ {\bf b}{n-2}^0 & {\bf b}{n-2}^1& {\bf b}{n-2}^2& & & \ {\bf b}{n-1}^0 & {\bf b}{n-1}^1 & & & & \ {\bf b}_n^0 & & & & & \end{array} $$ Punctele din coloana $r$ corespund etapei $r$ a schemei recursive, $r=\overline{1,n}$. 
Succint, schema de Casteljau se exprima prin formula $$ {\bf b}{i}^{r}(t)=(1-t)\,{\bf b}{i}^{r-1}(t)+t\,{\bf b}{i+1}^{r-1}(t),\: r=\overline{1,n},\: i=\overline{0,n-r}, $$ adica punctul din pozitia $i$ a etapei $r$ este o combinatie convexa a punctelor $i$ si $i+1$ din etapa $r-1$: $$ \begin{array}{lll} {\bf b}^{r-1}_i&\stackrel{1-t}{\rightarrow}&{\bf b}^{r}_i\ {\bf b}^{r-1}{i+1}&\stackrel{\nearrow}{t}&\end{array}$$ Desi la prima vedere s-ar parea ca pentru implementarea schemei de Casteljau avem nevoie de o matrice triunghiulara de puncte, in realitate este suficient un sir auxiliar de puncte $({\bf a}_0,{\bf a}_1,\ldots {\bf a}_n)$, in care la fiecare apel al schemei de Casteljau se copiaza punctele de control $({\bf b}_0,{\bf b}_1,\ldots {\bf b}_n)$, ${\bf a}_i={\bf b}_i$, $i=0,1,\ldots n$. In etapa $1$, de exemplu, calculam $(1-t){\bf a}0+t{\bf a}_1$ si pentru ca punctul ${\bf a}_0$ nu va mai fi folosit in aceasta etapa, atribuim $(1-t){\bf a}_0+t{\bf a}_1\to {\bf a}_0$, $\ldots$, $(1-t){\bf a}_i+t{\bf a}{i+1}\to {\bf a}_i$. In fiecare etapa $r$ a schemei de Casteljau doar primele punctele ${\bf a}0, {\bf a}{1}, \ldots, {\bf a}_{n-r}$ isi modifica "continutul": $$\begin{array}{cccccc} \mbox{Etapa}\,\, 0&\mbox{Etapa}\,\,1&\mbox{Etapa}\,\,2&\cdots&\mbox{Etapa}\,\, n-1&\mbox{Etapa}\,\,n\ \downarrow&\downarrow&\downarrow&\cdots&\downarrow&\downarrow\ {\bf a}0 & {\bf a}{0} & {\bf a}{0}& \cdots & {\bf a}_0& {\bf a}_0\ {\bf a}_1 & {\bf a}{1} & {\bf a}{1}& \cdots & {\bf a}_1& \ {\bf a}_2 & {\bf a}{2} & {\bf a}{2} & \cdots & & \ \vdots & \vdots &\vdots & & & \ {\bf a}{n-2}& {\bf a}{n-2}& {\bf a}{n-2}& & & \ {\bf a}{n-1}& {\bf a}{n-1} & & && \ {\bf a}_n & & & && \end{array} $$ In etapa $n$ punctul ${\bf a}_0$ contine punctul curbei B&eacute;zier corespunzator parametrului $t$. Pentru a discretiza o curba B&eacute;zier, se divizeaza intervalul $[0,1]$ prin puncte echidistante. Fixand numarul $N$ de subintervale egale ale intervalului $[0,1]$, pasul de divizare este $h=1.0/N$, iar punctele de diviziune sunt $t_j=j*h$, $j=0,1,2,\ldots, N$. Pentru fiecare parametru $t_j$, $j=0,1,\ldots N$, se apeleaza schema (functia) de Casteljau, obtinand astfel punctele ${\bf P}_j$, de pe curba, care apoi se interpoleaza liniar si obtinem imaginea curbei. End of explanation b=np.array([[0,0], [1.5, 2.3], [3.7, -1], [6, 3], [2.6, 2]],float) plt.subplot(3,1,1) plt.title('Curba Bezier raportata relativ la poligonul de control') DrawBezier(b) plt.subplot(3,1,2) plt.title('Infasuratoarea convexa a punctelor de control') cmin, cmax=dreptunghiDesen(b) plt.axis([cmin[0], cmax[0], cmin[1], cmax[1]]) Vx=[] Vy=[] for i in [0,1,3,2, 0]: Vx.append(b[i,0]) Vy.append(b[i,1]) plt.fill(Vx,Vy, color='#99FFCC', edgecolor='g') plt.plot(b[:, 0], b[:,1], 'bo') plt.subplot(3,1,3) plt.title('Curba Bezier, inclusa in infasuratoarea punctelor de control') plt.fill(Vx,Vy, color='#99FFCC') DrawBezier(b) plt.tight_layout(0.75)# aceasta functie include 0.75* spatiu intre figurile din subplots Explanation: Proprietati ale curbelor Bezier Parametrizarea in baza Bernstein a unei curbe B&eacute;zier, $b(t)=B^n_0(t) {\bf b}_0+B^n_1(t) {\bf b}_1+\cdots+B^n_n(t) {\bf b}_n$, $t\in[0,1]$, este o combinatie convexa a punctelor de control, deoarece pentru fiecare $t\in[0,1]$, polinoamele Bernstein, $B^n_k(t)=C_n^k t^k(1-t)^{n-k}\geq 0$ si suma lor este egala cu 1. 
Intradevar: $$\sum_{k=0}^n B_k^n(t)=\underbrace{\sum_{k=0}^n C_n^k t^k(1-t)^{n-k}=(t+(1-t))^n}_{formula\:\: binomului\:\: lui\:\: Newton}=1$$ Deci o curba B&eacute;zier este inclusa in infasuratoarea convexa a punctelor sale de control. Aceasta proprietate are importanta in grafica, pentru a sti unde anume in plan este plasata curba. In exemplul de mai sus am exploatat deja aceasta proprietate, indicand ca dreptunghi de desen, un dreptunghi care include toate punctele de control, deci si infasuratoarea lor convexa. End of explanation b=np.array([[1.3, 2], [0,-1.5] , [3, -0.7]], float) DrawBezier(b) Explanation: Gradul unei curbe B&eacute;zier este mai mic cu o unitate decat numarul punctelor sale de control. Astfel o curba B&eacute;zier de grad 1 este generata de doua puncte de control ${\bf b}_0, {\bf b}_1$, $b(t)={\bf b}_0 B_0^1(t)+{\bf b}_1B_1^1(t)$ $=(1-t){\bf b}_0+t{\bf b}_1= {\bf b}_0 t\vec{{\bf b}_0{\bf b}_1}$, si este segmentul determinat de cele doua puncte. O curba B&eacute;zier generata de trei puncte de control este un arc de parabola: End of explanation b=np.array([[0,0], [1,2], [2.75, 1.5],[1.5, -1.25],[-2,-2],[2,-3]],float) DrawBezier(b) Explanation: O curba B&eacute;zier interpoleaza extremitatile poligonului sau de control (adica trece sigur prin ${\bf b}_0$ si ${\bf b}_n$, deoarece daca $b$ este parametrizarea Bernstein, atunci $b(0)={\bf b}_0$ si $b(1)={\bf b}_n$. O curba B&eacute;zier nu trece insa prin punctele de control intermediare. O curba B&eacute;zier imita forma poligonului de control. End of explanation Image(filename='Imag/Decast4p.png') Explanation: Tangentele in extremitatile arcului de curba B&eacute;zier au directiile $\overrightarrow{{\bf b}0{\bf b}_1}$, respectiv $\overrightarrow{{\bf b}{n-1}{\bf b}{n}}$: $$ \dot{\vec{b}}(t=0)=n\,\overrightarrow{{\bf b}_0{\bf b}_1}, \quad \dot{\overrightarrow{b}}(1)=n\,\overrightarrow{\bf b}{n-1}{\bf b}_n $$ In procesul iterativ al schemei de Casteljau, de evaluare a unui punct $b(t)$, de pe curba B&eacute;zier definita de punctele de control $ {\bf b}{0}, {\bf b}{1}, \ldots, {\bf b}_{n}$, se determina practic si directia tangentei (a vectorului viteza la momentul $t$) la curba in acel punct. Si anume se poate demonstra ca vectorul tangent la curba B&eacute;zier in punctul corespunzator parametrului $\bf t\in[0,1]$ este: $$ \dot{\vec{b}}(t)=n({\bf b}{1}^{n-1}(t)-{\bf b}{0}^{n-1}(t))=n\,\overrightarrow{{\bf b}{0}^{n-1}(t){\bf b}{1}^{n-1}(t)}, $$ unde ${\bf b}{0}^{n-1}(t)$,$\,\,{\bf b}{1}^{n-1}(t)$ sunt punctele calculate in penultima etapa $($etapa $n-1$ $)$ a schemei de Casteljau. End of explanation def tangBezier(b,t): a=np.copy(b) N=a.shape[0] for r in range(1,N-1): # deci 1<=r<=n-1, unde n=N-1 for i in range(N-r): a[i,:]=(1-t)*a[i,:]+t*a[i+1,:] v=a[1,:]-a[0,:] #vectorul director al vitezei in b(t) return v b=np.array([[0,0], [1,1.7],[3, 1.5], [5, -1]],float) DrawBezier(b) t=0.23 P=deCasteljau(t,b) plt.plot(P[0], P[1], 'ro') v=tangBezier(b,t) v=v/np.linalg.norm(v) #versorul vitezei plt.arrow(P[0], P[ 1], v[0], v[1], fc="k", ec="k",head_width=0.075, head_length=0.2) Explanation: In aceasta figura se observa ca segmentul de extremitati ${\bf b}^2_0, {\bf b}^2_1$ este tangent la curba in ${\bf b}^3_0$. 
Functia urmatoare calculeaza directia (vectorul director) al tangentei (vitezei) in ${\bf b}^n_0(t)$: End of explanation from IPython.core.display import HTML def css_styling(): styles = open("./custom.css", "r").read() return HTML(styles) css_styling() Explanation: In ultimul bloc de cod puteti inlocui pe $t$ cu diverse valori in $[0,1]$ si vedeti directia vitezei. Curbele Bezier se genereaza deobicei interactiv, alegand punctele de control cu mouse-ul si apoi trasand curba corespunzatoare. Pentru generarea interactiva veti primi un script ce se ruleaza in Spyder. End of explanation
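A small numerical check of the endpoint-tangent property, assuming the tangBezier helper defined above: at t=0 the returned direction is parallel to b1-b0, and at t=1 to b3-b2 for these four control points.

# Sketch: compare normalized tangent directions at the two endpoints.
b = np.array([[0, 0], [1, 1.7], [3, 1.5], [5, -1]], float)
for t in (0.0, 1.0):
    v = tangBezier(b, t)
    print t, v / np.linalg.norm(v)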
9,235
Given the following text description, write Python code to implement the functionality described below step by step Description: Homework 2 In this homework, we are going to play with Twitter data. The data is represented as rows of of JSON strings. It consists of tweets, messages, and a small amount of broken data (cannot be parsed as JSON). For this homework, we will only focus on tweets and ignore all other messages. UPDATES Announcement We changed the test files size and the corresponding file paths. In order to avoid long waiting queue, we decided to limit the input files size for the Playground submissions. Please read the following files to get the input file paths Step1: Part 1 Step2: Broken tweets and irrelevant messages The data of this assignment may contain broken tweets (invalid JSON strings). So make sure that your code is robust for such cases. In addition, some lines in the input file might not be tweets, but messages that the Twitter server sent to the developer (such as limit notices). Your program should also ignore these messages. Hint Step3: (2) Count the number of different users in all valid tweets (hint Step4: Part 2 Step5: (2) Count the number of posts from each user partition Count the number of posts from group 0, 1, ..., 6, plus the number of posts from users who are not in any partition. Assign users who are not in any partition to the group 7. Put the results of this step into a pair RDD (group_id, count) that is sorted by key. Step6: (3) Print the post count using the print_post_count function we provided. It should print Group 0 posted 81 tweets Group 1 posted 199 tweets Group 2 posted 45 tweets Group 3 posted 313 tweets Group 4 posted 86 tweets Group 5 posted 221 tweets Group 6 posted 400 tweets Group 7 posted 798 tweets Step19: Part 3 Step20: (1) Tokenize the tweets using the tokenizer we provided above named tok. Count the number of mentions for each tokens regardless of specific user group. Call print_count function to show how many different tokens we have. It should print Number of elements Step21: (2) Tokens that are mentioned by too few users are usually not very interesting. So we want to only keep tokens that are mentioned by at least 100 users. Please filter out tokens that don't meet this requirement. Call print_count function to show how many different tokens we have after the filtering. Call print_tokens function to show top 20 most frequent tokens. It should print Number of elements Step22: (3) For all tokens that are mentioned by at least 100 users, compute their relative popularity in each user group. Then print the top 10 tokens with highest relative popularity in each user group. In case two tokens have same relative popularity, break the tie by printing the alphabetically smaller one. Hint
Python Code: import findspark findspark.init() import pyspark sc = pyspark.SparkContext() # %install_ext https://raw.github.com/cpcloud/ipython-autotime/master/autotime.py %load_ext autotime def print_count(rdd): print 'Number of elements:', rdd.count() env="local" files='' path = "Data/hw2-files.txt" if env=="prod": path = '../Data/hw2-files-1gb.txt' with open(path) as f: files=','.join(f.readlines()).replace('\n','') rdd = sc.textFile(files).cache() print_count(rdd) Explanation: Homework 2 In this homework, we are going to play with Twitter data. The data is represented as rows of of JSON strings. It consists of tweets, messages, and a small amount of broken data (cannot be parsed as JSON). For this homework, we will only focus on tweets and ignore all other messages. UPDATES Announcement We changed the test files size and the corresponding file paths. In order to avoid long waiting queue, we decided to limit the input files size for the Playground submissions. Please read the following files to get the input file paths: * 1GB test: ../Data/hw2-files-1gb.txt * 5GB test: ../Data/hw2-files-5gb.txt * 20GB test: ../Data/hw2-files-20gb.txt We updated the json parsing section of this notebook. Python built-in json library is too slow. In our experiment, 70% of the total running time is spent on parsing tweets. Therefore we recommend using ujson instead of json. It is at least 15x faster than the built-in json library according to our tests. Important Reminders The tokenizer in this notebook contains UTF-8 characters. So the first line of your .py source code must be # -*- coding: utf-8 -*- to define its encoding. Learn more about this topic here. The input files (the tweets) contain UTF-8 characters. So you have to correctly encode your input with some function like lambda text: text.encode('utf-8'). ../Data/hw2-files-&lt;param&gt; may contain multiple lines, one line for one input file. You can use a single textFile call to read multiple files: sc.textFile(','.join(files)). The input file paths in ../Data/hw2-files-&lt;param&gt; contains trailing spaces (newline etc.), which may confuse HDFS if not removed. Your program will be killed if it cannot finish in 5 minutes. The running time of last 100 submissions (yours and others) can be checked at the "View last 100 jobs" tab. For your information, here is the running time of our solution: 1GB test: 53 seconds, 5GB test: 60 seconds, 20GB test: 114 seconds. Tweets A tweet consists of many data fields. Here is an example. You can learn all about them in the Twitter API doc. We are going to briefly introduce only the data fields that will be used in this homework. created_at: Posted time of this tweet (time zone is included) id_str: Tweet ID - we recommend using id_str over using id as Tweet IDs, becauase id is an integer and may bring some overflow problems. text: Tweet content user: A JSON object for information about the author of the tweet id_str: User ID name: User name (may contain spaces) screen_name: User screen name (no spaces) retweeted_status: A JSON object for information about the retweeted tweet (i.e. this tweet is not original but retweeteed some other tweet) All data fields of a tweet except retweeted_status entities: A JSON object for all entities in this tweet hashtags: An array for all the hashtags that are mentioned in this tweet urls: An array for all the URLs that are mentioned in this tweet Data source All tweets are collected using the Twitter Streaming API. 
Users partition Besides the original tweets, we will provide you with a Pickle file, which contains a partition over 452,743 Twitter users. It contains a Python dictionary {user_id: partition_id}. The users are partitioned into 7 groups. Part 0: Load data to a RDD The tweets data is stored on AWS S3. We have in total a little over 1 TB of tweets. We provide 10 MB of tweets for your local development. For the testing and grading on the homework server, we will use different data. Testing on the homework server In the Playground, we provide three different input sizes to test your program: 1 GB, 10 GB, and 100 GB. To test them, read files list from ../Data/hw2-files-1gb.txt, ../Data/hw2-files-5gb.txt, ../Data/hw2-files-20gb.txt, respectively. For final submission, make sure to read files list from ../Data/hw2-files-final.txt. Otherwise your program will receive no points. Local test For local testing, read files list from ../Data/hw2-files.txt. Now let's see how many lines there are in the input files. Make RDD from the list of files in hw2-files.txt. Mark the RDD to be cached (so in next operation data will be loaded in memory) call the print_count method to print number of lines in all these files It should print Number of elements: 2193 End of explanation import ujson json_example = ''' { "id": 1, "name": "A green door", "price": 12.50, "tags": ["home", "green"] } ''' json_obj = ujson.loads(json_example) json_obj Explanation: Part 1: Parse JSON strings to JSON objects Python has built-in support for JSON. UPDATE: Python built-in json library is too slow. In our experiment, 70% of the total running time is spent on parsing tweets. Therefore we recommend using ujson instead of json. It is at least 15x faster than the built-in json library according to our tests. End of explanation import ujson def safe_parse(raw_json): tweet={} try: tweet = ujson.loads(raw_json) except ValueError: pass return tweet #filter out rate limites {"limit":{"track":77,"timestamp_ms":"1457610531879"}} tweets = rdd.map(lambda json_str: safe_parse(json_str))\ .filter(lambda h: "text" in h)\ .map(lambda tweet: (tweet["user"]["id_str"], tweet["text"]))\ .map(lambda (x,y): (x, y.encode("utf-8"))).cache() Explanation: Broken tweets and irrelevant messages The data of this assignment may contain broken tweets (invalid JSON strings). So make sure that your code is robust for such cases. In addition, some lines in the input file might not be tweets, but messages that the Twitter server sent to the developer (such as limit notices). Your program should also ignore these messages. Hint: Catch the ValueError (1) Parse raw JSON tweets to obtain valid JSON objects. From all valid tweets, construct a pair RDD of (user_id, text), where user_id is the id_str data field of the user dictionary (read Tweets section above), text is the text data field. End of explanation def print_users_count(count): print 'The number of unique users is:', count print_users_count(tweets.map(lambda x:x[0]).distinct().count()) Explanation: (2) Count the number of different users in all valid tweets (hint: the distinct() method). 
It should print The number of unique users is: 2083 End of explanation import cPickle as pickle path = 'Data/users-partition.pickle' if env=="prod": path = '../Data/users-partition.pickle' partitions = pickle.load(open(path, 'rb')) #{user_Id, partition_id} - {'583105596': 6} partition_bc = sc.broadcast(partitions) Explanation: Part 2: Number of posts from each user partition Load the Pickle file ../Data/users-partition.pickle, you will get a dictionary which represents a partition over 452,743 Twitter users, {user_id: partition_id}. The users are partitioned into 7 groups. For example, if the dictionary is loaded into a variable named partition, the partition ID of the user 59458445 is partition["59458445"]. These users are partitioned into 7 groups. The partition ID is an integer between 0-6. Note that the user partition we provide doesn't cover all users appear in the input data. (1) Load the pickle file. End of explanation count = tweets.map(lambda x:partition_bc.value.get(x[0], 7)).countByValue().items() Explanation: (2) Count the number of posts from each user partition Count the number of posts from group 0, 1, ..., 6, plus the number of posts from users who are not in any partition. Assign users who are not in any partition to the group 7. Put the results of this step into a pair RDD (group_id, count) that is sorted by key. End of explanation def print_post_count(counts): for group_id, count in counts: print 'Group %d posted %d tweets' % (group_id, count) print print_post_count(count) Explanation: (3) Print the post count using the print_post_count function we provided. It should print Group 0 posted 81 tweets Group 1 posted 199 tweets Group 2 posted 45 tweets Group 3 posted 313 tweets Group 4 posted 86 tweets Group 5 posted 221 tweets Group 6 posted 400 tweets Group 7 posted 798 tweets End of explanation # %load happyfuntokenizing.py #!/usr/bin/env python This code implements a basic, Twitter-aware tokenizer. A tokenizer is a function that splits a string of text into words. In Python terms, we map string and unicode objects into lists of unicode objects. There is not a single right way to do tokenizing. The best method depends on the application. This tokenizer is designed to be flexible and this easy to adapt to new domains and tasks. The basic logic is this: 1. The tuple regex_strings defines a list of regular expression strings. 2. The regex_strings strings are put, in order, into a compiled regular expression object called word_re. 3. The tokenization is done by word_re.findall(s), where s is the user-supplied string, inside the tokenize() method of the class Tokenizer. 4. When instantiating Tokenizer objects, there is a single option: preserve_case. By default, it is set to True. If it is set to False, then the tokenizer will downcase everything except for emoticons. The __main__ method illustrates by tokenizing a few examples. I've also included a Tokenizer method tokenize_random_tweet(). If the twitter library is installed (http://code.google.com/p/python-twitter/) and Twitter is cooperating, then it should tokenize a random English-language tweet. Julaiti Alafate: I modified the regex strings to extract URLs in tweets. 
__author__ = "Christopher Potts" __copyright__ = "Copyright 2011, Christopher Potts" __credits__ = [] __license__ = "Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License: http://creativecommons.org/licenses/by-nc-sa/3.0/" __version__ = "1.0" __maintainer__ = "Christopher Potts" __email__ = "See the author's website" ###################################################################### import re import htmlentitydefs ###################################################################### # The following strings are components in the regular expression # that is used for tokenizing. It's important that phone_number # appears first in the final regex (since it can contain whitespace). # It also could matter that tags comes after emoticons, due to the # possibility of having text like # # <:| and some text >:) # # Most imporatantly, the final element should always be last, since it # does a last ditch whitespace-based tokenization of whatever is left. # This particular element is used in a couple ways, so we define it # with a name: emoticon_string = r (?: [<>]? [:;=8] # eyes [\-o\*\']? # optional nose [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth | [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth [\-o\*\']? # optional nose [:;=8] # eyes [<>]? ) # The components of the tokenizer: regex_strings = ( # Phone numbers: r (?: (?: # (international) \+?[01] [\-\s.]* )? (?: # (area code) [\(]? \d{3} [\-\s.\)]* )? \d{3} # exchange [\-\s.]* \d{4} # base ) , # URLs: rhttp[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+ , # Emoticons: emoticon_string , # HTML tags: r<[^>]+> , # Twitter username: r(?:@[\w_]+) , # Twitter hashtags: r(?:\#+[\w_]+[\w\'_\-]*[\w_]+) , # Remaining word types: r (?:[a-z][a-z'\-_]+[a-z]) # Words with apostrophes or dashes. | (?:[+\-]?\d+[,/.:-]\d+[+\-]?) # Numbers, including fractions, decimals. | (?:[\w_]+) # Words without apostrophes or dashes. | (?:\.(?:\s*\.){1,}) # Ellipsis dots. | (?:\S) # Everything else that isn't whitespace. ) ###################################################################### # This is the core tokenizing regex: word_re = re.compile(r(%s) % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE) # The emoticon string gets its own regex so that we can preserve case for them as needed: emoticon_re = re.compile(regex_strings[1], re.VERBOSE | re.I | re.UNICODE) # These are for regularizing HTML entities to Unicode: html_entity_digit_re = re.compile(r"&#\d+;") html_entity_alpha_re = re.compile(r"&\w+;") amp = "&amp;" ###################################################################### class Tokenizer: def __init__(self, preserve_case=False): self.preserve_case = preserve_case def tokenize(self, s): Argument: s -- any string or unicode object Value: a tokenize list of strings; conatenating this list returns the original string if preserve_case=False # Try to ensure unicode: try: s = unicode(s) except UnicodeDecodeError: s = str(s).encode('string_escape') s = unicode(s) # Fix HTML character entitites: s = self.__html2unicode(s) # Tokenize: words = word_re.findall(s) # Possible alter the case, but avoid changing emoticons like :D into :d: if not self.preserve_case: words = map((lambda x : x if emoticon_re.search(x) else x.lower()), words) return words def tokenize_random_tweet(self): If the twitter library is installed and a twitter connection can be established, then tokenize a random tweet. try: import twitter except ImportError: print "Apologies. 
The random tweet functionality requires the Python twitter library: http://code.google.com/p/python-twitter/" from random import shuffle api = twitter.Api() tweets = api.GetPublicTimeline() if tweets: for tweet in tweets: if tweet.user.lang == 'en': return self.tokenize(tweet.text) else: raise Exception("Apologies. I couldn't get Twitter to give me a public English-language tweet. Perhaps try again") def __html2unicode(self, s): Internal metod that seeks to replace all the HTML entities in s with their corresponding unicode characters. # First the digits: ents = set(html_entity_digit_re.findall(s)) if len(ents) > 0: for ent in ents: entnum = ent[2:-1] try: entnum = int(entnum) s = s.replace(ent, unichr(entnum)) except: pass # Now the alpha versions: ents = set(html_entity_alpha_re.findall(s)) ents = filter((lambda x : x != amp), ents) for ent in ents: entname = ent[1:-1] try: s = s.replace(ent, unichr(htmlentitydefs.name2codepoint[entname])) except: pass s = s.replace(amp, " and ") return s from math import log tok = Tokenizer(preserve_case=False) def get_rel_popularity(c_k, c_all): return log(1.0 * c_k / c_all) / log(2) def print_tokens(tokens, gid = None): group_name = "overall" if gid is not None: group_name = "group %d" % gid print '=' * 5 + ' ' + group_name + ' ' + '=' * 5 for t, n in tokens: print "%s\t%.4f" % (t, n) print Explanation: Part 3: Tokens that are relatively popular in each user partition In this step, we are going to find tokens that are relatively popular in each user partition. We define the number of mentions of a token $t$ in a specific user partition $k$ as the number of users from the user partition $k$ that ever mentioned the token $t$ in their tweets. Note that even if some users might mention a token $t$ multiple times or in multiple tweets, a user will contribute at most 1 to the counter of the token $t$. Please make sure that the number of mentions of a token is equal to the number of users who mentioned this token but NOT the number of tweets that mentioned this token. Let $N_t^k$ be the number of mentions of the token $t$ in the user partition $k$. Let $N_t^{all} = \sum_{i=0}^7 N_t^{i}$ be the number of total mentions of the token $t$. We define the relative popularity of a token $t$ in a user partition $k$ as the log ratio between $N_t^k$ and $N_t^{all}$, i.e. \begin{equation} p_t^k = \log \frac{N_t^k}{N_t^{all}}. \end{equation} You can compute the relative popularity by calling the function get_rel_popularity. (0) Load the tweet tokenizer. End of explanation # unique_tokens = tweets.flatMap(lambda tweet: tok.tokenize(tweet[1])).distinct() splitter = lambda x: [(x[0],t) for t in x[1]] unique_tokens = tweets.map(lambda tweet: (tweet[0], tok.tokenize(tweet[1])))\ .flatMap(lambda t: splitter(t))\ .distinct() ut1 = unique_tokens.map(lambda x: ((partition_bc.value.get(x[0],7), x[1]), 1)).cache() utr = ut1.reduceByKey(lambda x,y: x+y).cache() group_tokens = utr.map(lambda (x,y):(x[1],y)).reduceByKey(lambda x,y:x+y) ##format: (token, k_all) print_count(group_tokens) Explanation: (1) Tokenize the tweets using the tokenizer we provided above named tok. Count the number of mentions for each tokens regardless of specific user group. Call print_count function to show how many different tokens we have. 
It should print Number of elements: 8979 End of explanation # splitter = lambda x: [(x[0],t) for t in x[1]] # tokens = tweets.map(lambda tweet: (tweet[0], tok.tokenize(tweet[1])))\ # .flatMap(lambda t: splitter(t))\ # .distinct() popular_tokens = group_tokens.filter(lambda x: x[1]>100).cache() # .sortBy(lambda x: x[1], ascending=False).cache() print_count(popular_tokens) print_tokens(popular_tokens.top(20, lambda x:x[1])) Explanation: (2) Tokens that are mentioned by too few users are usually not very interesting. So we want to only keep tokens that are mentioned by at least 100 users. Please filter out tokens that don't meet this requirement. Call print_count function to show how many different tokens we have after the filtering. Call print_tokens function to show top 20 most frequent tokens. It should print Number of elements: 52 ===== overall ===== : 1386.0000 rt 1237.0000 . 865.0000 \ 745.0000 the 621.0000 trump 595.0000 x80 545.0000 xe2 543.0000 to 499.0000 , 489.0000 xa6 457.0000 a 403.0000 is 376.0000 in 296.0000 ' 294.0000 of 292.0000 and 287.0000 for 280.0000 ! 269.0000 ? 210.0000 End of explanation # i want to join the partion on the top100 tweets!, so ineed to get it in the form (uid, tweet) twg = sc.parallelize(partitions.items()).rightOuterJoin(tweets)\ .map(lambda (uid,(gid,tweet)): (uid,(7,tweet)) if gid<0 or gid>6 else (uid,(gid,tweet))).cache() def group_score(gid): group_counts = utr.filter(lambda (x,y): x[0]==gid).map(lambda (x,y): (x[1], y)) merged = group_counts.join(popular_tokens) group_scores = merged.map(lambda (token,(V,W)): (token, get_rel_popularity(V,W))) return group_scores for _gid in range(0,8): _rdd = group_score(_gid) print_tokens(_rdd.top(10, lambda a:a[1]), gid=_gid) Explanation: (3) For all tokens that are mentioned by at least 100 users, compute their relative popularity in each user group. Then print the top 10 tokens with highest relative popularity in each user group. In case two tokens have same relative popularity, break the tie by printing the alphabetically smaller one. Hint: Let the relative popularity of a token $t$ be $p$. The order of the items will be satisfied by sorting them using (-p, t) as the key. End of explanation
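For readers who want to sanity-check the relative-popularity definition $p_t^k=\log_2(N_t^k/N_t^{all})$ without a Spark cluster, here is a tiny plain-Python illustration; the per-group mention counts below are made-up numbers chosen only to show the arithmetic, not values from the assignment data.
from math import log
def rel_popularity(n_k, n_all):
    # p_t^k = log2(N_t^k / N_t^all); only defined when n_k > 0
    return log(1.0 * n_k / n_all) / log(2)
counts = [5, 20, 3, 40, 2, 10, 15, 5]   # hypothetical N_t^k for groups 0..7
n_all = sum(counts)                     # N_t^all
for gid, n_k in enumerate(counts):
    print("group %d: %.4f" % (gid, rel_popularity(n_k, n_all)))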
9,236
Given the following text description, write Python code to implement the functionality described below step by step Description: Logistics We are going to use parallel-tempering, implemented via the python emcee package, to explore our posterior, which consists of the set of distances and gas to dust conversion coefficients to the six velocity slices towards the center of the Cepheus molecular cloud. Since we need to explore a 12 dimensional parameter space, we are going to use 50 walkers, 10000 steps each, at 5 different temperatures. If you would like to edit this parameters, simply edit "nwalkers", "ntemps", and "nsteps" in the cell below. However, we are only going to keep the lowest temperature chain ($\beta=1$) for analysis. Since the sampler.chain object from PTSampler returns an array of shape (Ntemps, Nwalkers, Nsteps, Ndim), returning the samples for all walkers, steps, and dimensions at $\beta=1$ would correspond to sampler.chain[0, Step1: Let's see what our chains look like by producing trace plots Step2: Now we are going to use the seaborn distplot function to plot histograms of the last half of the traces for each parameter. Step3: Now we want to see how similar the parameters at different steps are. To do this, we draw one thousand random samples from the last half of the chain and plot the reddening profile corresponding to those parameters in light blue. Then, we plot the "best fit" reddening profile corresponding to the 50th quantile parameters (essentially the median of the last half of the chains). In all cases, we take the average of the CO for all nside 128 pixels at each slice. As you can see, drawing random samples from the last half of the chain produce reddening profiles essentially identical to the best fit values.
Python Code: import emcee from dustcurve import model import seaborn as sns import numpy as np from dustcurve import pixclass import matplotlib.pyplot as plt import pandas as pd import warnings from dustcurve import io from dustcurve import hputils from dustcurve import kdist import h5py from dustcurve import globalvars as gv %matplotlib inline f=h5py.File('/n/fink1/czucker/Output/2degrees_jul1_0.1sig_cropped.h5') samples=f['/chains'] nsteps=samples.shape[2] ndim=samples.shape[3] #Extract the coldest [beta=1] temperature chain from the sampler object; discard first half of samples as burnin samples_cold = samples[0,:,int(.5*nsteps):,:] traces_cold = samples_cold.reshape(-1, ndim).T #find best fit values for each of the 24 parameters (12 d's and 12 c's) theta=pd.DataFrame(traces_cold) quantile_50=theta.quantile(.50, axis=1).values quantile_84=theta.quantile(.84, axis=1).values quantile_16=theta.quantile(.16, axis=1).values upperlim=quantile_84-quantile_50 lowerlim=quantile_50-quantile_16 #print out distances for i in range(0,int(len(quantile_50)/2)): print('d%i: %.3f + %.3f - %.3f' % (i+1,quantile_50[i],upperlim[i], lowerlim[i])) #print out coefficients for i in range(int(len(quantile_50)/2), int(len(quantile_50))): print('c%i: %.3f + %.3f - %.3f' % (i+1-int(len(quantile_50)/2),quantile_50[i],upperlim[i], lowerlim[i])) Explanation: Logistics We are going to use parallel-tempering, implemented via the python emcee package, to explore our posterior, which consists of the set of distances and gas to dust conversion coefficients to the six velocity slices towards the center of the Cepheus molecular cloud. Since we need to explore a 12 dimensional parameter space, we are going to use 50 walkers, 10000 steps each, at 5 different temperatures. If you would like to edit this parameters, simply edit "nwalkers", "ntemps", and "nsteps" in the cell below. However, we are only going to keep the lowest temperature chain ($\beta=1$) for analysis. Since the sampler.chain object from PTSampler returns an array of shape (Ntemps, Nwalkers, Nsteps, Ndim), returning the samples for all walkers, steps, and dimensions at $\beta=1$ would correspond to sampler.chain[0,:,:,:]. To decrease your value of $\beta$ simply increase the index for the first dimension. For more information on how PTSampler works, see http://dan.iel.fm/emcee/current/user/pt/. We will set off our walkers in a Gaussian ball around a) the kinematic distance estimates for the Cepheus molecular cloud given by a flat rotation curve from Leroy & Rosolowsky 2006 and b) the gas-to-dust coefficient given by the literature. We perturb the walkers in a Gaussian ball with mean 0 and variance 1. You can edit the starting positions of the walkers by editing the "result" variable below. We are going to discard the first half of every walker's chain as burn-in. Setting up the positional arguments for PTSampler We need to feed PTSampler the required positional arguments for the log_likelihood and log_prior function. We do this using the fetch_args function from the io module, which creates an instance of the pixclass object that holds our data and metadata. Fetch_args accepts three arguments: A string specifiying the h5 filenames containing your data, in our case 10 healpix nside 128 pixels centered around (l,b)=(109.75, 13.75), which covers a total area of 2 sq. deg. The prior bounds you want to impose on distances (flat prior) and the standard deviation you'd like for the log-normal prior on the conversion coefficients. 
For distances, this must be between 4 and 19, because that's the distance modulus range of our stellar posterior array. The prior bounds must be in the format [lowerbound_distance, upperbound_distance, sigma] The gas-to-dust coefficient you'd like to use, given as a float; for this tutorial, we are pulling a value from the literature of 0.06 magnitudes/K. This value is then multiplied by the set of c coefficients we're determining as part of the parameter estimation problem. Fetch_args will then return the correct arguments for the log_likelihood and log_prior functions within the model module. Here we go! The sampler is done running, so now let's check out the results. We are going to print out our mean acceptance fraction across all walkers for the coldest temperature chain. We are also going to discard the first half of each walker's chain as burn-in; to change the number of steps to burn off, simply edit the 3rd dimension of sampler.chain[0,:,n:,:] and input your desired value of n. Next, we are going to compute and print out the 50th, 16th, and 84th percentile of the chains for each distance parameter, using the "quantile" attribute of a pandas dataframe object. The 50th percentile measurement represents are best guess for the each distance parameter, while the difference between the 16th and 50th gives us a lower limit and the difference between the 50th and 84th percentile gives us an upper limit: End of explanation #set up subplots for chain plotting axes=['ax'+str(i) for i in range(ndim)] fig, (axes) = plt.subplots(ndim, figsize=(10,60)) plt.tight_layout() for i in range(0,ndim): if i<int(ndim/2): axes[i].set(ylabel='d%i' % (i+1)) else: axes[i].set(ylabel='c%i' % (i-5)) #plot traces for each parameter for i in range(0,ndim): sns.tsplot(traces_cold[i],ax=axes[i]) Explanation: Let's see what our chains look like by producing trace plots: End of explanation #set up subplots for histogram plotting axes=['ax'+str(i) for i in range(ndim)] fig, (axes) = plt.subplots(ndim, figsize=(10,60)) plt.tight_layout() for i in range(0,ndim): if i<int(ndim/2): axes[i].set(ylabel='d%i' % (i+1)) else: axes[i].set(ylabel='c%i' % (i-5)) #plot traces for each parameter for i in range(0,ndim): sns.distplot(traces_cold[i],ax=axes[i],hist=True,norm_hist=False) Explanation: Now we are going to use the seaborn distplot function to plot histograms of the last half of the traces for each parameter. End of explanation from dustcurve import plot_posterior ratio=0.06 plot_posterior.plot_samples(np.asarray(post_all),np.linspace(4,19,120),np.linspace(0,7,700),quantile_50,traces_cold,ratio,gv.unique_co,y_range=[0,2],vmax=20,normcol=False) Explanation: Now we want to see how similar the parameters at different steps are. To do this, we draw one thousand random samples from the last half of the chain and plot the reddening profile corresponding to those parameters in light blue. Then, we plot the "best fit" reddening profile corresponding to the 50th quantile parameters (essentially the median of the last half of the chains). In all cases, we take the average of the CO for all nside 128 pixels at each slice. As you can see, drawing random samples from the last half of the chain produce reddening profiles essentially identical to the best fit values. End of explanation
9,237
Given the following text description, write Python code to implement the functionality described below step by step Description: Thermal Sensor Measurements The goal of this experiment is to measure temperature on Juno R2 board using the available sensors. In order to do that we will run a busy-loop workload of about 5 minutes and collect traces for the thermal_temperature event. Measurements must be done with and without fan. Step1: Target Configuration Our target is a Juno R2 development board running Linux. Step2: Tests execution Step3: Workloads configuration Step4: Workload execution Step5: Trace Analysis In order to analyze the trace we will plot it using TRAPpy. Step6: The pmic sensor if off-chip and therefore it is not useful to get its temperature.
Python Code: import logging from conf import LisaLogging LisaLogging.setup() %pylab inline import os # Support to access the remote target import devlib from env import TestEnv # Support to configure and run RTApp based workloads from wlgen import RTA, Periodic # Support for trace events analysis from trace import Trace # Suport for FTrace events parsing and visualization import trappy Explanation: Thermal Sensor Measurements The goal of this experiment is to measure temperature on Juno R2 board using the available sensors. In order to do that we will run a busy-loop workload of about 5 minutes and collect traces for the thermal_temperature event. Measurements must be done with and without fan. End of explanation # Setup a target configuration my_target_conf = { # Target platform and board "platform" : 'linux', "board" : 'juno', # Target board IP/MAC address "host" : '192.168.0.1', # Login credentials "username" : 'root', "password" : '', # RTApp calibration values (comment to let LISA do a calibration run) "rtapp-calib" : { "0": 318, "1": 125, "2": 124, "3": 318, "4": 318, "5": 319 }, # Tools required by the experiments "tools" : [ 'rt-app', 'trace-cmd' ], "exclude_modules" : ['hwmon'], # FTrace events to collect for all the tests configuration which have # the "ftrace" flag enabled "ftrace" : { "events" : [ "thermal_temperature", # Use sched_switch event to recognize tasks on kernelshark "sched_switch", # cdev_update has been used to show that "step_wise" thermal governor introduces noise # because it keeps changing the state of the cooling devices and therefore # the available OPPs #"cdev_update", ], "buffsize" : 80 * 1024, }, } Explanation: Target Configuration Our target is a Juno R2 development board running Linux. End of explanation # Initialize a test environment using: # the provided target configuration (my_target_conf) te = TestEnv(target_conf=my_target_conf) target = te.target Explanation: Tests execution End of explanation # Create a new RTApp workload generator using the calibration values # reported by the TestEnv module rtapp_big = RTA(target, 'big', calibration=te.calibration()) big_tasks = dict() for cpu in target.bl.bigs: big_tasks['busy_big'+str(cpu)] = Periodic(duty_cycle_pct=100, duration_s=360, # 6 minutes cpus=str(cpu) # pinned to a given cpu ).get() # Configure this RTApp instance to: rtapp_big.conf( # 1. generate a "profile based" set of tasks kind='profile', # 2. define the "profile" of each task params=big_tasks, # 3. Set load reference for task calibration loadref='big', # 4. 
use this folder for task logfiles run_dir=target.working_directory ); rtapp_little = RTA(target, 'little', calibration=te.calibration()) little_tasks = dict() for cpu in target.bl.littles: little_tasks['busy_little'+str(cpu)] = Periodic(duty_cycle_pct=100, duration_s=360, cpus=str(cpu)).get() rtapp_little.conf( kind='profile', params=little_tasks, # Allow the task duration to be calibrated for the littles (default is for big) loadref='little', run_dir=target.working_directory ); Explanation: Workloads configuration End of explanation logging.info('#### Setup FTrace') te.ftrace.start() logging.info('#### Start RTApp execution') # Run tasks on the bigs in background to allow execution of following instruction rtapp_big.run(out_dir=te.res_dir, background=True) # Run tasks on the littles and then wait 2 minutes for device to cool down rtapp_little.run(out_dir=te.res_dir, end_pause_s=120.0) logging.info('#### Stop FTrace') te.ftrace.stop() Explanation: Workload execution End of explanation # Collect the trace trace_file = os.path.join(te.res_dir, 'trace.dat') logging.info('#### Save FTrace: %s', trace_file) te.ftrace.get_trace(trace_file) # Parse trace therm_trace = trappy.FTrace(trace_file) therm_trace.thermal.data_frame.tail(10) # Plot the data therm_plot = trappy.ILinePlot(therm_trace, signals=['thermal:temp'], filters={'thermal_zone': ["soc"]}, title='Juno R2 SoC Temperature w/o fans') therm_plot.view() Explanation: Trace Analysis In order to analyze the trace we will plot it using TRAPpy. End of explanation # Extract a data frame for each zone df = therm_trace.thermal.data_frame soc_df = df[df.thermal_zone == "soc"] big_df = df[df.thermal_zone == "big_cluster"] little_df = df[df.thermal_zone == "little_cluster"] gpu0_df = df[df.thermal_zone == "gpu0"] gpu1_df = df[df.thermal_zone == "gpu1"] # Build new trace juno_trace = trappy.BareTrace(name = "Juno_R2") juno_trace.add_parsed_event("SoC", soc_df) juno_trace.add_parsed_event("big_Cluster", big_df) juno_trace.add_parsed_event("LITTLE_Cluster", little_df) juno_trace.add_parsed_event("gpu0", gpu0_df) juno_trace.add_parsed_event("gpu1", gpu1_df) # Plot the data for all sensors juno_signals = ['SoC:temp', 'big_Cluster:temp', 'LITTLE_Cluster:temp', 'gpu0:temp', 'gpu1:temp'] therm_plot = trappy.ILinePlot([juno_trace], signals=juno_signals, title='Juno R2 Temperature all traces') therm_plot.view() Explanation: The pmic sensor if off-chip and therefore it is not useful to get its temperature. End of explanation
9,238
Given the following text description, write Python code to implement the functionality described below step by step Description: Calculating Wang's Semantic Similarity between two GO Terms Setup Calculate Wang's semantic similarity using optional part_of relationship Calculate Wang's semantic similarity using researcher-set edge weights. Calculate Wang's semantic similarity usint is_of relationship only 1) Setup Create a list of all GO IDs that will be compared Step1: Read the GO DAG Step3: Create a printing function for this notebook Step4: 2) Calculate Wang's semantic similarity using part_of relationship Instantiate a Wang's Semantic Similarity object We choose to use the optional relationship, part_of, in addition the required is_a relationships for this example. Load all GO IDs that you will be comparing. Step5: Visualize researcher GO terms Step6: Step7: 3) Calculate Wang's semantic similarity using researcher-set edge weights Print the default edge weights in the Wang configuration Step8: Use researcher-specified edge weights Step9: 4) Calculate Wang's semantic similarity with is_a relationships only Step10: Calculate Wang's semantic similarity using is_a relationship only Step11: Instantiate a Wang's Semantic Similarity object We use the required is_a relationships only. Load all GO IDs that you will be comparing.
Python Code: # Researcher-provided GO terms related to smell go_a = 'GO:0007608' go_b = 'GO:0050911' go_c = 'GO:0042221' # Optional relationships. (Relationship, is_a, is required and always used) relationships = {'part_of'} goids = {go_a, go_b, go_c} # Annotations for plotting go2txt = { go_a:'GO TERM A', go_b:'GO TERM B', go_c:'GO TERM C'} Explanation: Calculating Wang's Semantic Similarity between two GO Terms Setup Calculate Wang's semantic similarity using optional part_of relationship Calculate Wang's semantic similarity using researcher-set edge weights. Calculate Wang's semantic similarity usint is_of relationship only 1) Setup Create a list of all GO IDs that will be compared End of explanation from goatools.base import get_godag godag = get_godag("go-basic.obo", optional_attrs={'relationship'}) Explanation: Read the GO DAG End of explanation def print_details(go_a, go_b, val): Print concise and informative report: GO terms and their semantic similarity pattern = ('go_a: {GOa} {GOa_name}\n' 'go_b: {GOb} {GOb_name}\n' 'wang: {VAL:.8f}\n') print(pattern.format( GOa=go_a, GOa_name=godag[go_a].name, GOb=go_b, GOb_name=godag[go_b].name, VAL=val)) Explanation: Create a printing function for this notebook End of explanation from goatools.semsim.termwise.wang import SsWang # goids: researcher-provided GO terms wang_r1 = SsWang(goids, godag, relationships) Explanation: 2) Calculate Wang's semantic similarity using part_of relationship Instantiate a Wang's Semantic Similarity object We choose to use the optional relationship, part_of, in addition the required is_a relationships for this example. Load all GO IDs that you will be comparing. End of explanation from goatools.gosubdag.gosubdag import GoSubDag from goatools.gosubdag.plot.gosubdag_plot import GoSubDagPlot r1_png = 'smell_r1.png' r1_gosubdag = GoSubDag(goids, godag, relationships) GoSubDagPlot(r1_gosubdag, go2txt=go2txt).plt_dag(r1_png) Explanation: Visualize researcher GO terms End of explanation val = wang_r1.get_sim(go_a, go_b) print_details(go_a, go_b, val) val = wang_r1.get_sim(go_a, go_c) print_details(go_a, go_c, val) val = wang_r1.get_sim(go_b, go_c) print_details(go_b, go_c, val) Explanation: End of explanation wang_r1.prt_cfg() Explanation: 3) Calculate Wang's semantic similarity using researcher-set edge weights Print the default edge weights in the Wang configuration End of explanation relationship2weight = { 'is_a': 0.9, 'part_of': 0.9 } wang_r1 = SsWang(goids, godag, relationships, relationship2weight) val = wang_r1.get_sim(go_a, go_b) print_details(go_a, go_b, val) val = wang_r1.get_sim(go_a, go_c) print_details(go_a, go_c, val) val = wang_r1.get_sim(go_b, go_c) print_details(go_b, go_c, val) Explanation: Use researcher-specified edge weights End of explanation wang_r0 = SsWang(goids, godag) Explanation: 4) Calculate Wang's semantic similarity with is_a relationships only End of explanation r0_png = 'smell_r0.png' r0_gosubdag = GoSubDag(goids, godag) GoSubDagPlot(r0_gosubdag, go2txt=go2txt).plt_dag(r0_png) Explanation: Calculate Wang's semantic similarity using is_a relationship only End of explanation val = wang_r0.get_sim(go_a, go_b) print_details(go_a, go_b, val) val = wang_r0.get_sim(go_a, go_c) print_details(go_a, go_c, val) val = wang_r0.get_sim(go_b, go_c) print_details(go_b, go_c, val) Explanation: Instantiate a Wang's Semantic Similarity object We use the required is_a relationships only. Load all GO IDs that you will be comparing. End of explanation
9,239
Given the following text description, write Python code to implement the functionality described below step by step Description: scipy stats This notebook focuses on the use of the scipy.stats module It is built based on a learn-by-example approach So it only covers a little part of the module's functionalities but provides a practical application. Some knowledge of numpy and matplotlib is needed to fully understand the content. Introduction The scipy.stats module provides mainly Step1: Probability distributions The scipy.stats module provides a very complete set of probability distributions. There are three types of distributions Step2: Discrete Distributions Discrete distributions have quite the same API. Having pmf= Probability Mass Function (instead of pdf) Step3: Example Step4: Create a Normal distribution Let's assume that the stock prices follow a Normal distribution Step5: Step6: Exercise Step7: Compute the expected profit and top 5% risk Step8: Both the expected profit and the risk assessed are too high!! Try adding Intel to the product in order to lower them down
Python Code: %matplotlib inline import numpy as np from scipy import stats import matplotlib.pyplot as plt import pandas as pd Explanation: scipy stats This notebook focuses on the use of the scipy.stats module It is built based on a learn-by-example approach So it only covers a little part of the module's functionalities but provides a practical application. Some knowledge of numpy and matplotlib is needed to fully understand the content. Introduction The scipy.stats module provides mainly: * probability distributions: continuous, discrete and multivariate * statistical functions such as statistics and tests For further details you can check the official documentation Imports End of explanation N_SAMPLES = 1000 pds = [('Normal', stats.norm(), (-4., 4.)), ('LogNormal', stats.lognorm(1.), (0., 4.)), ('Students T', stats.t(3.), (-10., 10.)), ('Chi Squared', stats.chi2(1.), (0., 10.))] n_pds = len(pds) fig, ax_list = plt.subplots(n_pds, 3) fig.set_size_inches((5.*n_pds, 10.)) for ind, elem in enumerate(pds): pd_name, pd_func, pd_range = elem x_range = np.linspace(*pd_range, 101) # Probability Density Function ax_list[ind, 0].plot(x_range, pd_func.pdf(x_range)) ax_list[ind, 0].set_ylabel(pd_name) # Cumulative Distribution Function ax_list[ind, 1].plot(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].fill_between(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].set_ylim([0., 1.]) # Random Variable Sample ax_list[ind, 2].hist(pd_func.rvs(size=N_SAMPLES), bins=50) if ind == 0: _ = ax_list[ind, 0].set_title('Probability Density Function') _ = ax_list[ind, 1].set_title('Cumulative Distribution Function') _ = ax_list[ind, 2].set_title('Random Sample') Explanation: Probability distributions The scipy.stats module provides a very complete set of probability distributions. There are three types of distributions: * Continuous * Discrete * Multivariate Each of the univariate types is inherited from the same class, so they all have a common API. Continuos distributions There are ~100 different continuous distributions. Some of the methods in the API: * cdf: Cumulative Distribution Function * pdf: Probability Density Function * rvs: Random Variable Sample * ppf: Percent Point Function (inverse of the CDF) * fit: return MLE estimations of location, scale and shape, given a set of data End of explanation N_SAMPLES = 1000 pds = [('Binomial', stats.binom(20, 0.7), (0., 21.)), ('Poisson', stats.poisson(10.), (0., 21.))] n_pds = len(pds) fig, ax_list = plt.subplots(n_pds, 3) fig.set_size_inches((8.*n_pds, 8.)) for ind, elem in enumerate(pds): pd_name, pd_func, pd_range = elem x_range = np.arange(*pd_range) # Probability Mass Function ax_list[ind, 0].bar(x_range, pd_func.pmf(x_range)) ax_list[ind, 0].set_ylabel(pd_name) # Cumulative Distribution Function ax_list[ind, 1].plot(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].fill_between(x_range, pd_func.cdf(x_range)) ax_list[ind, 1].set_ylim([0., 1.]) # Random Variable Sample ax_list[ind, 2].hist(pd_func.rvs(size=N_SAMPLES), bins=x_range - 0.5) if ind == 0: _ = ax_list[ind, 0].set_title('Probability Mass Function') _ = ax_list[ind, 1].set_title('Cumulative Distribution Function') _ = ax_list[ind, 2].set_title('Random Sample') Explanation: Discrete Distributions Discrete distributions have quite the same API. 
Having pmf= Probability Mass Function (instead of pdf) End of explanation df_prices = pd.read_csv('../resources/stock.csv') df_prices.head(10) df_prices.plot(no) _ = df_prices[['Apple', 'Microsoft']].plot(title='2016 stock prices') # Compute the daily relative increments df_incs = df_prices.drop('Date', axis=1) df_incs = ((df_incs - df_incs.shift(1))/df_incs.shift(1)).loc[1:, :] df_incs['Date'] = df_prices.Date df_incs.head(10) _ = df_incs[['Apple', 'Microsoft']].plot(title='2016 stock prices variations') m = np.mean(df_incs) print(m) s = np.std(df_incs, ddof=1) print(s) c = df_incs.cov() c Explanation: Example: creating a financial product Load and manipulate the data End of explanation # we can use the fit method to get the MLE of the mean and the std stats.norm.fit(df_incs.Apple) # Create estimated distributions based on the sample app_dist = stats.norm(m['Apple'], s['Apple']) win_dist = stats.norm(m['Microsoft'], s['Microsoft']) intl_dist = stats.norm(m['Intel'], s['Intel']) # We can test if this data fits a normal distribution (Kolmogorov-Smirnov test) app_KS = stats.kstest(df_incs['Apple'], 'norm', [m['Apple'], s['Apple']]) win_KS = stats.kstest(df_incs['Microsoft'], 'norm', [m['Microsoft'], s['Microsoft']]) intl_KS = stats.kstest(df_incs['Intel'], 'norm', [m['Intel'], s['Intel']]) print('''Apple: {} Microsoft: {} Intel: {}'''.format(app_KS, win_KS, intl_KS)) Explanation: Create a Normal distribution Let's assume that the stock prices follow a Normal distribution End of explanation # Compare histogram with estimated distribution x_range = np.arange(-0.05, +0.0501, 0.001) x_axis = (x_range[1:] + x_range[:-1])/2. n_incs = df_incs.shape[0] y_app = (app_dist.cdf(x_range[1:]) - app_dist.cdf(x_range[:-1]))*n_incs y_win = (win_dist.cdf(x_range[1:]) - win_dist.cdf(x_range[:-1]))*n_incs y_intl = (intl_dist.cdf(x_range[1:]) - intl_dist.cdf(x_range[:-1]))*n_incs fig = plt.figure(figsize=(16., 6.)) ax_app = fig.add_subplot(131) _ = ax_app.hist(df_incs['Apple'], bins=x_range, color='powderblue') _ = ax_app.set_xlabel('Apple') _ = ax_app.plot(x_axis, y_app, color='blue', linewidth=3) ax_win = fig.add_subplot(132) _ = ax_win.hist(df_incs['Microsoft'], bins=x_range, color='navajowhite') _ = ax_win.set_xlabel('Microsoft') _ = ax_win.plot(x_axis, y_win, color='orange', linewidth=3) ax_intl = fig.add_subplot(133) _ = ax_intl.hist(df_incs['Intel'], bins=x_range, color='lightgreen') _ = ax_intl.set_xlabel('Intel') _ = ax_intl.plot(x_axis, y_win, color='green', linewidth=3) Explanation: End of explanation # Create a multivariate normal distribution object m_norm = stats.multivariate_normal(m[['Apple', 'Microsoft']], df_incs[['Apple', 'Microsoft']].cov()) # Show the contour plot of the pdf x_range = np.arange(-0.05, +0.0501, 0.001) x, y = np.meshgrid(x_range, x_range) pos = np.dstack((x, y)) fig_m_norm = plt.figure(figsize=(6., 6.)) ax_m_norm = fig_m_norm.add_subplot(111) ax_m_norm.contourf(x, y, m_norm.pdf(pos), 50) _ = ax_m_norm.set_xlabel('Apple') _ = ax_m_norm.set_ylabel('Microsoft') Explanation: Exercise: Imagine you are a product designer in a finantial company. You want to create a new investment product to be "sold" to your clients based on the future stock prices of some IT companies. The profit the client gets from his investement is calculated like this: * At the time of the investment we check the initial stock prices * 12 months later (let's say 240 work days), the client gets 100% of the investement back. 
Additionally if all stock prices are higher than the initial ones, the client earns half the lowest increment (in %). What is the expected profit of this investment? What is the 5% highest risk that the finantial company is assuming? First we will try to create a finantial product based on the stock prices of Apple and Microsoft Create a multinormal distribution End of explanation # Create N (e.g 1000) random simulations of the daily relative increments with 240 samples N_SIMS = 1000 daily_incs = m_norm.rvs(size=[240, N_SIMS]) # Calculate yearly increments (from the composition of the daily increments) year_incs = (daily_incs + 1.).prod(axis=0) # calculate the amount payed for each simulation def amount_to_pay(a): if np.all( a >= 1.): return (a.min() - 1)/2 else: return 0. earnings = np.apply_along_axis(amount_to_pay, 1, year_incs) _ = plt.hist(earnings, bins=50) print('Expected profit of the investment: {:.2%}'.format(earnings.mean())) # To compute the 5% higher profit use the stats.scoreatpercentile function print('%5 higher profit of the investment: {:.2%}'.format(stats.scoreatpercentile(earnings, 95))) print('%1 higher profit of the investment: {:.2%}'.format(stats.scoreatpercentile(earnings, 99))) Explanation: Compute the expected profit and top 5% risk End of explanation # %load -r 2:10 solutions/07_02_scipy_stats.py Explanation: Both the expected profit and the risk assessed are too high!! Try adding Intel to the product in order to lower them down End of explanation
9,240
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --> Other --> Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --> Other --> Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: INPE Source ID: SANDBOX-2 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:07 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
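For readers unfamiliar with the ES-DOC workflow, here is a minimal, purely illustrative sketch of what a completed cell looks like once the TODOs above are filled in. The DOC methods and property ids are taken from the cells above; the author name, email, and answer strings are placeholders, not INPE's actual responses.

```python
# Illustrative sketch only -- the values below are placeholders, not real answers.
# Authors are registered once per document (hypothetical name/email).
DOC.set_author("Jane Doe", "[email protected]")

# A STRING property: point DOC at the property id, then supply free text.
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
DOC.set_value("No flux corrections are applied in this model.")  # placeholder answer

# An ENUM property: the value must be one of the valid choices listed in that cell.
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
DOC.set_value("Atmosphere grid")  # placeholder choice

# Finally, flag the document for publication (0 = do not publish, 1 = publish).
DOC.set_publication_status(1)
```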
9,241
Given the following text description, write Python code to implement the functionality described below step by step Description: Implicit functions in pytorch Thomas Viehmann, [email protected] Sometimes, we do not know the mapping of functions we wish to apply, but only an equation that describes the mapping. In mathematical terms, we wish to apply $f Step1: Now we can turn to the implicit function. In evaluating $f(x)=y$, we need to actually search for the solution $y$ to $F(x,y)=0$. We do this by searching for a minimum of $(F(x,y))^2$, but we also need to provide a starting point $y_0$ near which to look. (And indeed, in the circle example below, we will have two solutions and it would typically be a problem when these are close.) We use the pytorch LBFGS optimizer for a limited number of steps. Note that LBFGS also has other stopping criteria, so this really is a bound, in fact, achieving the other critera can be considered success, hitting the maximal number of iterations might be considered failure to find the minimum. The scipy optimization documentation has more details. In order for LBFGS to work, you need to provide a function that re-evaluates $F^2$ and the gradient as the closure argument to the step call. ~~As we need to call backward on F for the gradient computation and I ran into locking problems when doing so in Implicit's backward method, we compute the required $\frac{dF}{dx}$ and $\frac{dF}{dy}$ in the forward. In fact we precompute $-\frac{df}{dx}$ and save it to the context ctx. In the backward, we wrap the saved result in a variable and multiply by the output_grad to do our step in the backpropagation.~~ PyTorch is ever improving - for the 0.4 update, I moved the derivative calculation in the backward. We compute the required $\frac{dF}{dx}$ and $\frac{dF}{dy}$ by calling backward in the backward. I don't know if all detach are needed or if some of the tensors are pre-detached, but I'm going to play it safe here. Step2: So now that we have a cool new autograd function, let us apply it to an example (and I must admit, I just took it from the Wikipedia page, the application I have in mind needs more context). The unit circle in the plane can be described by the equation $x^2+y^2 = 1$ or, equivalently $F(x,y) Step3: Everybody loves pictures, let us plot $F$ (the blue grid). In black is the circle $F(x,y)=0$. Step4: Now we can pick a point $x$, say $1/2$ and seek the matching $y=f(x)$ on the circle, starting from $\frac{1}{2}$. We know that actually $y = f(x) = \sqrt{1-x^2} = \frac{\sqrt{3}}{2}$. Of course, we would run into trouble for $x$ close to $1$ or $-1$. Step5: Works! Let us compute the derivative. We can do that by hand, using the implicit function theorem, we have $\frac{d}{dx} F(x,y) = 2x$ and $\frac{d}{dy} F(x,y) = 2y$, so $\frac{df}{dx}(x) = - x/y$. That is about the technical complication I can handle. If we didn't like using the implicit function theorem, we would have to do this by $\frac{d}{dx} f(x) = \frac{d}{dx} f(x) = \frac{1}{2} \frac{-2x}{\sqrt{1-x^2}}$ and plugging back in $x$ and $y$ we see that $\frac{d}{dx} f(x) = -\frac{x}{y}$. But of course, we can also let the autograd do its thing Step6: Awesome. In fact, we can also use pytorch's automated checker (and indeed this is why I used DoubleTensors, as the gradient checker can be too strict to use single precision floats). It seems that gradcheck does not like the 1d $x$.
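Because the description above states the derivative formula only in passing (Step5), here is the implicit-differentiation computation for the circle example written out as a short recap; it restates formulas already derived in the text and adds only the numeric evaluation at the point used there, $(x, y) = (1/2, \sqrt{3}/2)$.

```latex
% Recap: implicit differentiation for F(x, y) = x^2 + y^2 - 1
\frac{\partial F}{\partial x}(x,y) = 2x, \qquad
\frac{\partial F}{\partial y}(x,y) = 2y, \qquad
\frac{df}{dx}(x) = -\Bigl(\frac{\partial F}{\partial y}\Bigr)^{-1}\frac{\partial F}{\partial x} = -\frac{x}{y}.
% At x = 1/2, y = sqrt(3)/2 this gives df/dx = -1/sqrt(3), approximately -0.577,
% which is the value the autograd computation in the code reproduces.
```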
Python Code: import torch import numpy from matplotlib import pyplot from mpl_toolkits.mplot3d import Axes3D %matplotlib inline Explanation: Implicit functions in pytorch Thomas Viehmann, [email protected] Sometimes, we do not know the mapping of functions we wish to apply, but only an equation that describes the mapping. In mathematical terms, we wish to apply $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ of which we only know that $F(x,f(x))=0$ for some function $F : \mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}$. This is the realm of the implicit function theorem. Under reasonable conditions (smoothness, the right sort of nondegenerate derivatives), you have the following: Given $x,y$ such that $F(x,y) = 0$ there is a neighborhood $U\ni x$ and a function $f$ such that $f(x)=y$ and $F(x,f(x))=0$. And if $F$ is nice enough, we can also compute the derivative of $f$ at $x$, namely $\frac{df}{dx}(x) = - (\frac{dF}{dy}(x,y))^{-1} \frac{dF}{dx}(x,y)$. There is an example below. Nice. As computing the entire matrix Jacobian with respect to $y$ is not that straightforward in pytorch, though, we will stick with just the scalar case. Let us not be lazy about it and get our hands dirty. First import everything. End of explanation class Implicit(torch.autograd.Function): @staticmethod def forward(ctx, x, y0, F, max_iter=100): with torch.enable_grad(): y = y0.clone().detach().requires_grad_() xv = x.detach() opt = torch.optim.LBFGS([y], max_iter=max_iter) def reevaluate(): opt.zero_grad() z = F(xv,y)**2 z.backward() return z opt.step(reevaluate) ctx._the_function = F ctx.save_for_backward(x, y) return y @staticmethod def backward(ctx, output_grad): x, y = ctx.saved_tensors F = ctx._the_function with torch.enable_grad(): xv = x.detach().requires_grad_() y = y.detach().requires_grad_() z = F(xv,y) z.backward() return -xv.grad/y.grad*output_grad, None, None, None Explanation: Now we can turn to the implicit function. In evaluating $f(x)=y$, we need to actually search for the solution $y$ to $F(x,y)=0$. We do this by searching for a minimum of $(F(x,y))^2$, but we also need to provide a starting point $y_0$ near which to look. (And indeed, in the circle example below, we will have two solutions and it would typically be a problem when these are close.) We use the pytorch LBFGS optimizer for a limited number of steps. Note that LBFGS also has other stopping criteria, so this really is a bound, in fact, achieving the other critera can be considered success, hitting the maximal number of iterations might be considered failure to find the minimum. The scipy optimization documentation has more details. In order for LBFGS to work, you need to provide a function that re-evaluates $F^2$ and the gradient as the closure argument to the step call. ~~As we need to call backward on F for the gradient computation and I ran into locking problems when doing so in Implicit's backward method, we compute the required $\frac{dF}{dx}$ and $\frac{dF}{dy}$ in the forward. In fact we precompute $-\frac{df}{dx}$ and save it to the context ctx. In the backward, we wrap the saved result in a variable and multiply by the output_grad to do our step in the backpropagation.~~ PyTorch is ever improving - for the 0.4 update, I moved the derivative calculation in the backward. We compute the required $\frac{dF}{dx}$ and $\frac{dF}{dy}$ by calling backward in the backward. I don't know if all detach are needed or if some of the tensors are pre-detached, but I'm going to play it safe here. 
End of explanation def circle(x,y): return x**2+y**2-1 Explanation: So now that we have a cool new autograd function, let us apply it to an example (and I must admit, I just took it from the Wikipedia page, the application I have in mind needs more context). The unit circle in the plane can be described by the equation $x^2+y^2 = 1$ or, equivalently $F(x,y) := x^2+y^2-1 = 0$. So let us define $F$. End of explanation x = numpy.linspace(-1.2,1.2,30) y = numpy.linspace(-1.2,1.2,30) t = numpy.linspace(0, 2*numpy.pi) xx = numpy.repeat(x, len(y)).reshape(x.shape[0],y.shape[0]) yy = numpy.tile(y, (len(x),)).reshape(x.shape[0],y.shape[0]) fig = pyplot.figure() ax = fig.gca(projection='3d') ax.plot_wireframe(xx,yy,circle(xx,yy),cmap=pyplot.cm.coolwarm_r, linewidth=0.01, antialiased=False) ax.plot(numpy.cos(t),numpy.sin(t), 0, linewidth=3, c="black") ax.view_init(elev=40., azim=30) Explanation: Everybody loves pictures, let us plot $F$ (the blue grid). In black is the circle $F(x,y)=0$. End of explanation x = torch.tensor([0.5], dtype=torch.double, requires_grad=True) y0 = torch.tensor([0.5], dtype=torch.double) y= Implicit.apply(x, y0, circle) print (y.item(), (1-0.5**2)**0.5) Explanation: Now we can pick a point $x$, say $1/2$ and seek the matching $y=f(x)$ on the circle, starting from $\frac{1}{2}$. We know that actually $y = f(x) = \sqrt{1-x^2} = \frac{\sqrt{3}}{2}$. Of course, we would run into trouble for $x$ close to $1$ or $-1$. End of explanation y.backward() x.grad.shape, y.shape Explanation: Works! Let us compute the derivative. We can do that by hand, using the implicit function theorem, we have $\frac{d}{dx} F(x,y) = 2x$ and $\frac{d}{dy} F(x,y) = 2y$, so $\frac{df}{dx}(x) = - x/y$. That is about the technical complication I can handle. If we didn't like using the implicit function theorem, we would have to do this by $\frac{d}{dx} f(x) = \frac{d}{dx} f(x) = \frac{1}{2} \frac{-2x}{\sqrt{1-x^2}}$ and plugging back in $x$ and $y$ we see that $\frac{d}{dx} f(x) = -\frac{x}{y}$. But of course, we can also let the autograd do its thing: End of explanation torch.autograd.gradcheck(lambda x: Implicit.apply(x.unsqueeze(0),y0,circle), x) Explanation: Awesome. In fact, we can also use pytorch's automated checker (and indeed this is why I used DoubleTensors, as the gradient checker can be too strict to use single precision floats). It seems that gradcheck does not like the 1d $x$. End of explanation
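One natural extension, not in the original notebook, is to check the autograd gradient against the analytic value $-x/y$ at a few more points on the upper half of the circle. A minimal sketch, reusing the Implicit function and circle defined above; the starting guess 0.7 and the sample points are arbitrary choices, and agreement holds only up to how well LBFGS converged:

```python
# Hypothetical sanity check, reusing Implicit and circle from the cells above.
for x0 in (0.1, 0.5, 0.9):
    x = torch.tensor([x0], dtype=torch.double, requires_grad=True)
    y_start = torch.tensor([0.7], dtype=torch.double)  # guess on the upper branch
    y = Implicit.apply(x, y_start, circle)
    y.backward()
    analytic = -x0 / y.item()  # implicit function theorem: df/dx = -x/y
    print(f"x={x0:.1f}  autograd={x.grad.item():+.6f}  analytic={analytic:+.6f}")
```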
9,242
Given the following text description, write Python code to implement the functionality described below step by step Description: Introducing the Keras Sequential API Learning Objectives 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with at Keras model Introduction The Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the keras sequential api and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediciton. Step1: Start by importing the necessary libraries for this lab. Step2: Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data. Step3: Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the previous notebook. Step4: Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want to a sneak peak browse the official TensorFlow feature columns guide. In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column() We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. Step5: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. Step6: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments Step7: Train the model To train your model, Keras provides three functions that can be used Step8: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training. Step9: High-level model evaluation Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above. Step10: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. 
Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object. Step11: Making predictions with our model To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here, since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted. Step12: Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. Step13: Deploy our model to AI Platform Finally, we will deploy our trained model to AI Platform and see how we can make online predictions.
Python Code: # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0 Explanation: Introducing the Keras Sequential API Learning Objectives 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with at Keras model Introduction The Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the keras sequential api and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediciton. End of explanation import datetime import os import shutil import numpy as np import pandas as pd import tensorflow as tf from matplotlib import pyplot as plt from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, DenseFeatures from tensorflow.keras.callbacks import TensorBoard print(tf.__version__) %matplotlib inline Explanation: Start by importing the necessary libraries for this lab. End of explanation !ls -l ../data/*.csv !head ../data/taxi*.csv Explanation: Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data. End of explanation CSV_COLUMNS = [ 'fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key' ] LABEL_COLUMN = 'fare_amount' DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']] UNWANTED_COLS = ['pickup_datetime', 'key'] def features_and_labels(row_data): label = row_data.pop(LABEL_COLUMN) features = row_data for unwanted_col in UNWANTED_COLS: features.pop(unwanted_col) return features, label def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): dataset = tf.data.experimental.make_csv_dataset( pattern, batch_size, CSV_COLUMNS, DEFAULTS) dataset = dataset.map(features_and_labels) if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(buffer_size=1000).repeat() # take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(1) return dataset Explanation: Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the previous notebook. End of explanation INPUT_COLS = [ 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', ] # Create input layer of feature columns # TODO 2 feature_columns = { colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS } Explanation: Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want to a sneak peak browse the official TensorFlow feature columns guide. In our case we won't do any feature engineering. 
However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column() We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. End of explanation # Build a keras DNN model using Sequential API # TODO 1 model = Sequential([ DenseFeatures(feature_columns=feature_columns.values()), Dense(units=32, activation="relu", name="h1"), Dense(units=8, activation="relu", name="h2"), Dense(units=1, activation="linear", name="output") ]) Explanation: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. End of explanation # Create a custom evalution metric def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) # Compile the keras model model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"]) Explanation: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments: An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function. A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function. We will add an additional custom metric called rmse to our list of metrics which will return the root mean square error. End of explanation TRAIN_BATCH_SIZE = 1000 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around NUM_EVALS = 50 # how many times to evaluate NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample trainds = create_dataset( pattern='../data/taxi-train*', batch_size=TRAIN_BATCH_SIZE, mode=tf.estimator.ModeKeys.TRAIN) evalds = create_dataset( pattern='../data/taxi-valid*', batch_size=1000, mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000) Explanation: Train the model To train your model, Keras provides three functions that can be used: 1. .fit() for training a model for a fixed number of epochs (iterations on a dataset). 2. .fit_generator() for training a model on data yielded batch-by-batch by a generator 3. .train_on_batch() runs a single gradient update on a single batch of data. The .fit() function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc) you will need to use .fit_generator() instead. The .train_on_batch() method is for more fine-grained control over training and accepts only a single batch of data. The taxifare dataset we sampled is small enough to fit in memory, so can we could use .fit to train our model. Our create_dataset function above generates batches of training examples, so we could also use .fit_generator. 
In fact, when calling .fit the method inspects the data, and if it's a generator (as our dataset is) it will invoke automatically .fit_generator for training. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you the the blog post ML Design Pattern #3: Virtual Epochs for further details on why express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform. End of explanation %time # TODO 3 steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) LOGDIR = "./taxi_trained" history = model.fit(x=trainds, steps_per_epoch=steps_per_epoch, epochs=NUM_EVALS, validation_data=evalds, callbacks=[TensorBoard(LOGDIR)]) Explanation: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training. End of explanation model.summary() Explanation: High-level model evaluation Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above. End of explanation RMSE_COLS = ['rmse', 'val_rmse'] pd.DataFrame(history.history)[RMSE_COLS].plot() LOSS_COLS = ['loss', 'val_loss'] pd.DataFrame(history.history)[LOSS_COLS].plot() Explanation: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object. End of explanation model.predict(x={"pickup_longitude": tf.convert_to_tensor([-73.982683]), "pickup_latitude": tf.convert_to_tensor([40.742104]), "dropoff_longitude": tf.convert_to_tensor([-73.983766]), "dropoff_latitude": tf.convert_to_tensor([40.755174]), "passenger_count": tf.convert_to_tensor([3.0])}, steps=1) Explanation: Making predictions with our model To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted. End of explanation # TODO 4 OUTPUT_DIR = "./export/savedmodel" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save(model, EXPORT_PATH) # with default serving function !saved_model_cli show \ --tag_set serve \ --signature_def serving_default \ --dir {EXPORT_PATH} !find {EXPORT_PATH} os.environ['EXPORT_PATH'] = EXPORT_PATH Explanation: Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. 
For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. End of explanation %%bash # TODO 5 PROJECT= # TODO: Change this to your PROJECT BUCKET=${PROJECT} REGION=us-east1 MODEL_NAME=taxifare VERSION_NAME=dnn if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then echo "$MODEL_NAME already exists" else echo "Creating $MODEL_NAME" gcloud ai-platform models create --regions=$REGION $MODEL_NAME fi if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... " echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME echo "Please run this cell again if you don't see a Creating message ... " sleep 2 fi echo "Creating $MODEL_NAME:$VERSION_NAME" gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME \ --framework=tensorflow --python-version=3.5 --runtime-version=1.14 \ --origin=$EXPORT_PATH --staging-bucket=gs://$BUCKET %%writefile input.json {"pickup_longitude": -73.982683, "pickup_latitude": 40.742104,"dropoff_longitude": -73.983766,"dropoff_latitude": 40.755174,"passenger_count": 3.0} # TODO 5 !gcloud ai-platform predict --model taxifare --json-instances input.json --version dnn Explanation: Deploy our model to AI Platform Finally, we will deploy our trained model to AI Platform and see how we can make online predicitons. End of explanation
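Before relying on the deployed version, it can be useful to verify the export locally. The snippet below is a minimal sketch that reloads the SavedModel and calls its default serving signature; it assumes the EXPORT_PATH variable from the export step above, and the exact input names of the signature should be checked against the saved_model_cli output shown earlier.

import tensorflow as tf

loaded = tf.saved_model.load(EXPORT_PATH)  # EXPORT_PATH from the export cell above
infer = loaded.signatures["serving_default"]
print(infer(
    pickup_longitude=tf.constant([-73.982683]),
    pickup_latitude=tf.constant([40.742104]),
    dropoff_longitude=tf.constant([-73.983766]),
    dropoff_latitude=tf.constant([40.755174]),
    passenger_count=tf.constant([3.0])))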
9,243
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Transforming an input to a known output Step2: relation between input and output is linear Step3: Defining the model to train untrained single unit (neuron) also outputs a line from same input, although another one The Artificial Neuron Step5: Defining a layer with a random number of neurons and inputs Step6: Output of a single untrained neuron Step7: Loss - Mean Squared Error Loss function is the prerequisite to training. We need an objective to optimize for. We calculate the difference between what we get as output and what we would like to get. Mean Squared Error $MSE = {\frac {1}{n}}\sum {i=1}^{n}(Y{i}-{\hat {Y_{i}}})^{2}$ https Step8: Minimize Loss by changing parameters of neuron Move in parameter space in the direction of a descent <img src='https Step9: Training Step10: Line drawn by neuron after training result after training is not perfect, but almost looks like the same line https Step11: Prebuilt Optimizers do this job (but a bit more efficient and sohpisticated) Step12: More data points, more noisy Step13: Lines model draws over time Initial Step Step14: After 500 Steps Step15: Final Step Step16: Understandinging the effect of activation functions Typically, the output of a neuron is transformed using an activation function which compresses the output to a value between 0 and 1 (sigmoid), or between -1 and 1 (tanh) or sets all negative values to zero (relu). <img src='https Step17: Logictic Regression So far we were inferring a continous value for another, now we want to classify. Imagine we have a line that separates two categories in two dimensions. Step19: We compress output between 0 and 1 using sigmoid to match y everything below 0.5 counts as 0, everthing above as 1 Step20: We have 2d input now Step21: Reconsidering the loss function cross entropy is an alternative to squared error cross entropy can be used as an error measure when a network's outputs can be thought of as representing independent hypotheses activations can be understood as representing the probability that each hypothesis might be true the loss indicates the distance between what the network believes this distribution should be, and what the teacher says it should be http Step22: The same solution using high level Keas API
Python Code: !pip install -q tf-nightly-gpu-2.0-preview import tensorflow as tf print(tf.__version__) # a small sanity check, does tf seem to work ok? hello = tf.constant('Hello TF!') print("This works: {}".format(hello)) # this should return True even on Colab tf.test.is_gpu_available() tf.test.is_built_with_cuda() !nvidia-smi tf.executing_eagerly() Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tf2/tf-low-level.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Introduction to Neural Networks with Low Level TensorFlow 2 Based on * This thread is a crash course on everything you need to know to use TensorFlow 2.0 + Keras for deep learning research: https://twitter.com/fchollet/status/1105139360226140160 * Colab Notebook tf.keras for Researchers: https://colab.research.google.com/drive/17u-pRZJnKN0gO5XZmq8n5A2bKGrfKEUg#scrollTo=UHOOlixcQ9Gl * Effective TensorFlow 2: https://www.tensorflow.org/alpha/guide/effective_tf2 End of explanation input = [[-1], [0], [1], [2], [3], [4]] output = [[2], [1], [0], [-1], [-2], [-3]] import matplotlib.pyplot as plt plt.xlabel('input') plt.ylabel('output') plt.plot(input, output, 'ro') Explanation: Transforming an input to a known output End of explanation plt.plot(input, output) plt.plot(input, output, 'ro') Explanation: relation between input and output is linear End of explanation w = tf.constant([[1.5], [-2], [1]], dtype='float32') x = tf.constant([[10, 6, 8]], dtype='float32') b = tf.constant([6], dtype='float32') y = tf.matmul(x, w) + b print(y) Explanation: Defining the model to train untrained single unit (neuron) also outputs a line from same input, although another one The Artificial Neuron: Foundation of Deep Neural Networks (simplified, more later) a neuron takes a number of numerical inputs multiplies each with a weight, sums up all weighted input and adds bias (constant) to that sum from this it creates a single numerical output for one input (one dimension) this would be a description of a line for more dimensions this describes a hyper plane that can serve as a decision boundary this is typically expressed as a matrix multplication plus an addition <img src='https://djcordhose.github.io/ai/img/insurance/neuron211.jpg'> This can be expressed using a matrix multiplication End of explanation from tensorflow.keras.layers import Layer class LinearLayer(Layer): y = w.x + b def __init__(self, units=1, input_dim=1): super(LinearLayer, self).__init__() w_init = tf.random_normal_initializer(stddev=2) self.w = tf.Variable( initial_value = w_init(shape=(input_dim, units), dtype='float32'), trainable=True) b_init = tf.zeros_initializer() self.b = tf.Variable( initial_value = b_init(shape=(units,), dtype='float32'), trainable=True) def call(self, inputs): return tf.matmul(inputs, self.w) + self.b linear_layer = LinearLayer() Explanation: Defining a layer with a random number of neurons and inputs End of explanation x = tf.constant(input, dtype=tf.float32) y_true = tf.constant(output, dtype=tf.float32) y_true y_pred = linear_layer(x) y_pred plt.plot(x, y_pred) plt.plot(input, output, 'ro') Explanation: Output of a single untrained neuron End of explanation loss_fn = tf.losses.mean_squared_error # loss_fn = tf.losses.mean_absolute_error loss = loss_fn(y_true=tf.squeeze(y_true), y_pred=tf.squeeze(y_pred)) print(loss) tf.keras.losses.mean_squared_error == tf.losses.mean_squared_error Explanation: Loss - Mean Squared Error Loss function is 
the prerequisite to training. We need an objective to optimize for. We calculate the difference between what we get as output and what we would like to get. Mean Squared Error $MSE = {\frac {1}{n}}\sum {i=1}^{n}(Y{i}-{\hat {Y_{i}}})^{2}$ https://en.wikipedia.org/wiki/Mean_squared_error End of explanation # a simple example # f(x) = x^2 # f'(x) = 2x # x = 4 # f(4) = 16 # f'(4) = 8 (that's what we expect) def tape_sample(): x = tf.constant(4.0) # open a GradientTape with tf.GradientTape() as tape: tape.watch(x) y = x * x dy_dx = tape.gradient(y, x) print(dy_dx) # just a function in order not to interfere with x on the global scope tape_sample() Explanation: Minimize Loss by changing parameters of neuron Move in parameter space in the direction of a descent <img src='https://djcordhose.github.io/ai/img/gradients.jpg'> https://twitter.com/colindcarroll/status/1090266016259534848 Job of the optimizer <img src='https://djcordhose.github.io/ai/img/manning/optimizer.png' height=500> For this we need partial derivations TensorFlow offers automatic differentiation: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/GradientTape tape will record operations for automatic differentiation either by making it record explicily (watch) or by declaring a varible to be trainable (which we did in the layer above) End of explanation linear_layer = LinearLayer() linear_layer.w, linear_layer.b linear_layer.trainable_weights EPOCHS = 200 learning_rate = 1e-2 losses = [] weights = [] biases = [] weights_gradient = [] biases_gradient = [] for step in range(EPOCHS): with tf.GradientTape() as tape: # forward pass y_pred = linear_layer(x) # loss value for this batch loss = loss_fn(y_true=tf.squeeze(y_true), y_pred=tf.squeeze(y_pred)) # just for logging losses.append(loss.numpy()) weights.append(linear_layer.w.numpy()[0][0]) biases.append(linear_layer.b.numpy()[0]) # get gradients of weights wrt the loss gradients = tape.gradient(loss, linear_layer.trainable_weights) weights_gradient.append(gradients[0].numpy()[0][0]) biases_gradient.append(gradients[1].numpy()[0]) # backward pass, changing trainable weights linear_layer.w.assign_sub(learning_rate * gradients[0]) linear_layer.b.assign_sub(learning_rate * gradients[1]) print(loss) plt.xlabel('epochs') plt.ylabel('loss') # plt.yscale('log') plt.plot(losses) plt.figure(figsize=(20, 10)) plt.plot(weights) plt.plot(biases) plt.plot(weights_gradient) plt.plot(biases_gradient) plt.legend(['slope', 'offset', 'gradient slope', 'gradient offset']) Explanation: Training End of explanation y_pred = linear_layer(x) y_pred plt.plot(x, y_pred) plt.plot(input, output, 'ro') # single neuron and single input: one weight and one bias # slope m ~ -1 # y-axis offset y0 ~ 1 # https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form linear_layer.trainable_weights Explanation: Line drawn by neuron after training result after training is not perfect, but almost looks like the same line https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form End of explanation optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) EPOCHS = 500 losses = [] linear_layer = LinearLayer() for step in range(EPOCHS): with tf.GradientTape() as tape: # Forward pass. y_pred = linear_layer(x) # Loss value for this batch. loss = loss_fn(y_true=tf.squeeze(y_true), y_pred=tf.squeeze(y_pred)) losses.append(loss) # Get gradients of weights wrt the loss. gradients = tape.gradient(loss, linear_layer.trainable_weights) # Update the weights of our linear layer. 
optimizer.apply_gradients(zip(gradients, linear_layer.trainable_weights)) # plt.yscale('log') plt.ylabel("loss") plt.xlabel("epochs") plt.plot(losses) y_pred = linear_layer(x) plt.plot(x, y_pred) plt.plot(input, output, 'ro') linear_layer.trainable_weights Explanation: Prebuilt Optimizers do this job (but a bit more efficient and sohpisticated) End of explanation import numpy as np a = -1 b = 1 n = 50 x = tf.constant(np.random.uniform(0, 1, n), dtype='float32') y = tf.constant(a*x+b + 0.1 * np.random.normal(0, 1, n), dtype='float32') plt.scatter(x, y) x = tf.reshape(x, (n, 1)) y_true = tf.reshape(y, (n, 1)) linear_layer = LinearLayer() a = linear_layer.w.numpy()[0][0] b = linear_layer.b.numpy()[0] def plot_line(a, b, x, y_true): fig, ax = plt.subplots() y_pred = a * x + b line = ax.plot(x, y_pred) ax.plot(x, y_true, 'ro') return fig, line plot_line(a, b, x, y_true) # the problem is a little bit harder, train for a little longer EPOCHS = 2000 losses = [] lines = [] linear_layer = LinearLayer() for step in range(EPOCHS): # Open a GradientTape. with tf.GradientTape() as tape: # Forward pass. y_pred = linear_layer(x) # Loss value for this batch. loss = loss_fn(y_true=tf.squeeze(y_true), y_pred=tf.squeeze(y_pred)) losses.append(loss) a = linear_layer.w.numpy()[0][0] b = linear_layer.b.numpy()[0] lines.append((a, b)) # Get gradients of weights wrt the loss. gradients = tape.gradient(loss, linear_layer.trainable_weights) # Update the weights of our linear layer. optimizer.apply_gradients(zip(gradients, linear_layer.trainable_weights)) print(loss) # plt.yscale('log') plt.ylabel("loss") plt.xlabel("epochs") plt.plot(losses) Explanation: More data points, more noisy End of explanation a, b = lines[0] plot_line(a, b, x, y_true) Explanation: Lines model draws over time Initial Step End of explanation a, b = lines[500] plot_line(a, b, x, y_true) Explanation: After 500 Steps End of explanation a, b = lines[1999] plot_line(a, b, x, y_true) Explanation: Final Step End of explanation import numpy as np x = tf.reshape(tf.constant(np.arange(-1, 4, 0.1), dtype='float32'), (50, 1)) y_pred = linear_layer(x) plt.figure(figsize=(20, 10)) plt.plot(x, y_pred) y_pred_relu = tf.nn.relu(y_pred) plt.plot(x, y_pred_relu) y_pred_sigmoid = tf.nn.sigmoid(y_pred) plt.plot(x, y_pred_sigmoid) y_pred_tanh = tf.nn.tanh(y_pred) plt.plot(x, y_pred_tanh) plt.plot(input, output, 'ro') plt.legend(['no activation', 'relu', 'sigmoid', 'tanh']) Explanation: Understandinging the effect of activation functions Typically, the output of a neuron is transformed using an activation function which compresses the output to a value between 0 and 1 (sigmoid), or between -1 and 1 (tanh) or sets all negative values to zero (relu). <img src='https://raw.githubusercontent.com/DJCordhose/deep-learning-crash-course-notebooks/master/img/neuron.jpg'> Typical Activation Functions <img src='https://djcordhose.github.io/ai/img/activation-functions.jpg'> End of explanation from matplotlib.colors import ListedColormap a = -1 b = 1 n = 100 # all points X = np.random.uniform(0, 1, (n, 2)) # our line line_x = np.random.uniform(0, 1, n) line_y = a*line_x+b plt.plot(line_x, line_y, 'r') # below and above line y = X[:, 1] > a*X[:, 0]+b y = y.astype(int) plt.xlabel("x1") plt.ylabel("x2") plt.scatter(X[:,0], X[:,1], c=y, cmap=ListedColormap(['#AA6666', '#6666AA']), marker='o', edgecolors='k') y Explanation: Logictic Regression So far we were inferring a continous value for another, now we want to classify. 
Imagine we have a line that separates two categories in two dimensions. End of explanation class SigmoidLayer(LinearLayer): y = sigmoid(w.x + b) def __init__(self, **kwargs): super(SigmoidLayer, self).__init__(**kwargs) def call(self, inputs): return tf.sigmoid(super().call(inputs)) Explanation: We compress output between 0 and 1 using sigmoid to match y everything below 0.5 counts as 0, everthing above as 1 End of explanation x = tf.constant(X, dtype='float32') y_true = tf.constant(y, dtype='float32') x.shape model = SigmoidLayer(input_dim=2) Explanation: We have 2d input now End of explanation loss_fn = tf.losses.binary_crossentropy # standard optimizer using advanced properties optimizer = tf.keras.optimizers.Adam(learning_rate=1e-1) # https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/metrics/Accuracy m = tf.keras.metrics.Accuracy() EPOCHS = 1000 losses = [] accuracies = [] for step in range(EPOCHS): # Open a GradientTape. with tf.GradientTape() as tape: # Forward pass. y_pred = model(x) # Loss value for this batch. loss = loss_fn(y_true=tf.squeeze(y_true), y_pred=tf.squeeze(y_pred)) y_pred_binary = (tf.squeeze(y_pred) > 0.5).numpy().astype(float) m.update_state(tf.squeeze(y_true), y_pred_binary) accuracy = m.result().numpy() losses.append(loss) accuracies.append(accuracy) # Get gradients of weights wrt the loss. gradients = tape.gradient(loss, model.trainable_weights) # Update the weights of our linear layer. optimizer.apply_gradients(zip(gradients, model.trainable_weights)) print(loss) print(accuracy) plt.yscale('log') plt.ylabel("loss") plt.xlabel("epochs") plt.plot(losses) plt.ylabel("accuracy") plt.xlabel("epochs") plt.plot(accuracies) y_pred = model(x) y_pred_binary = (tf.squeeze(y_pred) > 0.5).numpy().astype(float) y_pred_binary y_true - y_pred_binary # below and above line plt.xlabel("x1") plt.ylabel("x2") plt.scatter(X[:,0], X[:,1], c=y_pred_binary, cmap=ListedColormap(['#AA6666', '#6666AA']), marker='o', edgecolors='k') Explanation: Reconsidering the loss function cross entropy is an alternative to squared error cross entropy can be used as an error measure when a network's outputs can be thought of as representing independent hypotheses activations can be understood as representing the probability that each hypothesis might be true the loss indicates the distance between what the network believes this distribution should be, and what the teacher says it should be http://www.cse.unsw.edu.au/~billw/cs9444/crossentropy.html End of explanation from tensorflow.keras.layers import Dense model = tf.keras.Sequential() model.add(Dense(units=1, activation='sigmoid', input_dim=2)) model.summary() %%time model.compile(loss=loss_fn, # binary cross entropy, unchanged from low level example optimizer=optimizer, # adam, unchanged from low level example metrics=['accuracy']) # does a similar thing internally as our loop from above history = model.fit(x, y_true, epochs=EPOCHS, verbose=0) loss, accuracy = model.evaluate(x, y_true) loss, accuracy plt.yscale('log') plt.ylabel("accuracy") plt.xlabel("epochs") plt.plot(history.history['accuracy']) plt.yscale('log') plt.ylabel("loss") plt.xlabel("epochs") plt.plot(history.history['loss']) y_pred = model.predict(x) y_pred_binary = (tf.squeeze(y_pred) > 0.5).numpy().astype(float) # below and above line plt.xlabel("x1") plt.ylabel("x2") plt.scatter(X[:,0], X[:,1], c=y_pred_binary, cmap=ListedColormap(['#AA6666', '#6666AA']), marker='o', edgecolors='k') Explanation: The same solution using high level Keas API End of explanation
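As a bridge between the hand-written loop and what Keras does internally when you call .fit, the custom training step can also be wrapped in tf.function so that TensorFlow traces it into a graph and avoids Python overhead on every step. This is a minimal sketch that reuses the model, loss_fn, optimizer, x and y_true already defined above.

@tf.function
def train_step():
    with tf.GradientTape() as tape:
        y_pred = model(x)
        loss = loss_fn(y_true=tf.squeeze(y_true), y_pred=tf.squeeze(y_pred))
    gradients = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))
    return loss

for step in range(100):
    loss = train_step()
print(loss)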
9,244
Given the following text description, write Python code to implement the functionality described below step by step Description: Welcome to Jupyter! With Jupyter notebooks you can write and execute code, annotate it with Markdown and use powerful visualization tools all in one document. Running code Code cells can be executed in sequence by pressing Shift-ENTER. Try it now. Step1: Visualisations Many Python visualization libraries, matplotlib for example, integrate seamlessly with Jupyter. Visualizations will appear directly in the notebook. Step2: Tensorflow environment and accelerators On Google's AI Platform notebooks, Tensorflow support is built-in and powerful accelerators are supported out of the box. Run this cell to test if your current notebook instance has Tensorflow and an accelerator (in some codelabs, you will add an accelerator later).
Python Code: import math from matplotlib import pyplot as plt a=1 b=2 a+b Explanation: Welcome to Jupyter! With Jupyter notebooks you can write and execute code, annotate it with Markdownd and use powerful visualization tools all in one document. Running code Code cells can be executed in sequence by pressing Shift-ENTER. Try it now. End of explanation def display_sinusoid(): X = range(180) Y = [math.sin(x/10.0) for x in X] plt.plot(X, Y) display_sinusoid() Explanation: Visualisations Many Python visualization libratries, matplotlib for exampl, intergate seamlessly with Jupyter. Visualiazations will appear directly in the notebook. End of explanation import tensorflow as tf from tensorflow.python.client import device_lib print("Tensorflow version " + tf.__version__) try: tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # TPU detection strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu) print("Running on TPU") except ValueError: local_devices = device_lib.list_local_devices() gpu_list = [x.name for x in local_devices if x.device_type == 'GPU'] if gpu_list: print("Running on GPU: {}".format(gpu_list)) else: print("Running on CPU") Explanation: Tensorflow environment and accelerators On Google's AI Platform notebooks, Tensorflow support is built-in and powerful accelerators are supported out of the box. Run this cell to test if your current notebook instance has Tensorflow and an accelerator (in some codelabs, you will add an accelerator later). End of explanation
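The accelerator check above relies on tf.contrib, which only exists in TensorFlow 1.x. On an instance running TensorFlow 2.x, a roughly equivalent GPU check can be sketched with the tf.config API (in 2.0 the same call lives under tf.config.experimental; TPU setup still needs a cluster resolver).

import tensorflow as tf

print("Tensorflow version " + tf.__version__)
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("Running on GPU: {}".format([gpu.name for gpu in gpus]))
else:
    print("Running on CPU")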
9,245
Given the following text description, write Python code to implement the functionality described below step by step Description: How HDBSCAN Works HDBSCAN is a clustering algorithm developed by Campello, Moulavi, and Sander. It extends DBSCAN by converting it into a hierarchical clustering algorithm, and then using a technique to extract a flat clustering based in the stability of clusters. The goal of this notebook is to give you an overview of how the algorithm works and the motivations behind it. In contrast to the HDBSCAN paper I'm going to describe it without reference to DBSCAN. Instead I'm going to explain how I like to think about the algorithm, which aligns more closely with Robust Single Linkage with flat cluster extraction on top of it. Before we get started we'll load up most of the libraries we'll need in the background, and set up our plotting (because I believe the best way to understand what is going on is to actually see it working in pictures). Step1: The next thing we'll need is some data. To make for an illustrative example we'll need the data size to be fairly small so we can see what is going on. It will also be useful to have several clusters, preferably of different kinds. Fortunately sklearn has facilities for generating sample clustering data so I'll make use of that and make a dataset of one hundred data points. Step2: Now, the best way to explain HDBSCAN is actually just use it and then go through the steps that occurred along the way teasing out what is happening at each step. So let's load up the hdbscan library and get to work. Step3: So now that we have clustered the data -- what actually happened? We can break it out into a series of steps Transform the space according to the density/sparsity. Build the minimum spanning tree of the distance weighted graph. Construct a cluster hierarchy of connected components. Condense the cluster hierarchy based on minimum cluster size. Extract the stable clusters from the condensed tree. Transform the space To find clusters we want to find the islands of higher density amid a sea of sparser noise -- and the assumption of noise is important Step4: Build the cluster hierarchy Given the minimal spanning tree, the next step is to convert that into the hierarchy of connected components. This is most easily done in the reverse order Step5: This brings us to the point where robust single linkage stops. We want more though; a cluster hierarchy is good, but we really want a set of flat clusters. We could do that by drawing a a horizontal line through the above diagram and selecting the clusters that it cuts through. This is in practice what DBSCAN effectively does (declaring any singleton clusters at the cut level as noise). The question is, how do we know where to draw that line? DBSCAN simply leaves that as a (very unintuitive) parameter. Worse, we really want to deal with variable density clusters and any choice of cut line is a choice of mutual reachability distance to cut at, and hence a single fixed density level. Ideally we want to be able to cut the tree at different places to select our clusters. This is where the next steps of HDBSCAN begin and create the difference from robust single linkage. Condense the cluster tree The first step in cluster extraction is condensing down the large and complicated cluster hierarchy into a smaller tree with a little more data attached to each node. 
As you can see in the hierarchy above it is often the case that a cluster split is one or two points splitting off from a cluster; and that is the key point -- rather than seeing it as a cluster splitting into two new clusters we want to view it as a single persistent cluster that is 'losing points'. To make this concrete we need a notion of minimum cluster size which we take as a parameter to HDBSCAN. Once we have a value for minimum cluster size we can now walk through the hierarchy and at each split ask if one of the new clusters created by the split has fewer points than the minimum cluster size. If it is the case that we have fewer points than the minimum cluster size we declare it to be 'points falling out of a cluster' and have the larger cluster retain the cluster identity of the parent, marking down which points 'fell out of the cluster' and at what distance value that happened. If on the other hand the split is into two clusters each at least as large as the minimum cluster size then we consider that a true cluster split and let that split persist in the tree. After walking through the whole hierarchy and doing this we end up with a much smaller tree with a small number of nodes, each of which has data about how the size of the cluster at that node decreases over varying distance. We can visualize this as a dendrogram similar to the one above -- again we can have the width of the line represent the number of points in the cluster. This time, however, that width varies over the length of the line as points fall out of the cluster. For our data using a minimum cluster size of 5 the result looks like this Step6: This is much easier to look at and deal with, particularly in as simple a clustering problem as our current test dataset. However we still need to pick out clusters to use as a flat clustering. Looking at the plot above should give you some ideas about how one might go about doing this. Extract the clusters Intuitively we want to choose clusters that persist and have a longer lifetime; short lived clusters are ultimately probably merely artifacts of the single linkage approach. Looking at the previous plot we could say that we want to choose those clusters that have the greatest area of ink in the plot. To make a flat clustering we will need to add a further requirement that, if you select a cluster, then you cannot select any cluster that is a descendant of it. And in fact that intuitive notion of what should be done is exactly what HDBSCAN does. Of course we need to formalise things to make it a concrete algorithm. First we need a different measure than distance to consider the persistence of clusters; instead we will use $\lambda = \frac{1}{\mathrm{distance}}$. For a given cluster we can then define values $\lambda_{\mathrm{birth}}$ and $\lambda_{\mathrm{death}}$ to be the lambda value when the cluster split off and became its own cluster, and the lambda value (if any) when the cluster split into smaller clusters respectively. In turn, for a given cluster, for each point p in that cluster we can define the value $\lambda_p$ as the lambda value at which that point 'fell out of the cluster' which is a value somewhere between $\lambda_{\mathrm{birth}}$ and $\lambda_{\mathrm{death}}$ since the point either falls out of the cluster at some point in the cluster's lifetime, or leaves the cluster when the cluster splits into two smaller clusters. Now, for each cluster compute the stability as $\sum_{p \in \mathrm{cluster}} (\lambda_p - \lambda_{\mathrm{birth}})$. 
Declare all leaf nodes to be selected clusters. Now work up through the tree (the reverse topological sort order). If the sum of the stabilities of the child clusters is greater than the stability of the cluster then we set the cluster stability to be the sum of the child stabilities. If, on the other hand, the cluster's stability is greater than the sum of its children then we declare the cluster to be a selected cluster, and unselect all its descendants. Once we reach the root node we call the current set of selected clusters our flat clustering and return that. Okay, that was wordy and complicated, but it really is simply performing our 'select the clusters in the plot with the largest total ink area' subject to descendant constraints that we explained earlier. We can select the clusters in the condensed tree dendrogram via this algorithm, and you get what you expect Step7: Now that we have the clusters it is a simple enough matter to turn that into cluster labelling as per the sklearn API. Any point not in a selected cluster is simply a noise point (and assigned the label -1). We can do a little more though
Python Code: import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn.datasets as data %matplotlib inline sns.set_context('poster') sns.set_style('white') sns.set_color_codes() plot_kwds = {'alpha' : 0.5, 's' : 80, 'linewidths':0} Explanation: How HDBSCAN Works HDBSCAN is a clustering algorithm developed by Campello, Moulavi, and Sander. It extends DBSCAN by converting it into a hierarchical clustering algorithm, and then using a technique to extract a flat clustering based in the stability of clusters. The goal of this notebook is to give you an overview of how the algorithm works and the motivations behind it. In contrast to the HDBSCAN paper I'm going to describe it without reference to DBSCAN. Instead I'm going to explain how I like to think about the algorithm, which aligns more closely with Robust Single Linkage with flat cluster extraction on top of it. Before we get started we'll load up most of the libraries we'll need in the background, and set up our plotting (because I believe the best way to understand what is going on is to actually see it working in pictures). End of explanation moons, _ = data.make_moons(n_samples=50, noise=0.05) blobs, _ = data.make_blobs(n_samples=50, centers=[(-0.75,2.25), (1.0, 2.0)], cluster_std=0.25) test_data = np.vstack([moons, blobs]) plt.scatter(test_data.T[0], test_data.T[1], color='b', **plot_kwds) Explanation: The next thing we'll need is some data. To make for an illustrative example we'll need the data size to be fairly small so we can see what is going on. It will also be useful to have several clusters, preferably of different kinds. Fortunately sklearn has facilities for generating sample clustering data so I'll make use of that and make a dataset of one hundred data points. End of explanation import hdbscan clusterer = hdbscan.HDBSCAN(min_cluster_size=5, gen_min_span_tree=True) clusterer.fit(test_data) Explanation: Now, the best way to explain HDBSCAN is actually just use it and then go through the steps that occurred along the way teasing out what is happening at each step. So let's load up the hdbscan library and get to work. End of explanation clusterer.minimum_spanning_tree_.plot(edge_cmap='viridis', edge_alpha=0.6, node_size=80, edge_linewidth=2) Explanation: So now that we have clustered the data -- what actually happened? We can break it out into a series of steps Transform the space according to the density/sparsity. Build the minimum spanning tree of the distance weighted graph. Construct a cluster hierarchy of connected components. Condense the cluster hierarchy based on minimum cluster size. Extract the stable clusters from the condensed tree. Transform the space To find clusters we want to find the islands of higher density amid a sea of sparser noise -- and the assumption of noise is important: real data is messy and has outliers, corrupt data, and noise. The core of the clustering algorithm is single linkage clustering, and it can be quite sensitive to noise: a single noise data point in the wrong place can act as a bridge between islands, gluing them together. Obviously we want our algorithm to be robust against noise so we need to find a way to help 'lower the sea level' before running a single linkage algorithm. How can we characterize 'sea' and 'land' without doing a clustering? As long as we can get an estimate of density we can consider lower density points as the 'sea'. 
The goal here is not to perfectly distinguish 'sea' from 'land' -- this is an initial step in clustering, not the ouput -- just to make our clustering core a little more robust to noise. So given an identification of 'sea' we want to lower the sea level. For practical purposes that means making 'sea' points more distant from each other and from the 'land'. That's just the intuition however. How does it work in practice? We need a very inexpensive estimate of density, and the simplest is the distance to the kth nearest neighbor. If we have the distance matrix for our data (which we will need imminently anyway) we can simply read that off; alternatively if our metric is supported (and dimension is low) this is the sort of query that kd-trees are good for. Let's formalise this and (following the DBSCAN, LOF, and HDBSCAN literature) call it the core distance defined for parameter k for a point x and denote as $\mathrm{core}_k(x)$. Now we need a way to spread apart points with low density (correspondingly high core distance). The simple way to do this is to define a new distance metric between points which we will call (again following the literature) the mutual reachability distance. We define mutual reachability distance as follows: <center>$d_{\mathrm{mreach-}k}(a,b) = \max {\mathrm{core}_k(a), \mathrm{core}_k(b), d(a,b) }$</center> where $d(a,b)$ is the original metric distance between a and b. Under this metric dense points (with low core distance) remain the same distance from each other but sparser points are pushed away to be at least their core distance away from any other point. This effectively 'lowers the sea level' spreading sparse 'sea' points out, while leaving 'land' untouched. The caveat here is that obviously this is dependent upon the choice of k; larger k values interpret more points as being in the 'sea'. All of this is a little easier to understand with a picture, so let's use a k value of five. Then for a given point we can draw a circle for the core distance as the circle that touches the fifth nearest neighbor, like so: <img src="distance1.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480> Pick another point and we can do the same thing, this time with a different set of neighbors (one of them even being the first point we picked out). <img src="distance2.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480> And we can do that a third time for good measure, with another set of five nearest neighbors and another circle with slightly different radius again. <img src="distance3.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480> Now if we want to know the mutual reachabiility distance between the blue and green points we can start by drawing in and arrow giving the distance between green and blue: <img src="distance4.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480> This passes through the blue circle, but not the green circle -- the core distance for green is larger than the distance between blue and green. Thus we need to mark the mutual reachability distance between blue and green as larger -- equal to the radius of the green circle (easiest to picture if we base one end at the green point). 
<img src="distance4a.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480> On the other hand the mutual reachablity distance from red to green is simply distance from red to green since that distance is greater than either core distance (i.e. the distance arrow passes through both circles). <img src="distance5.svg" alt="Diagram demonstrating mutual reachability distance" width=640 height=480> In general there is underlying theory to demonstrate that mutual reachability distance as a transform works well in allowing single linkage clustering to more closely approximate the hierarchy of level sets of whatever true density distribution our points were sampled from. Build the minimum spanning tree Now that we have a new mutual reachability metric on the data we want start finding the islands on dense data. Of course dense areas are relative, and different islands may have different densities. Conceptually what we will do is the following: consider the data as a weighted graph with the data points as vertices and an edge between any two points with weight equal to the mutual reachability distance of those points. Now consider a threshold value, starting high, and steadily being lowered. Drop any edges with weight above that threshold. As we drop edges we will start to disconnect the graph into connected components. Eventually we will have a hierarchy of connected components (from completely connected to completely disconnected) at varying threshold levels. In practice this is very expensive: there are $n^2$ edges and we don't want to have to run a connected components algorithm that many times. The right thing to do is to find a minimal set of edges such that dropping any edge from the set causes a disconnection of components. But we need more, we need this set to be such that there is no lower weight edge that could connect the components. Fortunately graph theory furnishes us with just such a thing: the minimum spanning tree of the graph. We can build the minimum spanning tree very efficiently via Prim's algorithm -- we build the tree one edge at a time, always adding the lowest weight edge that connects the current tree to a vertex not yet in the tree. You can see the tree HDBSCAN constructed below; note that this is the minimum spanning tree for mutual reachability distance which is different from the pure distance in the graph. In this case we had a k value of 5. In the case that the data lives in a metric space we can use even faster methods, such as Dual Tree Boruvka to build the minimal spanning tree. End of explanation clusterer.single_linkage_tree_.plot(cmap='viridis', colorbar=True) Explanation: Build the cluster hierarchy Given the minimal spanning tree, the next step is to convert that into the hierarchy of connected components. This is most easily done in the reverse order: sort the edges of the tree by distance (in increasing order) and then iterate through, creating a new merged cluster for each edge. The only difficult part here is to identify the two clusters each edge will join together, but this is easy enough via a union-find data structure. We can view the result as a dendrogram as we see below: End of explanation clusterer.condensed_tree_.plot() Explanation: This brings us to the point where robust single linkage stops. We want more though; a cluster hierarchy is good, but we really want a set of flat clusters. We could do that by drawing a a horizontal line through the above diagram and selecting the clusters that it cuts through. 
This is in practice what DBSCAN effectively does (declaring any singleton clusters at the cut level as noise). The question is, how do we know where to draw that line? DBSCAN simply leaves that as a (very unintuitive) parameter. Worse, we really want to deal with variable density clusters and any choice of cut line is a choice of mutual reachability distance to cut at, and hence a single fixed density level. Ideally we want to be able to cut the tree at different places to select our clusters. This is where the next steps of HDBSCAN begin and create the difference from robust single linkage. Condense the cluster tree The first step in cluster extraction is condensing down the large and complicated cluster hierarchy into a smaller tree with a little more data attached to each node. As you can see in the hierarchy above it is often the case that a cluster split is one or two points splitting off from a cluster; and that is the key point -- rather than seeing it as a cluster splitting into two new clusters we want to view it as a single persistent cluster that is 'losing points'. To make this concrete we need a notion of minimum cluster size which we take as a parameter to HDBSCAN. Once we have a value for minimum cluster size we can now walk through the hierarchy and at each split ask if one of the new clusters created by the split has fewer points than the minimum cluster size. If it is the case that we have fewer points than the minimum cluster size we declare it to be 'points falling out of a cluster' and have the larger cluster retain the cluster identity of the parent, marking down which points 'fell out of the cluster' and at what distance value that happened. If on the other hand the split is into two clusters each at least as large as the minimum cluster size then we consider that a true cluster split and let that split persist in the tree. After walking through the whole hierarchy and doing this we end up with a much smaller tree with a small number of nodes, each of which has data about how the size of the cluster at that node descreases over varying distance. We can visualize this as a dendrogram similar to the one above -- again we can have the width of the line represent the number of points in the cluster. This time, however, that width varies over the length of the line as points fall our of the cluster. For our data using a minimum cluster size of 5 the result looks like this: End of explanation clusterer.condensed_tree_.plot(select_clusters=True, selection_palette=sns.color_palette()) Explanation: This is much easier to look at and deal with, particularly in as simple a clustering problem as our current test dataset. However we still need to pick out clusters to use as a flat clustering. Looking at the plot above should give you some ideas about how one might go about doing this. Extract the clusters Intuitively we want the choose clusters that persist and have a longer lifetime; short lived clusters are ultimately probably merely artifacts of the single linkage approach. Looking at the previous plot we could say that we want to choose those clusters that have the greatest area of ink in the plot. To make a flat clustering we will need to add a further requirement that, if you select a cluster, then you cannot select any cluster that is a descendant of it. And in fact that intuitive notion of what should be done is exactly what HDBSCAN does. Of course we need to formalise things to make it a concrete algorithm. 
First we need a different measure than distance to consider the persistence of clusters; instead we will use $\lambda = \frac{1}{\mathrm{distance}}$. For a given cluster we can then define values $\lambda_{\mathrm{birth}}$ and $\lambda_{\mathrm{death}}$ to be the lambda value when the cluster split off and became its own cluster, and the lambda value (if any) when the cluster split into smaller clusters respectively. In turn, for a given cluster, for each point p in that cluster we can define the value $\lambda_p$ as the lambda value at which that point 'fell out of the cluster', which is a value somewhere between $\lambda_{\mathrm{birth}}$ and $\lambda_{\mathrm{death}}$ since the point either falls out of the cluster at some point in the cluster's lifetime, or leaves the cluster when the cluster splits into two smaller clusters. Now, for each cluster compute the stability as $\sum_{p \in \mathrm{cluster}} (\lambda_p - \lambda_{\mathrm{birth}})$. Declare all leaf nodes to be selected clusters. Now work up through the tree (the reverse topological sort order). If the sum of the stabilities of the child clusters is greater than the stability of the cluster then we set the cluster stability to be the sum of the child stabilities. If, on the other hand, the cluster's stability is greater than the sum of its children then we declare the cluster to be a selected cluster, and unselect all its descendants. Once we reach the root node we call the current set of selected clusters our flat clustering and return that. Okay, that was wordy and complicated, but it really is simply performing our 'select the clusters in the plot with the largest total ink area' subject to the descendant constraints that we explained earlier. We can select the clusters in the condensed tree dendrogram via this algorithm, and you get what you expect: End of explanation palette = sns.color_palette() cluster_colors = [sns.desaturate(palette[col], sat) if col >= 0 else (0.5, 0.5, 0.5) for col, sat in zip(clusterer.labels_, clusterer.probabilities_)] plt.scatter(test_data.T[0], test_data.T[1], c=cluster_colors, **plot_kwds) Explanation: Now that we have the clusters it is a simple enough matter to turn that into cluster labelling as per the sklearn API. Any point not in a selected cluster is simply a noise point (and assigned the label -1). We can do a little more though: for each cluster we have the $\lambda_p$ for each point p in that cluster; if we simply normalize those values (so they range from zero to one) then we have a measure of the strength of cluster membership for each point in the cluster. The hdbscan library returns this as a probabilities_ attribute of the clusterer object. Thus, with labels and membership strengths in hand we can make the standard plot, choosing a color for points based on cluster label, and desaturating that color according to the strength of membership (and making unclustered points pure gray). End of explanation
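As an illustrative aside (not part of the original notebook), the mutual reachability transform described above is easy to compute directly for a small dataset; the sketch below assumes Euclidean data in a NumPy array X and uses k=5 for the core distance, matching the value used in the example:

import numpy as np
from scipy.spatial.distance import cdist

def mutual_reachability(X, k=5):
    # Pairwise distances between all points.
    dists = cdist(X, X)
    # core_k(x): distance from each point to its k-th nearest neighbour
    # (index 0 of each sorted row is the zero self-distance).
    core = np.sort(dists, axis=1)[:, k]
    # d_mreach(a, b) = max(core_k(a), core_k(b), d(a, b))
    return np.maximum(np.maximum(core[:, None], core[None, :]), dists)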
9,246
Given the following text description, write Python code to implement the functionality described below step by step Description: Interrupted workflow This example shows that using IO, you can easily interrupt your workflow, save it and continue some other time. Step1: Store the histogram (and delete it to pretend we come with a fresh table) Step2: Turn off the machine, go for lunch, return home later... Read the histogram Step3: The same one ;-) Continue filling
Python Code: import numpy as np import physt histogram = physt.h1(None, "fixed_width", bin_width=0.1, adaptive=True) histogram # Big chunk of data data1 = np.random.normal(0, 1, 10000000) histogram.fill_n(data1) histogram histogram.plot() Explanation: Interrupted workflow This example shows that using IO, you can easily interrupt your workflow, save it and continue some other time. End of explanation histogram.to_json(path="./histogram.json"); del histogram Explanation: Store the histogram (and delete it to pretend we come with a fresh table): End of explanation histogram = physt.io.load_json(path="./histogram.json") histogram histogram.plot() Explanation: Turn off the machine, go for lunch, return home later... Read the histogram: End of explanation # Another big chunk of data data1 = np.random.normal(3, 2, 10000000) histogram.fill_n(data1) histogram histogram.plot() Explanation: The same one ;-) Continue filling: End of explanation
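As a small, purely illustrative extension of the example above (not part of the original notebook), the fill/save cycle can be wrapped in a helper that checkpoints the histogram every few chunks, using only the physt calls already shown (fill_n and to_json); an interrupted run can then be resumed with physt.io.load_json:

import numpy as np
import physt

def fill_with_checkpoints(histogram, chunks, path="./histogram.json", every=5):
    # Fill chunk by chunk, saving a checkpoint every `every` chunks.
    for i, chunk in enumerate(chunks, start=1):
        histogram.fill_n(chunk)
        if i % every == 0:
            histogram.to_json(path=path)
    histogram.to_json(path=path)  # final save
    return histogram

# Example usage: stream one million normal samples in ten chunks.
h = physt.h1(None, "fixed_width", bin_width=0.1, adaptive=True)
fill_with_checkpoints(h, np.array_split(np.random.normal(0, 1, 1_000_000), 10))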
9,247
Given the following text description, write Python code to implement the functionality described below step by step Description: 5D Example In this notebook we will go through an example using the foxi code features to evaluate the expected utility of a mock scientific survey. This notebook will assume the reader is familiar with all of the concepts and is intended as a practical demonstration of the code. For more details about the utility functions themselves, as well as the inner workings of foxi, one can read the paper it was first developed in Hardwick, Vennin & Wands (2018). To begin with, we will import the module sys to append our path towards foxi. Step1: Setup of the current experiment Consider a scientific experiment over a 5D parameter space. Fitting a statistical model to the input, let us imagine that the posterior distribution is given by a multivariate Gaussian distribution with non-trivial covariances. Let us quickly generate $10^3$ samples from this 5D posterior. Feel free to change these settings to check the final results. Step2: Setup of the model priors We should now generate a series of toy model priors which can span the parameter space. These should represent a true collection of theoretical models but, in this case for simplicity and to capture the essential effects of the whole model space, we will choose Nm models that all have uniform hypersphere priors all with radius Rm, and positions drawn from a Gaussian hyperprior centred on mock_posterior_best_fit. Let us now quickly generate $500$ samples for each of these models and save them to files so that foxi can read them in. Step3: Setup of the forecast Fisher matrix Let us assume that the future likelihood will be a multivariate Gaussian over the parameters as well. The code can of course accomodate any specified fiduical point-dependent matrix, but it will be more instructive to simplify things in this way. In this instance, let us also model how the Fisher matrix varies with respect to the fiducial points with a polynomial. Step4: Running the main foxi algorithm We are now ready to run foxi with our samples and forecast Fisher matrix. The first thing to do is to run a new instance of the foxi class like so Step5: The next step is to specify where our posterior samples of the current data are, how many samples to read in, what weights they carry and identify if there are any transformations to be done on each column (the latter will be no in this case). Step6: Now that the posterior chains have been set, we can do the same for our list of prior samples. Step7: Notice that the output here is from a quick KDE of each set of prior samples. How these are used depends on the specific numerical situation, which is once again covered in more detail in Hardwick, Vennin & Wands (2018). Our final step before running foxi is to give it a python function which returns the forecast Fisher matrix at each fiducial point (i.e. evaluated at each point of the current data chains). We do this like so Step8: All of the necessary settings have now been applied for a full run of the foxi code. Depending on how many points were introduced at the level of the priors and chains, as well as the length of time it takes to evaluate the forecast Fisher matrix at each fiducial point, we may wish to continue on a compute cluster. For this simple example we have chosen, it should be enough to simply run locally on a laptop in less than 10 minutes. 
Our main run command is the following Step9: Don't be too alarmed by a few error messages about dividing by zero Step10: Once again, don't be too alarmed by a few value error messages. If there are any error messages related to converting strings to floats, this is likely to do with a delimiter problem in the text files that were read in. Tab delimited data should work fine (or at least give a number of spaces). Note also that the number of model pairs quoted here is actually the number of unique pairs + the auto-pairs (i.e. Model_i - Model_j as well as Model_i - Model_i) so this number is an additional +N_m larger than the number of unique pairs. A summary of the main results can be read from this file Step11: One thing to avoid in these results is if the quoted expected Kullback-Leibler divergence &lt;DKL&gt; is of roughly the same value as the natural log of the number of points in the posterior chains - which in this example is $\ln (10^3) \simeq 6.9$. More chain points must be used for larger values since otherwise the value of &lt;DKL&gt; is only a lower bound. A nice feature, if one sets TeX_output = True in rerun_foxi above, is the output of a fully-formatted $\LaTeX$ table incorporating of all of these results. This can be found in the following file Step12: Another useful tool provided by foxi is to generate plots illustrating the numerical convergence (or lack thereof) of the calculated expected utilities.
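As a quick sanity check of the pair counting mentioned above (an aside, not part of the original notebook), the number of expected-utility pairs reported with mix_models=True is the number of unique model pairs plus the N_m auto-pairs:

from itertools import combinations

N_m = 10
unique_pairs = len(list(combinations(range(N_m), 2)))  # N_m * (N_m - 1) / 2 = 45
total_with_auto_pairs = unique_pairs + N_m             # 45 + 10 = 55
print(unique_pairs, total_with_auto_pairs)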
Python Code: import sys path_to_foxi = '/Users/Rob/work/foxi' # Give your path to foxi here. sys.path.append(path_to_foxi + '/foxisource/') from foxi import foxi # These imports aren't stricly necessary to run foxi but they will be useful in our examples. import numpy as np from scipy.stats import multivariate_normal Explanation: 5D Example In this notebook we will go through an example using the foxi code features to evaluate the expected utility of a mock scientific survey. This notebook will assume the reader is familiar with all of the concepts and is intended as a practical demonstration of the code. For more details about the utility functions themselves, as well as the inner workings of foxi, one can read the paper it was first developed in Hardwick, Vennin & Wands (2018). To begin with, we will import the module sys to append our path towards foxi. End of explanation # Choose a state of low information to start with. mock_posterior_best_fit = np.asarray([np.random.uniform(-1.0,1.0), np.random.uniform(-1.0,1.0), np.random.uniform(-1.0,1.0), np.random.uniform(-1.0,1.0), np.random.uniform(-1.0,1.0)]) # Set the posterior best fit to be a unit vector. mock_posterior_best_fit = mock_posterior_best_fit/np.sqrt(np.sum(mock_posterior_best_fit**2.0)) # Set the posterior Fisher matrix to be diagonal for simplicity. mock_posterior_fisher_matrix = np.asarray([[ np.random.uniform(0.01,1.0), 0.0, 0.0, 0.0, 0.0], [ 0.0, np.random.uniform(0.01,1.0), 0.0, 0.0, 0.0], # This obviously assumes that the [ 0.0, 0.0, np.random.uniform(0.01,1.0), 0.0, 0.0], # covariance matrix is constant [ 0.0, 0.0, 0.0, np.random.uniform(0.01,1.0), 0.0], # with respect to the parameters. [ 0.0, 0.0, 0.0, 0.0, np.random.uniform(0.01,1.0)]]) # Give the posterior a unit-determinant Fisher matrix so that other realisations are comparable. mock_posterior_fisher_matrix = mock_posterior_fisher_matrix/(np.linalg.det(mock_posterior_fisher_matrix)**(1.0/5.0)) # Quick inversion to generate the samples and mimic some weights too. number_of_posterior_samples = 10**3 mock_posterior_covariance_matrix = np.linalg.inv(mock_posterior_fisher_matrix) mock_posterior_samples = np.random.multivariate_normal(mock_posterior_best_fit, mock_posterior_covariance_matrix, number_of_posterior_samples) mock_posterior_sample_weights = multivariate_normal.pdf(mock_posterior_samples, mean=mock_posterior_best_fit, cov=mock_posterior_covariance_matrix) mock_posterior_samples_output = np.insert(mock_posterior_samples,0,mock_posterior_sample_weights,axis=1) # Let's output this data to a file in the '/foxichains' directory which mimics a real MCMC output. np.savetxt(path_to_foxi + '/foxichains/mock_posterior_samples.txt', mock_posterior_samples_output, delimiter='\t') Explanation: Setup of the current experiment Consider a scientific experiment over a 5D parameter space. Fitting a statistical model to the input, let us imagine that the posterior distribution is given by a multivariate Gaussian distribution with non-trivial covariances. Let us quickly generate $10^3$ samples from this 5D posterior. Feel free to change these settings to check the final results. End of explanation # Feel free to vary these (though consider the number of model pairs to compare grows like Nm*(Nm-1)/2). Nm = 10 Rm = 0.0001 number_of_prior_samples = 5*10**2 # Making the model spread relatively small - not strictly necessary but improves convergence properties. hyperprior_covariance_matrix = 0.1*mock_posterior_covariance_matrix # Generate the positions. 
mock_prior_positions = np.random.multivariate_normal(mock_posterior_best_fit,hyperprior_covariance_matrix,Nm) # Generate the 5D hypersphere samples and output to text files in the '/foxipriors' directory. for im in range(0,Nm): R1 = np.random.uniform(0.0,Rm,size=number_of_prior_samples) R2 = np.random.uniform(0.0,Rm,size=number_of_prior_samples) R3 = np.random.uniform(0.0,Rm,size=number_of_prior_samples) R4 = np.random.uniform(0.0,Rm,size=number_of_prior_samples) R5 = np.random.uniform(0.0,Rm,size=number_of_prior_samples) angle1 = np.random.uniform(0.0,np.pi,size=number_of_prior_samples) angle2 = np.random.uniform(0.0,np.pi,size=number_of_prior_samples) angle3 = np.random.uniform(0.0,np.pi,size=number_of_prior_samples) angle4 = np.random.uniform(0.0,2.0*np.pi,size=number_of_prior_samples) parameter1 = R1*np.cos(angle1) + mock_prior_positions[im][0] parameter2 = R2*np.sin(angle1)*np.cos(angle2) + mock_prior_positions[im][1] parameter3 = R3*np.sin(angle1)*np.sin(angle2)*np.cos(angle3) + mock_prior_positions[im][2] parameter4 = R4*np.sin(angle1)*np.sin(angle2)*np.sin(angle3)*np.cos(angle4) + mock_prior_positions[im][3] parameter5 = R5*np.sin(angle1)*np.sin(angle2)*np.sin(angle3)*np.sin(angle4) + mock_prior_positions[im][4] mock_prior_samples = np.asarray([parameter1,parameter2,parameter3,parameter4,parameter5]).T np.savetxt(path_to_foxi + '/foxipriors/mock_prior' + str(im+1) + '_samples.txt', mock_prior_samples, delimiter='\t') Explanation: Setup of the model priors We should now generate a series of toy model priors which can span the parameter space. These should represent a true collection of theoretical models but, in this case for simplicity and to capture the essential effects of the whole model space, we will choose Nm models that all have uniform hypersphere priors all with radius Rm, and positions drawn from a Gaussian hyperprior centred on mock_posterior_best_fit. Let us now quickly generate $500$ samples for each of these models and save them to files so that foxi can read them in. End of explanation # Specify the polynomial baheviour of the forecast Fisher matrix with respect to the fiducial points. def fisher_matrix(fiducial_point): mock_forecast_fisher_matrix = np.zeros((5,5)) mock_forecast_fisher_matrix += mock_posterior_fisher_matrix mock_forecast_fisher_matrix[0][0] += mock_posterior_fisher_matrix[0][0]*((fiducial_point[0]-mock_posterior_best_fit[0])**2.0) mock_forecast_fisher_matrix[1][1] += mock_posterior_fisher_matrix[1][1]*((fiducial_point[1]-mock_posterior_best_fit[1])**2.0) mock_forecast_fisher_matrix[2][2] += mock_posterior_fisher_matrix[2][2]*((fiducial_point[2]-mock_posterior_best_fit[2])**2.0) mock_forecast_fisher_matrix[3][3] += mock_posterior_fisher_matrix[3][3]*((fiducial_point[3]-mock_posterior_best_fit[3])**2.0) mock_forecast_fisher_matrix[4][4] += mock_posterior_fisher_matrix[4][4]*((fiducial_point[4]-mock_posterior_best_fit[4])**2.0) return mock_forecast_fisher_matrix Explanation: Setup of the forecast Fisher matrix Let us assume that the future likelihood will be a multivariate Gaussian over the parameters as well. The code can of course accomodate any specified fiduical point-dependent matrix, but it will be more instructive to simplify things in this way. In this instance, let us also model how the Fisher matrix varies with respect to the fiducial points with a polynomial. End of explanation foxi_instance = foxi(path_to_foxi) Explanation: Running the main foxi algorithm We are now ready to run foxi with our samples and forecast Fisher matrix. 
The first thing to do is to run a new instance of the foxi class like so End of explanation chains_filename = 'mock_posterior_samples.txt' # Note that the column numbers start from 0... parameter_column_numbers = [1,2,3,4,5] weights_column_number = 0 # Simply set this to the number of samples generated - this can be useful to get results out as a sanity check. number_of_samples_to_read_in = number_of_posterior_samples foxi_instance.set_chains(chains_filename, parameter_column_numbers, number_of_samples_to_read_in, weights_column=weights_column_number, # All points are given weight 1 if this is ignored. column_types=None) # No transformations needed here. # One could have ['flat','log10','log'] specified for each column. Explanation: The next step is to specify where our posterior samples of the current data are, how many samples to read in, what weights they carry and identify if there are any transformations to be done on each column (the latter will be no in this case). End of explanation # List the model file names to compute the expected utilities for. model_name_list = ['mock_prior' + str(im+1) + '_samples.txt' for im in range(0,Nm)] # List the column numbers to use for each file of prior samples. prior_column_numbers = [[0,1,2,3,4] for im in range(0,Nm)] # Once again, simply set this to the number of samples we made earlier for each prior. number_of_prior_points_to_read_in = number_of_prior_samples foxi_instance.set_model_name_list(model_name_list, prior_column_numbers, number_of_prior_points_to_read_in, prior_column_types=None) # Once again, no transformations needed here. Explanation: Now that the posterior chains have been set, we can do the same for our list of prior samples. End of explanation foxi_instance.set_fisher_matrix(fisher_matrix) Explanation: Notice that the output here is from a quick KDE of each set of prior samples. How these are used depends on the specific numerical situation, which is once again covered in more detail in Hardwick, Vennin & Wands (2018). Our final step before running foxi is to give it a python function which returns the forecast Fisher matrix at each fiducial point (i.e. evaluated at each point of the current data chains). We do this like so End of explanation mix_models = True # Set this to 'True' so that the expected utilities for all possible model pairs are calculated. # The default is 'False' which calculates the utilities all with respect to the reference model # here the 0-element in 'model_name_list'. foxi_instance.run_foxifish(mix_models=mix_models) Explanation: All of the necessary settings have now been applied for a full run of the foxi code. Depending on how many points were introduced at the level of the priors and chains, as well as the length of time it takes to evaluate the forecast Fisher matrix at each fiducial point, we may wish to continue on a compute cluster. For this simple example we have chosen, it should be enough to simply run locally on a laptop in less than 10 minutes. Our main run command is the following End of explanation foxiplot_file_name = 'foxiplots_data_mix_models.txt' # This is the generic name that 'run_foxifish' will set, change # this to whatever you like as long as the file is in '/foxioutput'. # If 'mix_models = False' then remove the 'mix_models' tag. # Set this to the number of samples generated - useful to vary to check convergence though we will make plots later. 
number_of_foxiplot_samples_to_read_in = number_of_posterior_samples # We must set this feature to 'flat' in each column to perform no post-processing transformation that reweights the chains. # This can be a little redundant as it makes more numerical sense to simply generate new chains. post_chains_column_types = ['flat','flat','flat','flat','flat'] # Set this to 'True' for the output to include fully-formatted LaTeX tables! TeX_output = True # For the truly lazy - you can set the TeX name for each model in the table output too. model_TeX_names = [r'${\cal M}_' + str(i) + r'$' for i in range(0,Nm)] foxi_instance.rerun_foxi(foxiplot_file_name, number_of_foxiplot_samples_to_read_in, post_chains_column_types, model_name_TeX_input=model_TeX_names, TeX_output=TeX_output) Explanation: Don't be too alarmed by a few error messages about dividing by zero: these occur infrequently due to excessively low densities appearing in the numerical procedure but are automatically discarded as points from the main result. The algorithm should terminate with a Number of maxed-out evidences for each model: message. This message refers to the number of times when the model was found to be ruled out at the level of its maximum likelihood. Post-processing and other foxi features This last step was the longest running, from now onwards the post-processing of the results from foxi will always be quick enough to run locally on a laptop. The next step is to use rerun_foxi to analyse the output that run_foxifish dumped into the /foxioutput directory. End of explanation print(open(path_to_foxi + '/foxioutput/foxiplots_data_summary.txt', 'r').read()) Explanation: Once again, don't be too alarmed by a few value error messages. If there are any error messages related to converting strings to floats, this is likely to do with a delimiter problem in the text files that were read in. Tab delimited data should work fine (or at least give a number of spaces). Note also that the number of model pairs quoted here is actually the number of unique pairs + the auto-pairs (i.e. Model_i - Model_j as well as Model_i - Model_i) so this number is an additional +N_m larger than the number of unique pairs. A summary of the main results can be read from this file End of explanation print(open(path_to_foxi + '/foxioutput/foxiplots_data_summary_TeX.txt', 'r').read()) Explanation: One thing to avoid in these results is if the quoted expected Kullback-Leibler divergence &lt;DKL&gt; is of roughly the same value as the natural log of the number of points in the posterior chains - which in this example is $\ln (10^3) \simeq 6.9$. More chain points must be used for larger values since otherwise the value of &lt;DKL&gt; is only a lower bound. A nice feature, if one sets TeX_output = True in rerun_foxi above, is the output of a fully-formatted $\LaTeX$ table incorporating of all of these results. This can be found in the following file End of explanation # Simply specify the number of bins in which to re-calculate the expected utilty with more samples. number_of_bins = 50 # We have already specified the other inputs here so let's simply generate the plots! foxi_instance.plot_convergence(foxiplot_file_name, number_of_bins, number_of_foxiplot_samples_to_read_in, post_chains_column_types) Explanation: Another useful tool provided by foxi is to generate plots illustrating the numerical convergence (or lack thereof) of the calculated expected utilities. End of explanation
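For intuition about the quantity being estimated, the following standalone NumPy sketch (independent of foxi itself, and only an illustration) gives the closed-form Kullback-Leibler divergence between two multivariate Gaussians — the kind of information gain that the expected utility <DKL> above averages over simulated future data:

import numpy as np

def gaussian_kl(mean_0, cov_0, mean_1, cov_1):
    # KL( N(mean_0, cov_0) || N(mean_1, cov_1) ) for k-dimensional Gaussians.
    k = len(mean_0)
    cov_1_inv = np.linalg.inv(cov_1)
    diff = mean_1 - mean_0
    return 0.5 * (np.trace(cov_1_inv @ cov_0)
                  + diff @ cov_1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov_1) / np.linalg.det(cov_0)))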
9,248
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Setting a project We need to choose a project inorder to work with buckets, if you dont have any, create a project in Gcloud Console First we need to set a default project using the project_id so that you are able to use commands which unless require specifying the project. glcoud can be used to set the default project. Step2: Using gsutil- CLI for GCloud Docs Create command to create a bucket gsutil mb [-b (on|off)] [-c &lt;class&gt;] [-l &lt;location&gt;] [-p &lt;proj_id&gt;] [--retention &lt;time&gt;] [--pap &lt;setting&gt;] gs Step3: This will create a bucket with bucket_name with default configurations. Step4: Creating a local folder with a test file to upload to the bucket Step5: Uploading the folder to the bucket The object gs Step6: Read The contents of the uploaded file in the bucket can be read in this way Step7: The whole folder/file from the bucket can be downloaded in this way. Step8: Update Updating a file Edit the local copy and overwrite the file in the bucket. Step9: If you want to update a folder in the bucket to be in sync with a local copy of the folder, use rsync Making some changes to the test_folder Step10: adding a new file test_file2.txt Step11: You can check the contents of your bucket at any level using the ls cmd. bucket contents before updating. Step12: bucket contents after updating. Step13: Delete Contents of a bucket can be deleted using rm command Step14: Deleting a bucket The rb command deletes a bucket. Buckets must be empty before you can delete them. Step15: Making TFDS records Step16: Using GC python API Create Creating a bucket Step17: Uploading a object (a file with its path) Step18: Read Step19: To easily download all objects in a bucket or subdirectory, use the gsutil cp command. Update Updating a file/object Its simply overwriting the existing copy of the object. Step20: Delete objects can be deleted easily by using blob.delete() When a folder is empty it will be vanished. Step21: Deleting a bucket Step22: Using gcsfuse(mount) and bash Another way is to mount the GCS bucket to the current session of colab and you can make any regular(CRUD) operations using bash commands(just like on any other directory) In order to use this, you should first create a bucket you can do this using any of the methods described above) gsutil mb is used here Create Step23: Mounting the bucket to /content/{bucket_name} Step24: Creating a folder Step25: Creating a file/object in the folder just created. Step26: Read As the bucket is already mounted you can just access files by opening them using files or just double clicking the file Step27: Update Update to file Update can be done normally either by editing or by using python. Step28: Update to folder New files can be added either by colab's UI or by using python or any other way that works for regular dirs. Step29: Step30: Delete rm cmd can be used to delete the objects of the bucket using the mounted path. Deleting the file {bucket_name}/test_mount_folder/test_mount_file.txt Step31: to delete the folder we can sue rm -r or -rm -rf to force remove Step32: Deleting a bucket you can't delete a bucekt using gcsfuse, but you can unmount the mounted path and then can delete using any of the above mentioned ways. Unmount the bucket After unmounting, any changes in the local path will not be rendered in the cloud bucket. Step33: deleting the bucket using gsutil. 
Step34: Using GC UI Buckets can also be accessed directly through the GCloud Console's UI. Step35: Create A bucket can be created in the following way.
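Before the reference code below, here is one extra illustrative snippet (not part of the original notebook) showing how the bucket contents can also be inspected with the google-cloud-storage Python client; it assumes project_id and bucket_name already refer to a project and bucket created as described above:

from google.cloud import storage

client = storage.Client(project=project_id)

# List every object currently stored in the bucket, with its size in bytes.
for blob in client.list_blobs(bucket_name):
    print(blob.name, blob.size)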
Python Code: from google.colab import auth auth.authenticate_user() Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/GCS_demo_v2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Authenticate in order to access the GCS We don't need authentication to access public data in GCloud but in order to access protected data or write to a protected bucket, you need to set up credentials (authenticate) End of explanation project_id = "your_project_id" !gcloud config set project {project_id} Explanation: Setting a project We need to choose a project inorder to work with buckets, if you dont have any, create a project in Gcloud Console First we need to set a default project using the project_id so that you are able to use commands which unless require specifying the project. glcoud can be used to set the default project. End of explanation import uuid bucket_name = "colab-sample-bucket-" + str(uuid.uuid1()) Explanation: Using gsutil- CLI for GCloud Docs Create command to create a bucket gsutil mb [-b (on|off)] [-c &lt;class&gt;] [-l &lt;location&gt;] [-p &lt;proj_id&gt;] [--retention &lt;time&gt;] [--pap &lt;setting&gt;] gs://&lt;bucket_name&gt;... End of explanation !gsutil mb gs://{bucket_name} Explanation: This will create a bucket with bucket_name with default configurations. End of explanation !mkdir /tmp/test_folder with open("/tmp/test_folder/test_file.txt", "w") as f: f.write("this file get saved in the test_folder") Explanation: Creating a local folder with a test file to upload to the bucket End of explanation !gsutil cp -r /tmp/test_folder gs://{bucket_name} # @markdown Once the upload has finished, the data will appear in the Cloud Console storage browser for your project: print("https://console.cloud.google.com/storage/browser?project=" + project_id) Explanation: Uploading the folder to the bucket The object gs://{bucket_name}/test_folder/test_file.txt is created. End of explanation !gsutil cat gs://{bucket_name}/test_folder/test_file.txt Explanation: Read The contents of the uploaded file in the bucket can be read in this way End of explanation !gsutil cp -r gs://{bucket_name}/test_folder /content/ !gsutil cp gs://{bucket_name}/test_folder/test_file.txt /content/ Explanation: The whole folder/file from the bucket can be downloaded in this way. End of explanation with open("/tmp/test_folder/test_file.txt", "a") as f: f.write(" this new string is added later") !gsutil cp /tmp/test_folder/test_file.txt gs://{bucket_name}/test_folder !gsutil cat gs://{bucket_name}/test_folder/test_file.txt Explanation: Update Updating a file Edit the local copy and overwrite the file in the bucket. End of explanation !rm /tmp/test_folder/test_file.txt Explanation: If you want to update a folder in the bucket to be in sync with a local copy of the folder, use rsync Making some changes to the test_folder End of explanation with open("/tmp/test_folder/test_file2.txt", "w") as f: f.write("this is a new file named test_file2") Explanation: adding a new file test_file2.txt End of explanation !gsutil ls gs://{bucket_name}/test_folder !gsutil rsync -d /tmp/test_folder gs://{bucket_name}/test_folder Explanation: You can check the contents of your bucket at any level using the ls cmd. bucket contents before updating. End of explanation !gsutil ls gs://{bucket_name}/test_folder Explanation: bucket contents after updating. 
End of explanation !gsutil rm -r gs://{bucket_name}/test_folder !gsutil ls gs://{bucket_name} Explanation: Delete Contents of a bucket can be deleted using rm command End of explanation !gsutil rb gs://{bucket_name} Explanation: Deleting a bucket The rb command deletes a bucket. Buckets must be empty before you can delete them. End of explanation import tensorflow_datasets as tfds source_dir = "gs://gsoc_bucket/ILSVRC2012/" dest_dir = "gs://gsoc_bucket/ILSVRC2012_tf_records" tfds.builder("imagenet2012", data_dir=dest_dir).download_and_prepare( download_config=tfds.download.DownloadConfig(manual_dir=source_dir) ) Explanation: Making TFDS records End of explanation bucket_name = "colab-sample-bucket-" + str(uuid.uuid1()) # Imports the Google Cloud client library from google.cloud import storage # Instantiates a client storage_client = storage.Client(project=project_id) # Creates the new bucket bucket = storage_client.create_bucket(bucket_name) print("Bucket {} created.".format(bucket.name)) !mkdir /tmp/test_api_folder with open("/tmp/test_api_folder/test_api_file.txt", "w") as f: f.write("this file get saved in the test_api_folder") Explanation: Using GC python API Create Creating a bucket End of explanation destination_blob_name = "test_api_folder/test_api_file.txt" source_file_name = "/tmp/test_api_folder/test_api_file.txt" blob = bucket.blob(destination_blob_name) blob.upload_from_filename(source_file_name) print("File {} uploaded to {}.".format(source_file_name, destination_blob_name)) # @markdown Once the upload has finished, the data will appear in the Cloud Console storage browser for your project: print("https://console.cloud.google.com/storage/browser?project=" + project_id) Explanation: Uploading a object (a file with its path) End of explanation source_blob_name = "test_api_folder/test_api_file.txt" destination_file_name = "/content/downloaded_test_api.txt" source_blob = bucket.blob(source_blob_name) source_blob.download_to_filename(destination_file_name) print("Blob {} downloaded to {}.".format(source_blob_name, destination_file_name)) !cat /content/downloaded_test_api.txt Explanation: Read End of explanation with open("/tmp/test_api_folder/test_api_file.txt", "a") as f: f.write(" this is an appended string") source_file_name = "/tmp/test_api_folder/test_api_file.txt" destination_blob_name = "test_api_folder/test_api_file.txt" destination_file_name = "/content/downloaded_test_api.txt" blob = bucket.blob(destination_blob_name) blob.upload_from_filename(source_file_name) print("File {} uploaded to {}.".format(source_file_name, destination_blob_name)) blob.download_to_filename(destination_file_name) print("Blob {} downloaded to {}.".format(destination_blob_name, destination_file_name)) !cat /content/downloaded_test_api.txt Explanation: To easily download all objects in a bucket or subdirectory, use the gsutil cp command. Update Updating a file/object Its simply overwriting the existing copy of the object. End of explanation blob_name = "test_api_folder/test_api_file.txt" blob = bucket.blob(blob_name) blob.delete() print("Blob {} deleted.".format(blob_name)) !gsutil ls gs://{bucket_name} Explanation: Delete objects can be deleted easily by using blob.delete() When a folder is empty it will be vanished. 
End of explanation bucket.delete() print("Bucket {} deleted".format(bucket.name)) Explanation: Deleting a bucket End of explanation import uuid bucket_name = "colab-sample-bucket-" + str(uuid.uuid1()) !gsutil mb gs://{bucket_name} Explanation: Using gcsfuse(mount) and bash Another way is to mount the GCS bucket to the current session of colab and you can make any regular(CRUD) operations using bash commands(just like on any other directory) In order to use this, you should first create a bucket you can do this using any of the methods described above) gsutil mb is used here Create End of explanation !echo "deb http://packages.cloud.google.com/apt gcsfuse-bionic main" > /etc/apt/sources.list.d/gcsfuse.list !curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - !apt -qq update !apt -qq install gcsfuse !mkdir $bucket_name !gcsfuse $bucket_name /content/$bucket_name cd /content/$bucket_name Explanation: Mounting the bucket to /content/{bucket_name} End of explanation !mkdir test_mount_folder Explanation: Creating a folder End of explanation with open("./test_mount_folder/test_mount_file.txt", "w") as f: f.write("this file get saved in the test_folder you just created") # @markdown Once the upload has finished, the data will appear in the Cloud Console storage browser for your project: print("https://console.cloud.google.com/storage/browser?project=" + project_id) Explanation: Creating a file/object in the folder just created. End of explanation from google.colab import files files.view(f"/content/{bucket_name}/test_mount_folder/test_mount_file.txt") !cat ./test_mount_folder/test_mount_file.txt Explanation: Read As the bucket is already mounted you can just access files by opening them using files or just double clicking the file End of explanation with open("./test_mount_folder/test_mount_file.txt", "a") as f: f.write(" this new string is added later") files.view(f"/content/{bucket_name}/test_mount_folder/test_mount_file.txt") Explanation: Update Update to file Update can be done normally either by editing or by using python. End of explanation !echo "this is a second file to test_folder in the bucket" >> ./test_mount_folder/test_mount_file2.txt Explanation: Update to folder New files can be added either by colab's UI or by using python or any other way that works for regular dirs. End of explanation !cat ./test_mount_folder/test_mount_file2.txt Explanation: End of explanation !rm ./test_mount_folder/test_mount_file.txt Explanation: Delete rm cmd can be used to delete the objects of the bucket using the mounted path. Deleting the file {bucket_name}/test_mount_folder/test_mount_file.txt End of explanation !rm -r ./test_mount_folder Explanation: to delete the folder we can sue rm -r or -rm -rf to force remove End of explanation cd .. !fusermount -u /content/$bucket_name Explanation: Deleting a bucket you can't delete a bucekt using gcsfuse, but you can unmount the mounted path and then can delete using any of the above mentioned ways. Unmount the bucket After unmounting, any changes in the local path will not be rendered in the cloud bucket. End of explanation !gsutil rb gs://{bucket_name} Explanation: deleting the bucket using gsutil. End of explanation # @markdown Go to your project's browser: print("https://console.cloud.google.com/storage/browser?project=" + project_id) Explanation: Using GC UI Bucketa can be also be accessed directly through Gcloud Consoles's UI. 
End of explanation # Pick a bucket name, they should be unique, # for the demo purpose we can create a bucket name in this way import uuid bucket_name = "colab-sample-bucket-" + str(uuid.uuid1()) bucket_name Explanation: Create Bucket can be created in the following way. End of explanation
9,249
Given the following text description, write Python code to implement the functionality described below step by step Description: Setting things up Step1: Timing
Python Code: my_data = cellreader.CellpyData() # only for my MacBook filename = "/Users/jepe/scripting/cellpy/dev_data/out/20190204_FC_snx012_01_cc_01.h5" assert os.path.isfile(filename) my_data.load(filename) Explanation: Setting things up End of explanation %%timeit my_data.make_summary() %%timeit my_data.make_step_table() Explanation: Timing End of explanation
9,250
Given the following text description, write Python code to implement the functionality described below step by step Description: XML exercise Using data in 'data/mondial_database.xml', the examples above, and refering to https Step1: Not all the entries have an infant mortality rate element. So we need to make sure loop loops for the element named 'infant_mortality'. Step2: Thus we have the countries with the ten lowest reported infant mortality rate element values (in order). To get the top ten populations by city, we have to make sure we get all cities, not just the elements directly under a country, and to keep track of the various population subelements, which all have the same name. Step3: Top ten cities in the world by population as reported by the database. Step4: Largest ethnic groups by population, based on the latest estimates from each country. Finally, we look for the longest river, largest lake, and highest airport. We can take advantage of the intelligent attributes included in the database already. Playing around with the river elements, we see that while the long rivers may have multiple 'located' subelements, for each country, the river element itself has a country attribute which lists the country codes all together. This simplifies the problem. We assume there are no ties... simply because it's a bit quicker and because the coincidence seems a bit ridiculous.
Python Code: document = ET.parse( './data/mondial_database.xml' ) import pandas as pd root = document.getroot() Explanation: XML exercise Using data in 'data/mondial_database.xml', the examples above, and refering to https://docs.python.org/2.7/library/xml.etree.elementtree.html, find 10 countries with the lowest infant mortality rates 10 cities with the largest population 10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries) name and country of a) longest river, b) largest lake and c) airport at highest elevation End of explanation #get infant mortality of each country, add to heap if under capacity #otherwise check if new value is greater than smallest. inf_mort = dict() for element in document.iterfind('country'): for subelement in element.iterfind('infant_mortality'): inf_mort[element.find('name').text] = float(subelement.text) infmort_df = pd.DataFrame.from_dict(inf_mort, orient ='index') infmort_df.columns = ['infant_mortality'] infmort_df.index.names = ['country'] infmort_df.sort_values(by = 'infant_mortality', ascending = True).head(10) Explanation: Not all the entries have an infant mortality rate element. So we need to make sure loop loops for the element named 'infant_mortality'. End of explanation current_pop = 0 current_pop_year = 0 citypop = dict() for country in document.iterfind('country'): for city in country.getiterator('city'): for subelement in city.iterfind('population'): #compare attributes of identically named subelements. Use this to hold onto the latest pop estimate. if int(subelement.attrib['year']) > current_pop_year: current_pop = int(subelement.text) current_pop_year = int(subelement.attrib['year']) citypop[city.findtext('name')] = current_pop current_pop = 0 current_pop_year = 0 citypop_df = pd.DataFrame.from_dict(citypop, orient ='index') citypop_df.columns = ['population'] citypop_df.index.names = ['city'] citypop_df.sort_values(by = 'population', ascending = False).head(10) Explanation: Thus we have the countries with the ten lowest reported infant mortality rate element values (in order). To get the top ten populations by city, we have to make sure we get all cities, not just the elements directly under a country, and to keep track of the various population subelements, which all have the same name. End of explanation ethn = dict() current_pop = 0 current_pop_year = 0 for country in document.iterfind('country'): for population in country.getiterator('population'): #compare attributes of identically named subelements. Use this to hold onto the latest pop estimate. #Probably faster way to do this if sure of tree structure (i.e. last element is always latest) if int(population.attrib['year']) > current_pop_year: current_pop = int(population.text) current_pop_year = int(population.attrib['year']) for ethn_gp in country.iterfind('ethnicgroup'): if ethn_gp.text in ethn: ethn[ethn_gp.text] += current_pop*float(ethn_gp.attrib['percentage'])/100 else: ethn[ethn_gp.text] = current_pop*float(ethn_gp.attrib['percentage'])/100 current_pop = 0 current_pop_year = 0 ethnic_df = pd.DataFrame.from_dict(ethn, orient ='index') ethnic_df.columns = ['population'] ethnic_df.index.names = ['ethnic_group'] ethnic_df.groupby(ethnic_df.index).sum().sort_values(by = 'population', ascending = False).head(10) Explanation: Top ten cities in the world by population as reported by the database. 
End of explanation river_ctry=None river_name= None lake_ctry= None lake_name= None airport_ctry= None airport_name = None river_length= 0 lake_area = 0 airport_elv = 0 for river in document.iterfind('river'): for length in river.iterfind('length'): if river_length < float(length.text): river_length=float(length.text) river_ctry= river.attrib['country'] river_name = river.findtext('name') for lake in document.iterfind('lake'): for area in lake.iterfind('area'): if lake_area < float(area.text): lake_area=float(area.text) lake_ctry= lake.attrib['country'] lake_name = lake.findtext('name') for airport in document.iterfind('airport'): for elv in airport.iterfind('elevation'): #apprarently there is an airport in the database with an elevation tag an no entry. #Probably should have been doing this previously if (elv.text is not None) and (airport_elv < float(elv.text)): airport_elv=float(elv.text) airport_ctry= airport.attrib['country'] airport_name = airport.findtext('name') data = [[lake_name,river_name,airport_name],[lake_ctry,river_ctry,airport_ctry],[lake_area,river_length,airport_elv]] df = pd.DataFrame(data, columns = ['Largest Lake','Longest River','Highest Airport'],index=['Name','Location (Country Code)','Metric Value']) df Explanation: Largest ethnic groups by population, based on the latest estimates from each country. Finally, we look for the longest river, largest lake, and highest airport. We can take advantage of the intelligent attributes included in the database already. Playing around with the river elements, we see that while the long rivers may have multiple 'located' subelements, for each country, the river element itself has a country attribute which lists the country codes all together. This simplifies the problem. We assume there are no ties... simply because it's a bit quicker and because the coincidence seems a bit ridiculous. End of explanation
9,251
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting Started This tutorial describes how to use Pandas-TD in Jupyter to explore data interactively. Set your API key to the environment variable TD_API_KEY and run "jupyter notebook" Step1: You can run a query by read_td Step2: Or you can read an existing job result by read_td_job Step3: You can read a table into a DataFrame by read_td_table, optionally with specific time range and limit Step4: Importing a DataFrame into a table is also supported by to_td Step5: Note that to_td currently uses Streaming API for imports. It takes more than a few seconds before your data become visible. Be patient. Step6: create_engine Step7: Examples Step8: create_engine uses "default" connection if apikey and host are omitted. In this case, the environment variables "TD_API_KEY" and "TD_API_SERVER" are used to initialize a connection Step9: If you prefer initializing a connection with custom parameters, you can use connect Step10: read_td_query Step11: Examples Step12: read_td_job Step13: Examples Step14: read_td_table Step15: Examples Step16: to_td Step17: Examples Step18: to_td fails if table already exists Step19: Use index=False if you don't need to insert DataFrame index Step20: to_td inserts the current time as "time" column. You can pass "time" column explicitly by time_col Step21: If you are using a time series index, set time_index=0
Python Code: %matplotlib inline import os import pandas_td as td # Set engine type and database, using the default connection engine = td.create_engine('presto:sample_datasets') # Alternatively, initialize a connection explicitly con = td.connect(apikey=os.environ['TD_API_KEY'], endpoint=os.environ['TD_API_SERVER']) engine = td.create_engine('presto:sample_datasets', con=con) Explanation: Getting Started This tutorial describes how to use Pandas-TD in Jupyter to explore data interactively. Set your API key to the environment variable TD_API_KEY and run "jupyter notebook": $ export TD_API_SERVER="https://api.treasuredata.com/" $ export TD_API_KEY="1234/abcd..." $ jupyter notebook You can connect to your database by create_engine: End of explanation query = ''' select * from nasdaq limit 3 ''' td.read_td(query, engine) Explanation: You can run a query by read_td: End of explanation td.read_td_job(35809747, engine) Explanation: Or you can read an existing job result by read_td_job: End of explanation # Read from a table with time range df = td.read_td_table('nasdaq', engine, time_range=('2000-01-01', '2010-01-01'), limit=10000) Explanation: You can read a table into a DataFrame by read_td_table, optionally with specific time range and limit: End of explanation # Create a DataFrame with random values df = pd.DataFrame(np.random.rand(3, 3), columns=['x', 'y', 'z']) # Import it into 'tutorial.import1' con = td.connect() td.to_td(df, 'tutorial.import1', con, if_exists='replace', index=False) Explanation: Importing a DataFrame into a table is also supported by to_td: End of explanation # Check the result td.read_td_table('tutorial.import1', engine) Explanation: Note that to_td currently uses Streaming API for imports. It takes more than a few seconds before your data become visible. Be patient. End of explanation help(td.create_engine) Explanation: create_engine End of explanation # presto engine = td.create_engine('presto://[email protected]/sample_datasets') # hive engine = td.create_engine('hive://[email protected]/sample_datasets') Explanation: Examples End of explanation # use default connection (TD_API_KEY is used) engine = td.create_engine('presto:sample_datasets') Explanation: create_engine uses "default" connection if apikey and host are omitted. 
In this case, the environment variables "TD_API_KEY" and "TD_API_SERVER" are used to initialize a connection: End of explanation # create a connection with detailed parameters (via tdclient.Client) # See https://github.com/treasure-data/td-client-python/blob/master/tdclient/api.py con = td.connect(apikey=os.environ['TD_API_KEY'], endpoint=os.environ['TD_API_SERVER'], retry_post_requests=True) engine = td.create_engine('presto:sample_datasets', con=con) Explanation: If you prefer initializing a connection with custom parameters, you can use connect: End of explanation help(td.read_td_query) Explanation: read_td_query End of explanation query = ''' select time, close from nasdaq where symbol='AAPL' ''' # Run a query, converting "time" to a time series index df = td.read_td_query(query, engine, index_col='time', parse_dates={'time': 's'}) df.plot() Explanation: Examples End of explanation help(td.read_td_job) Explanation: read_td_job End of explanation import tdclient # Before using read_td_job, you need to issue a job separately client = tdclient.Client() job = client.query("sample_datasets", "select time, close from nasdaq where symbol='AAPL'", type="presto") # Get result and convert it to dataframe df = td.read_td_job(job.id, engine, index_col='time', parse_dates={'time': 's'}) df.plot() Explanation: Examples End of explanation help(td.read_td_table) Explanation: read_td_table End of explanation # Read all records (up to 10,000 rows by default) df = td.read_td_table("www_access", engine) df.head(3) Explanation: Examples End of explanation help(td.to_td) Explanation: to_td End of explanation # Create DataFrame with random values df = pd.DataFrame(np.random.rand(3, 3), columns=['x', 'y', 'z']) Explanation: Examples End of explanation td.to_td(df, 'tutorial.import1', con) # Set "if_exists" to 'replace' or 'append' td.to_td(df, 'tutorial.import1', con, if_exists='replace') Explanation: to_td fails if table already exists: End of explanation td.to_td(df, 'tutorial.import1', con, if_exists='replace', index=False) Explanation: Use index=False if you don't need to insert DataFrame index: End of explanation import datetime df = pd.DataFrame(np.random.rand(3, 3), columns=['x', 'y', 'z']) # Set "time" column explicitly df['time'] = datetime.datetime.now() # Use "time" as the time column in Treasure Data td.to_td(df, 'tutorial.import1', con, if_exists='replace', index=False, time_col='time') Explanation: to_td inserts the current time as "time" column. You can pass "time" column explicitly by time_col: End of explanation df = pd.DataFrame(np.random.rand(3, 3), columns=['x', 'y', 'z']) # Set time series index df.index = pd.date_range('2001-01-01', periods=3) # Use index as the time column in Treasure Data td.to_td(df, 'tutorial.import1', con, if_exists='replace', index=False, time_index=0) Explanation: If you are using a time series index, set time_index=0: End of explanation
9,252
Given the following text description, write Python code to implement the functionality described below step by step Description: Characterizing Context of Attacks Step1: Q Step2: Methodology 2 Step3: Q
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import warnings warnings.filterwarnings('ignore') import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd from load_utils import * d = load_diffs() df_events, df_blocked_user_text = load_block_events_and_users() Explanation: Characterizing Context of Attacks: Are attacks isolated events, or do they occur in series? Are the product of provocateurs or a toxic environment? Do they occur on certain topics ? Is toxic behaviour reciprocated ? End of explanation pairs = d['2015'].query('not own_page and not author_anon and not recipient_anon')\ .groupby(['user_text', 'page_title'], as_index = False)['pred_aggression_score']\ .agg({'aggresssivness': np.mean, 'count': len})\ .query('count > 5')\ .assign(key = lambda x: 'From:' + x['user_text'] + ' to:' + x['page_title'], partner_key = lambda x: 'From:' + x['page_title'] + ' to:' + x['user_text'] ) pairs = pairs.merge(pairs, left_on = 'partner_key', right_on = 'key', how = 'inner' ) sns.jointplot(x = 'aggresssivness_x', y = 'aggresssivness_y', data = pairs) t_angry = np.percentile(pairs['aggresssivness_x'], 95) t_friendly = np.percentile(pairs['aggresssivness_y'], 5) sns.distplot(pairs.query('aggresssivness_x > %f' % t_angry)['aggresssivness_y'], hist=False, label = 'Angry A->B') sns.distplot(pairs.query('aggresssivness_x < %f' % t_friendly)['aggresssivness_y'], hist=False, label = 'Friendly A->B') plt.xlabel('Aggresiveness B->A') Explanation: Q: Is tone reciprocal? Methodology 1: is the average aggression score of what A says on B's page related to the average score of what B says on A's page? End of explanation cols = ['user_text', 'page_title', 'pred_aggression_score', 'rev_timestamp', 'rev_id'] ab = d['2015'].query('not own_page and not author_anon and not recipient_anon')[cols] ba = ab.copy().rename(columns = {'user_text': 'page_title', 'page_title': 'user_text'})[cols] micro_pairs = ab.merge(ba, on = ['user_text', 'page_title'], how = 'inner' )\ .assign(delta = lambda x: x['rev_timestamp_x'] - x['rev_timestamp_y'])\ .assign(delta_positive = lambda x: x.delta > pd.Timedelta('0 seconds'), delta_less_30 = lambda x: x.delta < pd.Timedelta('30 days'))\ .query('delta_positive and delta_less_30')\ .sort('delta', ascending=False)\ .groupby('rev_id_x', as_index=False).first() sns.jointplot(x = 'pred_aggression_score_x', y = 'pred_aggression_score_y', data = micro_pairs) t_friendly, t_neutral, t_angry = np.percentile(micro_pairs['pred_aggression_score_x'], (5, 50, 95)) sns.distplot(micro_pairs.query('pred_aggression_score_x > %f' % t_angry)['pred_aggression_score_y'], hist=False, label = 'Angry A->B') sns.distplot(micro_pairs.query('pred_aggression_score_x < %f' % t_friendly)['pred_aggression_score_y'], hist=False, label = 'Friendly A->B') plt.xlabel('Aggression B->A') Explanation: Methodology 2: is the aggression score of what A says on B's page related to the score of the next thing B says on A's page? 
End of explanation out_score = d['2015'].query('not own_page and not author_anon and not recipient_anon')\ .groupby(['user_text'], as_index = False)['pred_aggression_score']\ .agg({'out_score': np.mean, 'count': len})\ .query('count > 5') in_score = d['2015'].query('not own_page and not author_anon and not recipient_anon')\ .groupby(['page_title'], as_index = False)['pred_aggression_score']\ .agg({'in_score': np.mean, 'count': len})\ .query('count > 5')\ .rename(columns = {'page_title':'user_text'}) in_out = out_score.merge(in_score, how = 'inner', on = 'user_text') in_out['saintliness'] = in_out['out_score'] - in_out['in_score'] sns.jointplot(x = 'in_score', y = 'out_score', data = in_out) sns.distplot(in_out['saintliness'].dropna(), kde =False, norm_hist = True) # Saints in_out.sort_values('saintliness').head(5) # Saints in_out.sort_values('saintliness').query('in_score > 0 and out_score < 0' ).head(5) #d['2015'].query("user_text == 'Parenchyma18'") # Saints in_out.sort_values('saintliness').query('in_score > 0 and out_score < 0' ).head(5) # Provocateurs in_out.sort_values('saintliness', ascending = False).head(5) # Provocateurs in_out.sort_values('saintliness', ascending = False).query('out_score > 0 and in_score < 0').head(5) Explanation: Q: Saintliness vs. Provocativeness End of explanation
9,253
Given the following text description, write Python code to implement the functionality described below step by step Description: QInfer Step1: Applications in Quantum Information Phase and Frequency Learning Step2: State and Process Tomography Step3: Randomized Benchmarking Step4: Additional Functionality Derived Models Step5: Time-Dependent Models Step6: Performance and Robustness Testing Step8: Parallelization Here, we demonstrate parallelization with ipyparallel and the DirectViewParallelizedModel model. First, create a model which is not designed to be useful, but rather to be expensive to evaluate a single likelihood. Step9: Now, we can use Jupyter's %timeit magic to see how long it takes, for example, to compute the likelihood 5x1000x10=50000 times. Step10: Next, we initialize the Client which communicates with the parallel processing engines. In the accompaning paper, this code was run on a single machine with dual "Intel(R) Xeon(R) CPU X5675 @ 3.07GHz" processors, for a total of 12 physical cores, and therefore, 24 engines were online. Step11: Finally, we run the parallel tests, looping over different numbers of engines used. Step12: And plot the results. Step13: Appendices Custom Models
Python Code: from __future__ import division, print_function %matplotlib inline from qinfer import * import os import numpy as np from scipy.linalg import expm import matplotlib.pyplot as plt try: plt.style.use('ggplot-rq') except IOError: try: plt.style.use('ggplot') except: raise RuntimeError('Cannot set the style. Likely cause is out of date matplotlib; >= 1.4 required.') paperfig_path = os.path.abspath(os.path.join('..', 'fig')) def paperfig(name): plt.savefig(os.path.join(paperfig_path, name + '.png'), format='png', dpi=200) plt.savefig(os.path.join(paperfig_path, name + '.pdf'), format='pdf', bbox_inches='tight') Explanation: QInfer: Statistical inference software for quantum applications Examples and Figures Christopher Granade, Christopher Ferrie, Ian Hincks, Steven Casagrande, Thomas Alexander, Jonathan Gross, Michal Kononenko, Yuval Sanders Preamble This section contains commands needed for formatting as a Jupyter Notebook. End of explanation >>> from qinfer import * >>> model = SimplePrecessionModel() >>> prior = UniformDistribution([0, 1]) >>> n_particles = 2000 >>> n_experiments = 100 >>> updater = SMCUpdater(model, n_particles, prior) >>> heuristic = ExpSparseHeuristic(updater) >>> true_params = prior.sample() >>> for idx_experiment in range(n_experiments): ... experiment = heuristic() ... datum = model.simulate_experiment(true_params, experiment) ... updater.update(datum, experiment) >>> print(updater.est_mean()) model = SimplePrecessionModel() prior = UniformDistribution([0, 1]) updater = SMCUpdater(model, 2000, prior) heuristic = ExpSparseHeuristic(updater) true_params = prior.sample() est_hist = [] for idx_experiment in range(100): experiment = heuristic() datum = model.simulate_experiment(true_params, experiment) updater.update(datum, experiment) est_hist.append(updater.est_mean()) plt.plot(est_hist, label='Est.') plt.hlines(true_params, 0, 100, label='True') plt.legend(ncol=2) plt.xlabel('# of Experiments Performed') plt.ylabel(r'$\omega$') paperfig('freq-est-updater-loop') Explanation: Applications in Quantum Information Phase and Frequency Learning End of explanation import matplotlib os.path.join(matplotlib.get_configdir(), 'stylelib') >>> from qinfer import * >>> from qinfer.tomography import * >>> basis = pauli_basis(1) # Single-qubit Pauli basis. 
>>> model = TomographyModel(basis) >>> prior = GinibreReditDistribution(basis) >>> updater = SMCUpdater(model, 8000, prior) >>> heuristic = RandomPauliHeuristic(updater) >>> true_state = prior.sample() >>> >>> for idx_experiment in range(500): >>> experiment = heuristic() >>> datum = model.simulate_experiment(true_state, experiment) >>> updater.update(datum, experiment) >>> >>> plt.figure(figsize=(4, 4)) >>> plot_rebit_posterior(updater, true_state=true_state, rebit_axes=[1, 3], legend=False, region_est_method='hull') >>> plt.legend(ncol=1, numpoints=1, scatterpoints=1, bbox_to_anchor=(1.9, 0.5), loc='right') >>> plt.xticks([-1, 0, 1]) >>> plt.yticks([-1, 0, 1]) >>> plt.xlabel(r'$\operatorname{Tr}(\sigma_x \rho)$') >>> plt.ylabel(r'$\operatorname{Tr}(\sigma_z \rho)$') >>> paperfig('rebit-tomo') Explanation: State and Process Tomography End of explanation >>> from qinfer import * >>> import numpy as np >>> p, A, B = 0.95, 0.5, 0.5 >>> ms = np.linspace(1, 800, 201).astype(int) >>> signal = A * p ** ms + B >>> n_shots = 25 >>> counts = np.random.binomial(p=signal, n=n_shots) >>> data = np.column_stack([counts, ms, n_shots * np.ones_like(counts)]) >>> mean, cov = simple_est_rb(data, n_particles=12000, p_min=0.8) >>> print(mean, np.sqrt(np.diag(cov))) from qinfer import * import numpy as np p, A, B = 0.95, 0.5, 0.5 ms = np.linspace(1, 800, 201).astype(int) signal = A * p ** ms + B n_shots = 25 counts = np.random.binomial(p=signal, n=n_shots) data = np.column_stack([counts, ms, n_shots * np.ones_like(counts)]) mean, cov, extra = simple_est_rb(data, n_particles=12000, p_min=0.8, return_all=True) fig, axes = plt.subplots(ncols=2, figsize=(8, 3)) plt.sca(axes[0]) extra['updater'].plot_posterior_marginal(range_max=1) plt.xlim(xmax=1) ylim = plt.ylim(ymin=0) plt.vlines(p, *ylim) plt.ylim(*ylim); plt.legend(['Posterior', 'True'], loc='upper left', ncol=1) plt.sca(axes[1]) extra['updater'].plot_covariance() paperfig('rb-simple-est') Explanation: Randomized Benchmarking End of explanation >>> from qinfer import * >>> import numpy as np >>> model = BinomialModel(SimplePrecessionModel()) >>> n_meas = 25 >>> prior = UniformDistribution([0, 1]) >>> updater = SMCUpdater(model, 2000, prior) >>> true_params = prior.sample() >>> for t in np.linspace(0.1,20,20): ... experiment = np.array([(t, n_meas)], dtype=model.expparams_dtype) ... datum = model.simulate_experiment(true_params, experiment) ... updater.update(datum, experiment) >>> print(updater.est_mean()) model = BinomialModel(SimplePrecessionModel()) n_meas = 25 prior = UniformDistribution([0, 1]) updater = SMCUpdater(model, 2000, prior) heuristic = ExpSparseHeuristic(updater) true_params = prior.sample() est_hist = [] for t in np.linspace(0.1, 20, 20): experiment = np.array([(t, n_meas)], dtype=model.expparams_dtype) datum = model.simulate_experiment(true_params, experiment) updater.update(datum, experiment) est_hist.append(updater.est_mean()) plt.plot(est_hist, label='Est.') plt.hlines(true_params, 0, 20, label='True') plt.legend(ncol=2) plt.xlabel('# of Times Sampled (25 measurements/ea)') plt.ylabel(r'$\omega$') paperfig('derived-model-updater-loop') Explanation: Additional Functionality Derived Models End of explanation >>> from qinfer import * >>> import numpy as np >>> prior = UniformDistribution([0, 1]) >>> true_params = np.array([[0.5]]) >>> n_particles = 2000 >>> model = RandomWalkModel( ... 
BinomialModel(SimplePrecessionModel()), NormalDistribution(0, 0.01**2)) >>> updater = SMCUpdater(model, n_particles, prior) >>> t = np.pi / 2 >>> n_meas = 40 >>> expparams = np.array([(t, n_meas)], dtype=model.expparams_dtype) >>> for idx in range(1000): ... datum = model.simulate_experiment(true_params, expparams) ... true_params = np.clip(model.update_timestep(true_params, expparams)[:, :, 0], 0, 1) ... updater.update(datum, expparams) prior = UniformDistribution([0, 1]) true_params = np.array([[0.5]]) model = RandomWalkModel(BinomialModel(SimplePrecessionModel()), NormalDistribution(0, 0.01**2)) updater = SMCUpdater(model, 2000, prior) expparams = np.array([(np.pi / 2, 40)], dtype=model.expparams_dtype) data_record = [] trajectory = [] estimates = [] for idx in range(1000): datum = model.simulate_experiment(true_params, expparams) true_params = np.clip(model.update_timestep(true_params, expparams)[:, :, 0], 0, 1) updater.update(datum, expparams) data_record.append(datum) trajectory.append(true_params[0, 0]) estimates.append(updater.est_mean()[0]) ts = 40 * np.pi / 2 * np.arange(len(data_record)) / 1e3 plt.plot(ts, trajectory, label='True') plt.plot(ts, estimates, label='Estimated') plt.xlabel(u'$t$ (ยตs)') plt.ylabel(r'$\omega$ (GHz)') plt.legend(ncol=2) paperfig('time-dep-rabi') Explanation: Time-Dependent Models End of explanation performance = perf_test_multiple( # Use 100 trials to estimate expectation over data. 100, # Use a simple precession model both to generate, # data, and to perform estimation. SimplePrecessionModel(), # Use 2,000 particles and a uniform prior. 2000, UniformDistribution([0, 1]), # Take 50 measurements with $t_k = ab^k$. 50, ExpSparseHeuristic ) # Calculate the Bayes risk by taking a mean over the trial index. risk = np.mean(performance['loss'], axis=0) plt.semilogy(risk) plt.xlabel('# of Experiments Performed') plt.ylabel('Bayes Risk') paperfig('bayes-risk') Explanation: Performance and Robustness Testing End of explanation class ExpensiveModel(FiniteOutcomeModel): The likelihood of this model randomly generates a dim-by-dim conjugate-symmetric matrix for every expparam and modelparam, exponentiates it, and returns the overlap with the |0> state. def __init__(self, dim=36): super(ExpensiveModel, self).__init__() self.dim=dim @property def n_modelparams(self): return 2 @property def expparams_dtype(self): return 'float' def n_outcomes(self, expparams): return 2 def are_models_valid(self, mps): return np.ones(mps.shape).astype(bool) def prob(self): # random symmetric matrix mat = np.random.rand(self.dim, self.dim) mat += mat.T # and exponentiate resulting square matrix mat = expm(1j * mat) # compute overlap with |0> state return np.abs(mat[0,0])**2 def likelihood(self, outcomes, mps, eps): # naive for loop. pr0 = np.empty((mps.shape[0], eps.shape[0])) for idx_eps in range(eps.shape[0]): for idx_mps in range(mps.shape[0]): pr0[idx_mps, idx_eps] = self.prob() # compute the prob of each outcome by taking pr0 or 1-pr0 return FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, pr0) Explanation: Parallelization Here, we demonstrate parallelization with ipyparallel and the DirectViewParallelizedModel model. First, create a model which is not designed to be useful, but rather to be expensive to evaluate a single likelihood. 
End of explanation emodel = ExpensiveModel(dim=16) %timeit -q -o -n1 -r1 emodel.likelihood(np.array([0,1,0,0,1]), np.zeros((1000,1)), np.zeros((10,1))) Explanation: Now, we can use Jupyter's %timeit magic to see how long it takes, for example, to compute the likelihood 5x1000x10=50000 times. End of explanation # Do not demand that ipyparallel be installed, or ipengines be running; # instead, fail silently. run_parallel = True try: from ipyparallel import Client import dill rc = Client() # set profile here if desired dview = rc[:] dview.execute('from qinfer import *') dview.execute('from scipy.linalg import expm') print("Number of engines available: {}".format(len(dview))) except: run_parallel = False print('Parallel Engines or libraries could not be initialized; Parallel section will not be evaluated.') Explanation: Next, we initialize the Client which communicates with the parallel processing engines. In the accompaning paper, this code was run on a single machine with dual "Intel(R) Xeon(R) CPU X5675 @ 3.07GHz" processors, for a total of 12 physical cores, and therefore, 24 engines were online. End of explanation if run_parallel: par_n_particles = 5000 par_test_outcomes = np.array([0,1,0,0,1]) par_test_modelparams = np.zeros((par_n_particles, 1)) # only the shape matters par_test_expparams = np.zeros((10, 1)) # only the shape matters def compute_L(model): model.likelihood(par_test_outcomes, par_test_modelparams, par_test_expparams) serial_time = %timeit -q -o -n1 -r1 compute_L(emodel) serial_time = serial_time.all_runs[0] n_engines = np.arange(2,len(dview)+1,2) par_time = np.zeros(n_engines.shape[0]) for idx_ne, ne in enumerate(n_engines): dview_test = rc[:ne] dview_test.use_dill() par_model = DirectViewParallelizedModel(emodel, dview_test, serial_threshold=1) result = %timeit -q -o -n1 -r1 compute_L(par_model) par_time[idx_ne] = result.all_runs[0] Explanation: Finally, we run the parallel tests, looping over different numbers of engines used. End of explanation if run_parallel: fig = plt.figure() plt.plot(np.concatenate([[1], n_engines]), np.concatenate([[serial_time], par_time])/serial_time,'-o') ax = plt.gca() ax.set_xscale('log', basex=2) ax.set_yscale('log', basey=2) plt.xlim([0.8, np.max(n_engines)+2]) plt.ylim([2**-4,1.2]) plt.xlabel('# Engines') plt.ylabel('Normalized Computation Time') par_xticks = [1,2,4,8,12,16,24] ax.set_xticks(par_xticks) ax.set_xticklabels(par_xticks) paperfig('parallel-likelihood') Explanation: And plot the results. 
End of explanation from qinfer import FiniteOutcomeModel import numpy as np class MultiCosModel(FiniteOutcomeModel): @property def n_modelparams(self): return 2 @property def is_n_outcomes_constant(self): return True def n_outcomes(self, expparams): return 2 def are_models_valid(self, modelparams): return np.all(np.logical_and(modelparams > 0, modelparams <= 1), axis=1) @property def expparams_dtype(self): return [('ts', 'float', 2)] def likelihood(self, outcomes, modelparams, expparams): super(MultiCosModel, self).likelihood(outcomes, modelparams, expparams) pr0 = np.empty((modelparams.shape[0], expparams.shape[0])) w1, w2 = modelparams.T t1, t2 = expparams['ts'].T for idx_model in range(modelparams.shape[0]): for idx_experiment in range(expparams.shape[0]): pr0[idx_model, idx_experiment] = ( np.cos(w1[idx_model] * t1[idx_experiment] / 2) * np.cos(w2[idx_model] * t2[idx_experiment] / 2) ) ** 2 return FiniteOutcomeModel.pr0_to_likelihood_array(outcomes, pr0) >>> mcm = MultiCosModel() >>> modelparams = np.dstack(np.mgrid[0:1:100j,0:1:100j]).reshape(-1, 2) >>> expparams = np.empty((81,), dtype=mcm.expparams_dtype) >>> expparams['ts'] = np.dstack(np.mgrid[1:10,1:10] * np.pi / 2).reshape(-1, 2) >>> D = mcm.simulate_experiment(modelparams, expparams, repeat=2) >>> print(isinstance(D, np.ndarray)) True >>> D.shape == (2, 10000, 81) True Explanation: Appendices Custom Models End of explanation
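Because the QInfer objects above hide the Bayesian update itself, here is a dependency-free sketch of the same sequential inference loop for the precession frequency. It uses a discretized grid over omega instead of QInfer's SMC particle filter, and the sin^2(omega t / 2) likelihood and the exponential time schedule are illustrative assumptions rather than QInfer's exact conventions.
import numpy as np

# Assumed ideal precession likelihood: Pr(outcome 1 | omega, t) = sin^2(omega * t / 2).
def pr1(omega, t):
    return np.sin(omega * t / 2.0) ** 2

rng = np.random.default_rng(1234)
omega_grid = np.linspace(0, 1, 2001)                 # discretized analogue of UniformDistribution([0, 1])
posterior = np.full(omega_grid.shape, 1.0 / omega_grid.size)
true_omega = rng.uniform(0, 1)

for k in range(50):
    t = (9.0 / 8.0) ** k                             # exponentially growing evolution times (illustrative base)
    outcome = rng.random() < pr1(true_omega, t)      # simulate one single-shot experiment
    likelihood = pr1(omega_grid, t) if outcome else 1.0 - pr1(omega_grid, t)
    posterior *= likelihood                          # Bayes rule on the grid ...
    posterior /= posterior.sum()                     # ... followed by renormalization

mean = np.sum(omega_grid * posterior)
std = np.sqrt(np.sum((omega_grid - mean) ** 2 * posterior))
print('true omega = %.4f, posterior mean = %.4f +/- %.4f' % (true_omega, mean, std))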
9,254
Given the following text description, write Python code to implement the functionality described below step by step Description: Finite Time of Integration (fti) Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). Step1: As always, let's do imports and initialize a logger and a new bundle. Step2: Relevant Parameters An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually. Step3: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary. Step4: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute(). Step5: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5. Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled. Step6: Influence on Light Curves Step7: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse.
Python Code: #!pip install -I "phoebe>=2.3,<2.4" Explanation: Finite Time of Integration (fti) Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01') Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation print(b['exptime']) Explanation: Relevant Parameters An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually. End of explanation b['exptime'] = 1, 'hr' Explanation: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary. End of explanation print(b['fti_method']) b['fti_method'] = 'oversample' Explanation: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute(). End of explanation print(b['fti_oversample']) Explanation: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5. Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled. End of explanation b.run_compute(fti_method='none', irrad_method='none', model='fti_off') b.run_compute(fti_method='oversample', irrad_method='none', model='fit_on') Explanation: Influence on Light Curves End of explanation afig, mplfig = b.plot(show=True, legend=True) Explanation: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse. End of explanation
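For readers without a PHOEBE install, the following sketch shows what the 'oversample' option is doing conceptually: each time stamp is treated as mid-exposure, the model is evaluated at several points across the exposure window, and the fluxes are averaged. toy_flux and oversampled_fluxes are illustrative names and a toy eclipse-like model, not PHOEBE API.
import numpy as np

def oversampled_fluxes(flux_model, times, exptime, n_oversample=5):
    # Evaluate the model at n_oversample points spread across each exposure
    # window and average them into a single flux per observation.
    times = np.asarray(times, dtype=float)
    offsets = np.linspace(-exptime / 2.0, exptime / 2.0, n_oversample)
    sample_times = times[:, None] + offsets[None, :]          # shape (n_times, n_oversample)
    return flux_model(sample_times).mean(axis=1)

# Toy flux model with a sharp eclipse-like dip at phase 0.5 (purely illustrative).
def toy_flux(t):
    phase = t % 1.0
    return 1.0 - 0.3 * np.exp(-0.5 * ((phase - 0.5) / 0.01) ** 2)

times = np.linspace(0, 1, 101)
sharp = toy_flux(times)
smeared = oversampled_fluxes(toy_flux, times, exptime=1.0 / 24.0)  # 1-hour exposure on a 1-day period
print('eclipse depth without fti: %.3f, with fti: %.3f' % (1 - sharp.min(), 1 - smeared.min()))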
9,255
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Learning Assignment 2 Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset. The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow. Step1: First reload the data we generated in 1_notmnist.ipynb. Step2: Reformat into a shape that's more adapted to the models we're going to train Step3: We're first going to train a multinomial logistic regression using simple gradient descent. TensorFlow works like this Step4: Let's run this computation and iterate Step5: Let's now switch to stochastic gradient descent training instead, which is much faster. The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run(). Step6: Let's run it Step7: Problem Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy.
Python Code: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import cPickle as pickle import numpy as np import tensorflow as tf Explanation: Deep Learning Assignment 2 Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset. The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow. End of explanation pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape Explanation: First reload the data we generated in 1_notmist.ipynb. End of explanation image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape Explanation: Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings. End of explanation # With gradient descent training, even this much data is prohibitive. # Subset the training data for faster turnaround. train_subset = 10000 graph = tf.Graph() with graph.as_default(): # Input data. # Load the training, validation and test data into constants that are # attached to the graph. tf_train_dataset = tf.constant(train_dataset[:train_subset, :]) tf_train_labels = tf.constant(train_labels[:train_subset]) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. # These are the parameters that we are going to be training. The weight # matrix will be initialized using random valued following a (truncated) # normal distribution. The biases get initialized to zero. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. # We multiply the inputs with the weight matrix, and add biases. We compute # the softmax and cross-entropy (it's one operation in TensorFlow, because # it's very common, and it can be optimized). We take the average of this # cross-entropy across all training examples: that's our loss. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. # We are going to find the minimum of this loss using gradient descent. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. 
# These are not part of training, but merely here so that we can report # accuracy figures as we train. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) Explanation: We're first going to train a multinomial logistic regression using simple gradient descent. TensorFlow works like this: * First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below: with graph.as_default(): ... Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below: with tf.Session(graph=graph) as session: ... Let's load all the data into TensorFlow and build the computation graph corresponding to our training: End of explanation num_steps = 801 def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) with tf.Session(graph=graph) as session: # This is a one-time operation which ensures the parameters get initialized as # we described in the graph: random weights for the matrix, zeros for the # biases. tf.initialize_all_variables().run() print 'Initialized' for step in xrange(num_steps): # Run the computations. We tell .run() that we want to run the optimizer, # and get the loss value and the training predictions returned as numpy # arrays. _, l, predictions = session.run([optimizer, loss, train_prediction]) if (step % 100 == 0): print 'Loss at step', step, ':', l print 'Training accuracy: %.1f%%' % accuracy( predictions, train_labels[:train_subset, :]) # Calling .eval() on valid_prediction is basically like calling run(), but # just to get that one numpy array. Note that it recomputes all its graph # dependencies. print 'Validation accuracy: %.1f%%' % accuracy( valid_prediction.eval(), valid_labels) print 'Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels) Explanation: Let's run this computation and iterate: End of explanation batch_size = 128 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster. 
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of sesion.run(). End of explanation num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print "Initialized" for step in xrange(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print "Minibatch loss at step", step, ":", l print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels) print "Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels) print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels) Explanation: Let's run it: End of explanation batch_size = 128 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. weights1 = tf.Variable( tf.truncated_normal([image_size * image_size, 1024])) biases1 = tf.Variable(tf.zeros([1024])) weights2 = tf.Variable( tf.truncated_normal([1024,10])) biases2 = tf.Variable(tf.zeros([10])) #tf.nn.relu_layer # Training computation. hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1) logits = tf.matmul(hidden, weights2) + biases2 loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights1) + biases1),weights2)+biases2) test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights1) + biases1),weights2)+biases2) num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print "Initialized" for step in xrange(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. 
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print "Minibatch loss at step", step, ":", l print "Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels) print "Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels) print "Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels) Explanation: Problem Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy. End of explanation
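As a framework-agnostic companion to the 1-hidden-layer problem above, the following NumPy sketch spells out the ReLU + softmax + cross-entropy forward pass that the TensorFlow graph builds. The weights here are random stand-ins rather than trained parameters, and the shapes simply follow the notebook (28x28 inputs, 1024 hidden units, 10 labels); nothing here is TensorFlow API.
import numpy as np

rng = np.random.default_rng(0)
image_size, num_hidden, num_labels, batch_size = 28, 1024, 10, 128

# Random parameters standing in for the trained tf.Variables.
W1 = rng.normal(scale=0.05, size=(image_size * image_size, num_hidden))
b1 = np.zeros(num_hidden)
W2 = rng.normal(scale=0.05, size=(num_hidden, num_labels))
b2 = np.zeros(num_labels)

def forward(X):
    hidden = np.maximum(X @ W1 + b1, 0.0)           # ReLU, as in tf.nn.relu
    logits = hidden @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)     # stabilize the softmax
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels_onehot):
    return -np.mean(np.sum(labels_onehot * np.log(probs + 1e-12), axis=1))

X = rng.normal(size=(batch_size, image_size * image_size)).astype(np.float32)
y = np.eye(num_labels)[rng.integers(0, num_labels, size=batch_size)]
print('loss on random weights: %.3f (about log(10) = %.3f)' % (cross_entropy(forward(X), y), np.log(10)))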
9,256
Given the following text description, write Python code to implement the functionality described below step by step Description: MLE fit for two component binding - simulated and real data In part one of this notebook we see how well we can reproduce Kd from simulated experimental data with a maximum likelihood function. In part two of this notebook we see how well it can estimate the Kd from real experimental binding data. Step2: Part I We use the same setup here as we do in the 'Simulating Experimental Fluorescence Binding Data' notebook. Experimentally we won't know the Kd, but we know the Ptot and Ltot concentrations. Step3: Now make this a fluorescence experiment. Step4: Part II Now we will see how well this does for real data.
Python Code: import numpy as np import matplotlib.pyplot as plt from scipy import optimize import seaborn as sns %pylab inline Explanation: MLE fit for two component binding - simulated and real data In part one of this notebook we see how well we can reproduce Kd from simulated experimental data with a maximum likelihood function. In part two of this notebook we see how well it can estimate the Kd from real experimental binding data. End of explanation Kd = 2e-9 # M Ptot = 1e-9 * np.ones([12],np.float64) # M Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M def two_component_binding(Kd, Ptot, Ltot): Parameters ---------- Kd : float Dissociation constant Ptot : float Total protein concentration Ltot : float Total ligand concentration Returns ------- P : float Free protein concentration L : float Free ligand concentration PL : float Complex concentration PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM) P = Ptot - PL; # free protein concentration in sample cell after n injections (uM) L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM) return [P, L, PL] [L, P, PL] = two_component_binding(Kd, Ptot, Ltot) # y will be complex concentration # x will be total ligand concentration plt.semilogx(Ltot,PL, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$[PL]$ / M') plt.ylim(0,1.3e-9) plt.axhline(Ptot[0],color='0.75',linestyle='--',label='$[P]_{tot}$') plt.legend(); Explanation: Part I We use the same setup here as we do in the 'Simulating Experimental Fluorescence Binding Data' notebook. Experimentally we won't know the Kd, but we know the Ptot and Ltot concentrations. End of explanation # Making max 400 relative fluorescence units, and scaling all of PL to that npoints = len(Ltot) sigma = 10.0 # size of noise F_i = (400/1e-9)*PL + sigma * np.random.randn(npoints) #Pstated = np.ones([npoints],np.float64)*Ptot #Lstated = Ltot # y will be complex concentration # x will be total ligand concentration plt.semilogx(Ltot,F_i, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); #And makeup an F_L F_L = 0.3 F_i def find_Kd_from_fluorescence(params): [F_background, F_PL, Kd] = params N = len(Ltot) Fmodel_i = np.zeros([N]) for i in range(N): [P, L, PL] = two_component_binding(Kd, Ptot[0], Ltot[i]) Fmodel_i[i] = (F_PL*PL + F_L*L) + F_background return Fmodel_i 400/1E-9 initial_guess = [0,400/1e-9,2e-9] prediction = find_Kd_from_fluorescence(initial_guess) plt.semilogx(Ltot,prediction,color='k') plt.semilogx(Ltot,F_i, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); def sumofsquares(params): prediction = find_Kd_from_fluorescence(params) return np.sum((prediction - F_i)**2) initial_guess = [0,3E11,1E-9] fit = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead') print "The predicted parameters are", fit.x fit_prediction = find_Kd_from_fluorescence(fit.x) plt.semilogx(Ltot,fit_prediction,color='k') plt.semilogx(Ltot,F_i, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); Kd_MLE = fit.x[2] if (Kd_MLE < 1e-12): Kd_summary = "Kd = %.1f nM " % (Kd_MLE/1e-15) elif (Kd_MLE < 1e-9): Kd_summary = "Kd = %.1f pM " % (Kd_MLE/1e-12) elif (Kd_MLE < 1e-6): Kd_summary = "Kd = %.1f nM " % (Kd_MLE/1e-9) elif (Kd_MLE < 1e-3): Kd_summary = "Kd = %.1f uM " % (Kd_MLE/1e-66) elif (Kd_MLE < 1): Kd_summary = "Kd = %.1f mM " % (Kd_MLE/1e-3) else: Kd_summary = "Kd = %.3e M " % (Kd_MLE) delG_summary = "delG = %s kT" %np.log(Kd_MLE) 
Kd_summary delG_summary Explanation: Now make this a fluorescence experiment. End of explanation # This requires that we import a few new libraries from assaytools import platereader import string Ptot = 0.5e-6 * np.ones([24],np.float64) # protein concentration, M Ltot = np.array([20.0e-6,14.0e-6,9.82e-6,6.88e-6,4.82e-6,3.38e-6,2.37e-6,1.66e-6,1.16e-6,0.815e-6,0.571e-6,0.4e-6,0.28e-6,0.196e-6,0.138e-6,0.0964e-6,0.0676e-6,0.0474e-6,0.0320e-6,0.0240e-6,0.0160e-6,0.0120e-6,0.008e-6,0.00001e-6], np.float64) # ligand concentration, M singlet_file = './data/p38_singlet1_20160420_153238.xml' data = platereader.read_icontrol_xml(singlet_file) #I want the Bosutinib-p38 data from rows I (protein) and J (buffer). data_protein = platereader.select_data(data, '280_480_TOP_120', 'I') data_buffer = platereader.select_data(data, '280_480_TOP_120', 'J') data_protein #Sadly we also need to reorder our data and put it into an array to make the analysis easier #This whole thing should be moved to assaytools.platereader hopefully before too many other people see this. well = dict() for j in string.ascii_uppercase: for i in range(1,25): well['%s' %j + '%s' %i] = i def reorder2list(data,well): sorted_keys = sorted(well.keys(), key=lambda k:well[k]) reorder_data = [] for key in sorted_keys: try: reorder_data.append(data[key]) except: pass reorder_data = [r.replace('OVER','70000') for r in reorder_data] reorder_data = np.asarray(reorder_data,np.float64) return reorder_data reorder_protein = reorder2list(data_protein,well) reorder_buffer = reorder2list(data_buffer,well) reorder_protein plt.semilogx(Ltot,reorder_protein, 'ro', label='PL') plt.semilogx(Ltot,reorder_buffer, 'ko', label='L') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('fluorescence') plt.xlim(5e-9,1.3e-4) plt.legend(loc=2); # for this to work we need to provide some initial values # some of these we already have F_i = reorder_protein #And makeup an F_L F_L = 0.3 # initial guess for [F_background, F_PL, Kd] initial_guess = [0,400/1e-9,2e-9] F_i fit = optimize.minimize(sumofsquares,initial_guess,method='Nelder-Mead') print "The predicted parameters [F_background, F_PL, Kd] are ", fit.x fit.x[0] fit_prediction = find_Kd_from_fluorescence(fit.x) plt.semilogx(Ltot,fit_prediction,color='k') plt.semilogx(Ltot,reorder_protein, 'o') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(); plt.semilogx(Ltot,fit_prediction,color='k', label='prediction') plt.semilogx(Ltot,reorder_protein, 'o', label='data') plt.axhline(fit.x[0],color='k',linestyle='--', label='$[F]_{background}$') plt.axvline(fit.x[2],color='r',linestyle='--', label='$K_d$') plt.xlabel('$[L]_{tot}$ / M') plt.ylabel('$Fluorescence$') plt.legend(loc=2); Kd_summary delG_summary Explanation: Part II Now we will see how well this does for real data. End of explanation
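To complement the point estimate above, here is a self-contained sketch that repackages the same sum-of-squares fit into a function and adds a simple parametric bootstrap to gauge the spread of the fitted Kd. The helper names, the fixed F_L = 0.3, and the sigma = 10 noise level are assumptions carried over from the simulated example, not properties of the assaytools data.
import numpy as np
from scipy import optimize

def two_component_binding(Kd, Ptot, Ltot):
    # Exact equilibrium complex concentration for P + L <-> PL.
    disc = (Ptot + Ltot + Kd) ** 2 - 4 * Ptot * Ltot
    PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt(np.maximum(disc, 0.0)))  # guard against wayward (negative) trial Kd
    return Ptot - PL, Ltot - PL, PL

def fluorescence_model(params, Ptot, Ltot, F_L=0.3):
    F_background, F_PL, Kd = params
    _, L, PL = two_component_binding(Kd, Ptot, Ltot)
    return F_background + F_PL * PL + F_L * L

def fit_Kd(F_obs, Ptot, Ltot, initial_guess=(0.0, 4e11, 2e-9)):
    sse = lambda p: np.sum((fluorescence_model(p, Ptot, Ltot) - F_obs) ** 2)
    return optimize.minimize(sse, initial_guess, method='Nelder-Mead').x

# Synthetic experiment mirroring the simulated data above (sigma = 10 fluorescence units).
rng = np.random.default_rng(7)
Ptot = 1e-9 * np.ones(12)
Ltot = 20.0e-6 / np.array([10 ** (i / 2.0) for i in range(12)])
F_obs = fluorescence_model((0.0, 4e11, 2e-9), Ptot, Ltot) + 10.0 * rng.standard_normal(12)

best = fit_Kd(F_obs, Ptot, Ltot)
# Parametric bootstrap: refit data resimulated from the best fit to gauge the spread of Kd.
boot_Kd = [fit_Kd(fluorescence_model(best, Ptot, Ltot) + 10.0 * rng.standard_normal(12), Ptot, Ltot)[2]
           for _ in range(200)]
print('Kd MLE = %.2e M, bootstrap 95%% interval = (%.2e, %.2e) M'
      % (best[2], np.percentile(boot_Kd, 2.5), np.percentile(boot_Kd, 97.5)))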
9,257
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploring precision and recall The goal of this second notebook is to understand precision-recall in the context of classifiers. Use Amazon review data in its entirety. Train a logistic regression model. Explore various evaluation metrics Step1: Load amazon review dataset Step2: Extract word counts and sentiments As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following Step3: Now, let's remember what the dataset looks like by taking a quick peek Step4: Split data into training and test sets We split the data into a 80-20 split where 80% is in the training set and 20% is in the test set. Step5: Build the word count vector for each review We will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-word features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. Refer to appropriate manuals to produce sparse word count vectors. General steps for extracting word count vectors are as follows Step6: Train a logistic regression classifier We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results. Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall. Step7: Model Evaluation We will explore the advanced model evaluation concepts that were discussed in the lectures. Accuracy One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}} $$ To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows Step8: Baseline Step9: Quiz Question Step10: Quiz Question Step11: Precision and Recall You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in Step12: Quiz Question Step13: Quiz Question Step14: Quiz Question Step15: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values. Step16: Quiz Question Step17: Quiz Question (variant 1) Step18: For each of the values of threshold, we compute the precision and recall scores. Step19: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold. Step20: Quiz Question Step21: Quiz Question Step22: This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier. 
Evaluating specific search terms So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon. Precision-Recall on all baby related items From the test set, select all the reviews for all products with the word 'baby' in them. Step23: Now, let's predict the probability of classifying these reviews as positive Step24: Let's plot the precision-recall curve for the baby_reviews dataset. First, let's consider the following threshold_values ranging from 0.5 to 1 Step25: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below. Step26: Quiz Question Step27: larger Quiz Question
Python Code: import numpy as np import pandas as pd import json import matplotlib.pyplot as plt %matplotlib inline Explanation: Exploring precision and recall The goal of this second notebook is to understand precision-recall in the context of classifiers. Use Amazon review data in its entirety. Train a logistic regression model. Explore various evaluation metrics: accuracy, confusion matrix, precision, recall. Explore how various metrics can be combined to produce a cost of making an error. Explore precision and recall curves. Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by firing up GraphLab Create. Make sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using pip install graphlab-create --upgrade See this page for detailed instructions on upgrading. End of explanation products = pd.read_csv('amazon_baby.csv') products.head(2) print products.dtypes print len(products) #products2 = products.fillna({'review':''}) # fill in N/A's in the review column #print products2 print products[products['review'].isnull()].head(3) print '\n' print products.iloc[38] Explanation: Load amazon review dataset End of explanation products = products.fillna({'review':''}) # fill in N/A's in the review column def remove_punctuation(text): import string #if type(text) == float: #print text return text.translate(None, string.punctuation) products['review_clean'] = products['review'].apply(remove_punctuation) products = products[products['rating'] != 3] products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1) products.head(5) Explanation: Extract word counts and sentiments As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following: Remove punctuation. Remove reviews with "neutral" sentiment (rating 3). Set reviews with rating 4 or more to be positive and those with 2 or less to be negative. End of explanation products.shape Explanation: Now, let's remember what the dataset looks like by taking a quick peek: End of explanation with open('module-9-assignment-train-idx.json') as train_data_file: train_data_idx = json.load(train_data_file) with open('module-9-assignment-test-idx.json') as test_data_file: test_data_idx = json.load(test_data_file) print train_data_idx[:3] print test_data_idx[:3] train_data = products.iloc[train_data_idx] train_data.head(2) print len(train_data[train_data['sentiment'] == 1]) print len(train_data[train_data['sentiment'] == -1]) print len(train_data) test_data = products.iloc[test_data_idx] test_data.head(2) print len(test_data[test_data['sentiment'] == 1]) print len(test_data[test_data['sentiment'] == -1]) print len(test_data) print len(train_data) + len(test_data) Explanation: Split data into training and test sets We split the data into a 80-20 split where 80% is in the training set and 20% is in the test set. 
End of explanation from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(token_pattern=r'\b\w+\b') # Use this token pattern to keep single-letter words # First, learn vocabulary from the training data and assign columns to words # Then convert the training data into a sparse matrix train_matrix = vectorizer.fit_transform(train_data['review_clean']) # Second, convert the test data into a sparse matrix, using the same word-column mapping test_matrix = vectorizer.transform(test_data['review_clean']) #print vectorizer.vocabulary_ Explanation: Build the word count vector for each review We will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-word features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. Refer to appropriate manuals to produce sparse word count vectors. General steps for extracting word count vectors are as follows: Learn a vocabulary (set of all words) from the training data. Only the words that show up in the training data will be considered for feature extraction. Compute the occurrences of the words in each review and collect them into a row vector. Build a sparse matrix where each row is the word count vector for the corresponding review. Call this matrix train_matrix. Using the same mapping between words and columns, convert the test data into a sparse matrix test_matrix. The following cell uses CountVectorizer in scikit-learn. Notice the token_pattern argument in the constructor. End of explanation from sklearn.linear_model import LogisticRegression model = LogisticRegression() model.fit(train_matrix, train_data['sentiment']) model.classes_ Explanation: Train a logistic regression classifier We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results. Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall. End of explanation from sklearn.metrics import accuracy_score accuracy = accuracy_score(y_true=test_data['sentiment'].as_matrix(), y_pred=model.predict(test_matrix)) print "Test Accuracy: %s" % accuracy Explanation: Model Evaluation We will explore the advanced model evaluation concepts that were discussed in the lectures. Accuracy One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}} $$ To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. 
We compute the accuracy of our logistic regression model on the test_data as follows: End of explanation baseline = len(test_data[test_data['sentiment'] == 1]) / float(len(test_data)) print "Baseline accuracy (majority class classifier): %s" % baseline Explanation: Baseline: Majority class prediction Recall from an earlier assignment that we used the majority class classifier as a baseline (i.e reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points. Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows: End of explanation from sklearn.metrics import confusion_matrix cmat = confusion_matrix(y_true=test_data['sentiment'].as_matrix(), y_pred=model.predict(test_matrix), labels=model.classes_) # use the same order of class as the LR model. print ' target_label | predicted_label | count ' print '--------------+-----------------+-------' # Print out the confusion matrix. # NOTE: Your tool may arrange entries in a different order. Consult appropriate manuals. for i, target_label in enumerate(model.classes_): for j, predicted_label in enumerate(model.classes_): print '{0:^13} | {1:^15} | {2:5d}'.format(target_label, predicted_label, cmat[i,j]) Explanation: Quiz Question: Using accuracy as the evaluation metric, was our logistic regression model better than the baseline (majority class classifier)? YES Test Accuracy: 0.932235421166 Baseline accuracy (majority class classifier): 0.842782577394 Confusion Matrix The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the confusion matrix. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows: +---------------------------------------------+ | Predicted label | +----------------------+----------------------+ | (+1) | (-1) | +-------+-----+----------------------+----------------------+ | True |(+1) | # of true positives | # of false negatives | | label +-----+----------------------+----------------------+ | |(-1) | # of false positives | # of true negatives | +-------+-----+----------------------+----------------------+ To print out the confusion matrix for a classifier, use metric='confusion_matrix': End of explanation FP = 1451 FN = 808 print 100*FP +1*FN Explanation: Quiz Question: How many predicted values in the test set are false positives? 1451 Computing the cost of mistakes Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.) Suppose you know the costs involved in each kind of mistake: 1. \$100 for each false positive. 2. \$1 for each false negative. 3. Correctly classified reviews incur no cost. 
Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set? End of explanation from sklearn.metrics import precision_score precision = precision_score(y_true=test_data['sentiment'].as_matrix(), y_pred=model.predict(test_matrix)) print "Precision on test data: %s" % precision Explanation: Precision and Recall You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in: $$ [\text{precision}] = \frac{[\text{# positive data points with positive predicitions}]}{\text{[# all data points with positive predictions]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false positives}]} $$ So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher. First, let us compute the precision of the logistic regression classifier on the test_data. End of explanation print 1-precision Explanation: Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25) End of explanation from sklearn.metrics import recall_score recall = recall_score(y_true=test_data['sentiment'].as_matrix(), y_pred=model.predict(test_matrix)) print "Recall on test data: %s" % recall print 1 Explanation: Quiz Question: Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz) A complementary metric is recall, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews: $$ [\text{recall}] = \frac{[\text{# positive data points with positive predicitions}]}{\text{[# all positive data points]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false negatives}]} $$ Let us compute the recall on the test_data. End of explanation def apply_threshold(probabilities, threshold): ### YOUR CODE GOES HERE # +1 if >= threshold and -1 otherwise. result = np.ones(len(probabilities)) result[probabilities < threshold] = -1 return result Explanation: Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier? Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data? Precision-recall tradeoff In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve. Varying the threshold False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold. Write a function called apply_threshold that accepts two things * probabilities (an SArray of probability values) * threshold (a float between 0 and 1). The function should return an SArray, where each element is set to +1 or -1 depending whether the corresponding probability exceeds threshold. 
End of explanation probabilities = model.predict_proba(test_matrix)[:,1] predictions_with_default_threshold = apply_threshold(probabilities, 0.5) predictions_with_high_threshold = apply_threshold(probabilities, 0.9) print predictions_with_default_threshold print predictions_with_high_threshold print '\n' print sum(probabilities >= 0.5) print sum(probabilities >= 0.9) print '\n' print predictions_with_default_threshold * (predictions_with_default_threshold==-1) print len(predictions_with_default_threshold * (predictions_with_default_threshold==-1)) print '\n' print np.sum(predictions_with_default_threshold >0) print np.sum(predictions_with_high_threshold>0) print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum() print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum() Explanation: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values. End of explanation # Threshold = 0.5 precision_with_default_threshold = precision_score(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions_with_default_threshold) recall_with_default_threshold = recall_score(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions_with_default_threshold) # Threshold = 0.9 precision_with_high_threshold = precision_score(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions_with_high_threshold) recall_with_high_threshold = recall_score(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions_with_high_threshold) print "Precision (threshold = 0.5): %s" % precision_with_default_threshold print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold print "Precision (threshold = 0.9): %s" % precision_with_high_threshold print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold Explanation: Quiz Question: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9? Exploring the associated precision and recall as the threshold varies By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows: End of explanation threshold_values = np.linspace(0.5, 1, num=100) print threshold_values Explanation: Quiz Question (variant 1): Does the precision increase with a higher threshold? Quiz Question (variant 2): Does the recall increase with a higher threshold? Precision-recall curve Now, we will explore various different values of tresholds, compute the precision and recall scores, and then plot the precision-recall curve. End of explanation precision_all = [] recall_all = [] probabilities = model.predict_proba(test_matrix)[:,1] for threshold in threshold_values: predictions = apply_threshold(probabilities, threshold) precision = precision_score(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions) recall = recall_score(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions) precision_all.append(precision) recall_all.append(recall) Explanation: For each of the values of threshold, we compute the precision and recall scores. 
End of explanation import matplotlib.pyplot as plt %matplotlib inline def plot_pr_curve(precision, recall, title): plt.rcParams['figure.figsize'] = 7, 5 plt.locator_params(axis = 'x', nbins = 5) plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F') plt.title(title) plt.xlabel('Precision') plt.ylabel('Recall') plt.rcParams.update({'font.size': 16}) plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)') Explanation: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold. End of explanation print np.array(threshold_values)[np.array(precision_all) >= 0.965] Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places. End of explanation predictions_with_098_threshold = apply_threshold(probabilities, 0.98) sth = (np.array(test_data['sentiment'].as_matrix()) > 0) * (predictions_with_098_threshold < 0) print sum(sth) cmat_098 = confusion_matrix(y_true=test_data['sentiment'].as_matrix(), y_pred=predictions_with_098_threshold, labels=model.classes_) # use the same order of class as the LR model. print ' target_label | predicted_label | count ' print '--------------+-----------------+-------' # Print out the confusion matrix. # NOTE: Your tool may arrange entries in a different order. Consult appropriate manuals. for i, target_label in enumerate(model.classes_): for j, predicted_label in enumerate(model.classes_): print '{0:^13} | {1:^15} | {2:5d}'.format(target_label, predicted_label, cmat_098[i,j]) Explanation: Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.) End of explanation baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in str(x).lower())] Explanation: This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier. Evaluating specific search terms So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon. Precision-Recall on all baby related items From the test set, select all the reviews for all products with the word 'baby' in them. End of explanation baby_matrix = vectorizer.transform(baby_reviews['review_clean']) probabilities = model.predict_proba(baby_matrix)[:,1] Explanation: Now, let's predict the probability of classifying these reviews as positive: End of explanation threshold_values = np.linspace(0.5, 1, num=100) Explanation: Let's plot the precision-recall curve for the baby_reviews dataset. First, let's consider the following threshold_values ranging from 0.5 to 1: End of explanation precision_all = [] recall_all = [] for threshold in threshold_values: # Make predictions. Use the `apply_threshold` function ## YOUR CODE HERE predictions = apply_threshold(probabilities, threshold) # Calculate the precision. 
# YOUR CODE HERE precision = precision_score(y_true=baby_reviews['sentiment'].as_matrix(), y_pred=predictions) # YOUR CODE HERE recall = recall_score(y_true=baby_reviews['sentiment'].as_matrix(), y_pred=predictions) # Append the precision and recall scores. precision_all.append(precision) recall_all.append(recall) plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)") Explanation: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below. End of explanation print np.array(threshold_values)[np.array(precision_all) >= 0.965] Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places. End of explanation plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)") Explanation: larger Quiz Question: Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%? Finally, let's plot the precision recall curve. End of explanation
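One small generalization worth keeping around: the indexing idiom used twice above, np.array(threshold_values)[np.array(precision_all) >= 0.965], can be wrapped in a tiny helper that returns the smallest threshold meeting any precision target. The helper name is mine, not part of the assignment.

import numpy as np

def smallest_threshold_for_precision(threshold_values, precision_all, target=0.965):
    thresholds = np.array(threshold_values)
    precisions = np.array(precision_all)
    ok = thresholds[precisions >= target]
    return ok.min() if len(ok) > 0 else None   # None if the target is never reached

print(smallest_threshold_for_precision(threshold_values, precision_all, 0.965))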
9,258
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Learning with TensorFlow Credits Step1: First reload the data we generated in notmist.ipynb. Step2: Reformat into a shape that's more adapted to the models we're going to train
Python Code: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import cPickle as pickle import numpy as np import tensorflow as tf Explanation: Deep Learning with TensorFlow Credits: Forked from TensorFlow by Google Setup Refer to the setup instructions. Exercise 3 Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model. The goal of this exercise is to explore regularization techniques. End of explanation pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape Explanation: First reload the data we generated in notmist.ipynb. End of explanation image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) Explanation: Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings. End of explanation
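Since the stated goal of the exercise is to explore regularization, here is a minimal sketch of the first step it asks for: adding an L2 penalty to the logistic-regression loss. It follows the graph-style TensorFlow API used throughout this notebook series (newer TensorFlow versions require named labels=/logits= arguments in the cross-entropy call), and the beta value is just an assumed starting point to be tuned on the validation set, not a prescribed answer.

batch_size = 128
beta = 0.01   # assumed L2 strength; tune against valid_dataset

graph = tf.Graph()
with graph.as_default():
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
    loss = loss + beta * tf.nn.l2_loss(weights)   # the L2 regularization term

    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)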
9,259
Given the following text description, write Python code to implement the functionality described below step by step Description: Example Step1: Step 2 Step2: Step 3 Step3: Step 4
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import sys, os, copy, logging, socket, time import numpy as np import pylab as plt #from ndparse.algorithms import nddl as nddl #import ndparse as ndp sys.path.append('..'); import ndparse as ndp try: logger except: # do this precisely once logger = logging.getLogger("deploy_model") logger.setLevel(logging.DEBUG) ch = logging.StreamHandler() ch.setFormatter(logging.Formatter('[%(asctime)s:%(name)s:%(levelname)s] %(message)s')) logger.addHandler(ch) Explanation: Example: Deploying a Classifier This notebook shows how one might use a previously trained deep learning model to classify a subset of the ISBI 2012 data set. This assumes you have access to the ISBI 2012 data, which is available as a download from the ISBI challenge website or via an ndparse database call (see example below). It also assumes you have a local copy of trained weights for a Keras deep learning model; one example weights file is checked into this repository which will provide reasonable (but not state-of-the-art) results. You will also need to have Keras (along with a suitable backend - we use Theano) installed. Step 1: setup python environment End of explanation print("Running on system: %s" % socket.gethostname()) # Load previously trained CNN weights weightsFile = './isbi2012_weights_e025.h5' if True: # Using a local copy of data volume #inDir = '/Users/graywr1/code/bio-segmentation/data/ISBI2012/' inDir = '/home/pekalmj1/Data/EM_2012' Xtrain = ndp.nddl.load_cube(os.path.join(inDir, 'train-volume.tif')) Ytrain = ndp.nddl.load_cube(os.path.join(inDir, 'train-labels.tif')) Xtest = ndp.nddl.load_cube(os.path.join(inDir, 'test-volume.tif')) else: # example of using ndio database call import ndio.remote.neurodata as ND tic = time.time() nd = ND() token = 'kasthuri11cc' channel = 'image' xstart, xstop = 5472, 6496 ystart, ystop = 8712, 9736 zstart, zstop = 1000, 1100 res = 1 Xtest = nd.get_cutout(token, channel, xstart, xstop, ystart, ystop, zstart, zstop, resolution=res) Xtest = np.transpose(Xtest, [2, 0, 1]) Xtest = Xtest[:, np.newaxis, :, :] # add a channel dimension print 'time elapsed is: {} seconds'.format(time.time()-tic) # show some details. Note that data tensors are assumed to have dimensions: # (#slices, #channels, #rows, #columns) # print('Test data shape is: %s' % str(Xtest.shape)) plt.imshow(Xtest[0,0,...], interpolation='none', cmap='bone') plt.title('test volume, slice 0') plt.gca().axes.get_xaxis().set_ticks([]) plt.gca().axes.get_yaxis().set_ticks([]) plt.show() Explanation: Step 2: Load data and model weights End of explanation # In the interest of time, only deploy on one slice (z-dimension) of the test volume # *and* only evaluate a subset of the pixels in that slice. # # Note: depending upon your system (e.g. CPU vs GPU) this may take a few minutes... # tic = time.time() P0 = ndp.nddl.fit(Xtest, weightsFile, slices=[0,], evalPct=.1, log=logger) print("Time to deploy: %0.2f sec" % (time.time() - tic)) # The shape of the probability estimate tensor is: # (#slices, #classes, #rows, #cols) print('Class probabilities shape: %s' % str(P0.shape)) Explanation: Step 3: Deploy the model End of explanation # Use a simple interpolation scheme to fill in "missing" values # (i.e. those pixels we did not evaluate using the CNN). 
# Pint = ndp.nddl.interpolate_nn(P0) # visualize plt.imshow(P0[0,0,...]); plt.colorbar() plt.gca().axes.get_xaxis().set_ticks([]) plt.gca().axes.get_yaxis().set_ticks([]) plt.title('Class Estimates (slice 0, subsampled)') plt.show() plt.imshow(Pint[0,0,...]); plt.colorbar() plt.title('Class Estimates: (slice 0, interpolated)') plt.gca().axes.get_xaxis().set_ticks([]) plt.gca().axes.get_yaxis().set_ticks([]) plt.show() Explanation: Step 4: Postprocessing Note: in order to do actual science, one would use more sophisticated postprocessing (and also put more effort into the CNN design). End of explanation
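For concreteness, one simple example of the "more sophisticated postprocessing" mentioned above, with arbitrary choices of threshold and filter size on my part: turn the interpolated probability map (Pint from the interpolation step) into a binary mask and clean up isolated pixels with scipy.ndimage.

import numpy as np
from scipy import ndimage

prob_map = Pint[0, 0, ...]                  # interpolated class probabilities for slice 0
mask = prob_map > 0.5                       # hard segmentation; 0.5 is an arbitrary cutoff
mask = ndimage.median_filter(mask, size=3)  # smooth out single-pixel noise
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))  # drop tiny speckles
print('fraction of pixels labeled positive: %0.3f' % mask.mean())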
9,260
Given the following text description, write Python code to implement the functionality described below step by step Description: New York University Applied Data Science 2016 Final Project Measuring household income under Redatam in CensusData 3. Model Evaluation and Selection Project Description Step1: HELPER FUNCTIONS Step2: GET DATA Step3: DATA EXPLORATION Background Step4: Notes Step5: Conclusion from Feature Selection Step6: Model 1b Step7: Model 1c Step8: Model 1d Step9: Model 1E (CHOSEN) Step10: Model 2a Step11: Model 2b Step12: MODEL VALIDATION We are going to test our models against survey data from the Buenos Aires City Government where income is measured by comune (not census block or department) and that it's independent from the survey we used to train our model. GET SURVEY DATA Step13: DATA CLEANING Step14: GET PREDICTED DATA FROM REDATAM The following script transforms the ascii output from REDATAM into a csv file that we can work on later. Our objective is to compare the predicted income from our models related to each comune with the real income for each comune. Step15: Model 1A Step16: Model 1B Step17: Model 1C Step18: Model 2A Step19: Model 2B Step20: Model 1D Step21: Model 1E Step22: DATA CLEANING Step23: MODEL EVALUATION RESULTS Step24: Best Performing Model Step25: Appendix
Python Code: import pandas as pd import numpy as np import os import sys import simpledbf %pylab inline import matplotlib.pyplot as plt import statsmodels.api as sm from sklearn.model_selection import train_test_split from sklearn import linear_model Explanation: New York University Applied Data Science 2016 Final Project Measuring household income under Redatam in CensusData 3. Model Evaluation and Selection Project Description: Lorem ipsum Members: - Felipe Gonzales - Ilan Reinstein - Fernando Melchor - Nicolas Metallo LIBRARIES End of explanation def runModel(dataset, income, varForModel): ''' This function takes a data set, runs a model according to specifications, and returns the model, printing the summary ''' y = dataset[income].values X = dataset.loc[:,varForModel].values X = sm.add_constant(X) w = dataset.PONDERA lm = sm.WLS(y, X, weights=1. / w, missing = 'drop', hasconst=True).fit() print lm.summary() for i in range(1,len(varForModel)+1): print 'x%d: %s' % (i,varForModel[i-1]) #testing within sample R_IS=[] R_OS=[] n=500 for i in range(n): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 200) X_train = sm.add_constant(X_train) X_test = sm.add_constant(X_test) lm = linear_model.LinearRegression(fit_intercept=True) lm.fit(X_train, y_train, sample_weight = 1. / w[:len(X_train)]) y_hat_IS = lm.predict(X_train) err_IS = y_hat_IS - y_train R2_IS = 1 - (np.var(err_IS) / np.var(y_train)) y_hat_OS = lm.predict(X_test) err_OS = y_hat_OS - y_test R2_OS = 1 - (np.var(err_OS) / np.var(y_test)) R_IS.append(R2_IS) R_OS.append(R2_OS) print("IS R-squared for {} times is {}".format(n,np.mean(R_IS))) print("OS R-squared for {} times is {}".format(n,np.mean(R_OS))) Explanation: HELPER FUNCTIONS End of explanation data = pd.read_csv('data/dataFinalParaModelo.csv') data = data[data.AGLO1 == 32.0] data.head() Explanation: GET DATA End of explanation varForModel = [ 'HomeType', 'RoomsNumber', 'FloorMaterial', 'RoofMaterial', 'RoofCoat', 'Water', 'Toilet', 'ToiletLocation', 'ToiletType', 'Sewer', 'EmergencyLoc', 'UsableTotalRooms', 'SleepingRooms', 'Kitchen', 'Sink', 'Ownership', 'CookingCombustible', 'BathroomUse', 'Working', 'HouseMembers', 'Memberless10', 'Membermore10', 'TotalFamilyIncome', 'CookingRec', 'WaterRec', 'OwnershipRec', 'Hacinamiento', 'schoolAndJob', 'noJob', 'job', 'headAge', 'spouseAge', 'headFemale', 'spouseFemale', 'headEduc', 'spouseEduc', 'headPrimary', 'spousePrimary', 'headSecondary', 'spouseSecondary', 'headUniversity', 'spouseUniversity', 'headJob', 'spouseJob', 'headMaritalStatus', 'spouseMaritalStatus', 'sumPredicted'] data['hasSpouse'] = np.where(np.isnan(data.spouseJob.values),0,1) data['spouseJob'] = np.where(np.isnan(data.spouseJob.values),0,data.spouseJob.values) data['TotalFamilyIncome'].replace(to_replace=[0], value=[1] , inplace=True, axis=None) data = data[data.TotalFamilyIncomeDecReg != 0] data['income_log'] = np.log(data.TotalFamilyIncome) data['FloorMaterial'] = np.where(np.isnan(data.FloorMaterial.values),5,data.FloorMaterial.values) data['sumPredicted'] = np.where(np.isnan(data.sumPredicted.values),0,data.sumPredicted.values) data['Sewer'] = np.where(np.isnan(data.Sewer.values),5,data.Sewer.values) data['ToiletType'] = np.where(np.isnan(data.ToiletType.values),4,data.ToiletType.values) data['Water'] = np.where(np.isnan(data.Water.values),4,data.Water.values) data['RoofCoat'] = np.where(np.isnan(data.RoofCoat.values),2,data.RoofCoat.values) data['income_logPer'] = np.log(data.PerCapInc) data['haciBool'] = (data.Hacinamiento 
> 3).astype(int) data['RoofMaterial'] = np.where(np.isnan(data.RoofMaterial.values),0,data.RoofMaterial.values) data['ToiletLocation'] = np.where(np.isnan(data.ToiletLocation.values),2,data.ToiletLocation.values) import seaborn as sns sns.set(context="paper", font="monospace", font_scale=1.25) corrmat = data.loc[:,list(data.loc[:,varForModel].corr()['TotalFamilyIncome'].sort_values(ascending=False).index)].corr() f, ax = plt.subplots(figsize=(12, 10)) sns.heatmap(corrmat, vmax=.8, square=True) f.tight_layout() varHomogamy = [ 'headAge', 'spouseAge', 'headFemale', 'spouseFemale', 'headEduc', 'spouseEduc', 'headPrimary', 'spousePrimary', 'headSecondary', 'spouseSecondary', 'headUniversity', 'spouseUniversity', 'headJob', 'spouseJob', 'headMaritalStatus', 'spouseMaritalStatus'] sns.set(context="paper", font="monospace", font_scale=2) corrmat = data.loc[:,varHomogamy].corr() f, ax = plt.subplots(figsize=(10, 8)) sns.heatmap(corrmat, vmax=.8, square=True) f.tight_layout() Explanation: DATA EXPLORATION Background: We have found that 'y ~ Total Household Income' works better than other models with a different 'y' (ln of Total Individual Income, Income by Activity, income deciles, etc) Correlation Matrix End of explanation varForFeatureSelection = [ 'HomeType', 'RoomsNumber', 'FloorMaterial', 'RoofMaterial', 'RoofCoat', 'Water', 'Toilet', 'ToiletLocation', 'ToiletType', 'Sewer', 'UsableTotalRooms', 'SleepingRooms', 'Kitchen', 'Sink', 'Ownership', 'CookingCombustible', 'BathroomUse', 'Working', 'HouseMembers', 'Memberless10', 'Membermore10', 'CookingRec', 'WaterRec', 'OwnershipRec', 'Hacinamiento', 'schoolAndJob', 'noJob', 'job', 'headAge', 'headFemale', 'headEduc', 'headPrimary', 'headSecondary', 'headUniversity', 'headJob', 'sumPredicted'] # !pip install minepy from sklearn.linear_model import (LinearRegression, Ridge, Lasso, RandomizedLasso) from sklearn.feature_selection import RFE, f_regression from sklearn.preprocessing import MinMaxScaler from sklearn.ensemble import RandomForestRegressor import numpy as np from minepy import MINE Y = data.TotalFamilyIncome X = np.asarray(data.loc[:,varForFeatureSelection]) names = data.loc[:,varForFeatureSelection].columns ranks = {} def rank_to_dict(ranks, names, order=1): minmax = MinMaxScaler() ranks = minmax.fit_transform(order*np.array([ranks]).T).T[0] ranks = map(lambda x: round(x, 2), ranks) return dict(zip(names, ranks )) lr = LinearRegression(normalize=True) lr.fit(X, Y) ranks["Linear reg"] = rank_to_dict(np.abs(lr.coef_), names) ridge = Ridge(alpha=7) ridge.fit(X, Y) ranks["Ridge"] = rank_to_dict(np.abs(ridge.coef_), names) lasso = Lasso(alpha=.05) lasso.fit(X, Y) ranks["Lasso"] = rank_to_dict(np.abs(lasso.coef_), names) rlasso = RandomizedLasso(alpha=0.04) rlasso.fit(X, Y) ranks["Stability"] = rank_to_dict(np.abs(rlasso.scores_), names) #stop the search when 5 features are left (they will get equal scores) rfe = RFE(lr, n_features_to_select=5) rfe.fit(X,Y) ranks["RFE"] = rank_to_dict(map(float, rfe.ranking_), names, order=-1) rf = RandomForestRegressor() rf.fit(X,Y) ranks["RF"] = rank_to_dict(rf.feature_importances_, names) f, pval = f_regression(X, Y, center=True) ranks["Corr."] = rank_to_dict(f, names) mine = MINE() mic_scores = [] for i in range(X.shape[1]): mine.compute_score(X[:,i], Y) m = mine.mic() mic_scores.append(m) ranks["MIC"] = rank_to_dict(mic_scores, names) r = {} for name in names: r[name] = round(np.mean([ranks[method][name] for method in ranks.keys()]), 2) methods = sorted(ranks.keys()) ranks["Mean"] = r methods.append("Mean") 
feat_ranking = pd.DataFrame(ranks) cols = feat_ranking.columns.tolist() feat_ranking = feat_ranking.ix[:, cols] feat_ranking.sort_values(['Corr.'], ascending=False).head(15) varComparison = list(feat_ranking.sort_values(['Corr.'], ascending=False).head(10).index) print 'Our first iteration gave us the following table of the top 10 most relevant features: \n' print varComparison print '\n' print 'And this is the correlation Matrix for those features:' sns.set(context="paper", font="monospace", font_scale=2) corrmat = data.loc[:,varComparison].corr() f, ax = plt.subplots(figsize=(10, 8)) sns.heatmap(corrmat, vmax=.8, square=True) f.tight_layout() Explanation: Notes: We found multi-collinearity between variables referencing the spouse and the head of the household. This is we believe a case of homogamy (marriage between individuals who are similar to each other). This is why we chose to ignore them. End of explanation varForModel = [ 'headEduc', ] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Conclusion from Feature Selection: We found that the most relevant features for predicting income are: - Eucation - Job - Number of people living in the household. Accordingly, we chose only the variables that best represented this idea based on their predictive power, model interpretability and possibility of query under REDATAM. We also removed features highty correlated between each other to avoid multi-collinearity. REGRESSION MODELS MODEL TESTING Our model will be based on education, work and number of people living in the same household. For this, we will test two alternative models each considering separate variables that account for those features. Model 1a End of explanation varForModel = [ 'headEduc', 'job', ] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Model 1b End of explanation varForModel = [ 'headEduc', 'job', 'SleepingRooms',] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Model 1c End of explanation varForModel = [ 'headEduc', 'job', 'haciBool', #'Hacinamiento' ] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Model 1d End of explanation varForModel = [ 'headEduc', 'job', 'Hacinamiento' ] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Model 1E (CHOSEN) End of explanation varForModel = [ 'schoolAndJob', ] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Model 2a End of explanation varForModel = [ 'SleepingRooms', 'schoolAndJob', ] runModel(data, 'TotalFamilyIncome', varForModel) Explanation: Model 2b End of explanation dbf = simpledbf.Dbf5('data/BaseEAH2010/EAH10_BU_IND_VERSION2.dbf') # PDF press release is available for download data10 = dbf.to_dataframe() data10 = data10.loc[data10.ITFB != 9999999,['ID','COMUNA','FEXP','ITFB']] data10.head() Explanation: MODEL VALIDATION We are going to test our models against survey data from the Buenos Aires City Government where income is measured by comune (not census block or department) and that it's independent from the survey we used to train our model. 
GET SURVEY DATA End of explanation data10.drop_duplicates(inplace = True) data10.ITFB.replace(to_replace=[0], value=[1] , inplace=True, axis=None) data10.FEXP = data10.FEXP.astype(int) data10exp = data10.loc[np.repeat(data10.index.values,data10.FEXP)] data10exp.ITFB.groupby(by=data10exp.COMUNA).mean() Explanation: DATA CLEANING End of explanation def readRedatamCSV(asciiFile): f = open(asciiFile, 'r') areas = [] measures = [] for line in f: columns = line.strip().split() # print columns if len(columns) > 0: if 'RESUMEN' in columns[0] : break elif columns[0] == 'AREA': area = str.split(columns[2],',')[0] areas.append(area) elif columns[0] == 'Total': measure = str.split(columns[2],',')[2] measures.append(measure) try: data = pd.DataFrame({'area':areas,'measure':measures}) return data except: print asciiFile def R2(dataset,real,predicted): fig = plt.figure(figsize=(24,6)) ax1 = fig.add_subplot(1,3,1) ax2 = fig.add_subplot(1,3,2) ax3 = fig.add_subplot(1,3,3) error = dataset[predicted]-dataset[real] ax1.scatter(dataset[predicted],dataset[real]) ax1.plot(dataset[real], dataset[real], color = 'red') ax1.set_title('Predicted vs Real') ax2.scatter((dataset[predicted] - dataset[predicted].mean())/dataset[predicted].std(), (dataset[real] - dataset[real].mean())/dataset[real].std()) ax2.plot((dataset[real] - dataset[real].mean())/dataset[real].std(), (dataset[real] - dataset[real].mean())/dataset[real].std(), color = 'red') ax2.set_title('Standarized Predicted vs Real') ax3.scatter(dataset[predicted],(error - error.mean()) / error.std()) ax3.set_title('Standard Error') print "R^2 is: ",((dataset[real] - dataset[predicted])**2).sum() / ((dataset[real] - dataset[real].mean())**2).sum() print 'Mean Error', error.mean() Explanation: GET PREDICTED DATA FROM REDATAM The following script transforms the ascii output from REDATAM into a csv file that we can work on later. Our objective is to compare the predicted income from our models related to each comune with the real income for each comune. 
End of explanation archivo = 'data/indecOnline/headEduc/comunas.csv' # Model 1A ingresoXComuna = readRedatamCSV(archivo) ingresoXComuna.columns = ['area','Predicted_1A'] ingresoXComuna['Real_Income'] = list(data10exp.ITFB.groupby(by=data10exp.COMUNA).mean()) ingresoXComuna = ingresoXComuna.loc[:,['area', 'Real_Income', 'Predicted_1A']] Explanation: Model 1A End of explanation archivo = 'data/indecOnline/headEducYjobs/comuna.csv' # Model 1B ingresoModelo2 = readRedatamCSV(archivo) ingresoXComuna = ingresoXComuna.merge(right=ingresoModelo2,on='area') Explanation: Model 1B End of explanation archivo = 'data/indecOnline/headEducuJobsYrooms/comunas.csv' # Model 1A ingresoModelo3 = readRedatamCSV(archivo) ingresoXComuna = ingresoXComuna.merge(right=ingresoModelo3,on='area') Explanation: Model 1C End of explanation archivo = 'data/indecOnline/jobSchool/comunas.csv' # Model 2A ingresoModelo4 = readRedatamCSV(archivo) ingresoXComuna = ingresoXComuna.merge(right=ingresoModelo4,on='area') Explanation: Model 2A End of explanation archivo = 'data/indecOnline/jobSchoolYrooms/comunas.csv' # Model 2B ingresoModelo5 = readRedatamCSV(archivo) ingresoXComuna = ingresoXComuna.merge(right=ingresoModelo5,on='area') Explanation: Model 2B End of explanation archivo = 'data/indecOnline/MODELO1D/comunas.csv' # Model 2B ingresoModelo6 = readRedatamCSV(archivo) ingresoXComuna = ingresoXComuna.merge(right=ingresoModelo6,on='area') Explanation: Model 1D End of explanation archivo = 'data/indecOnline/MODELO1E/comunas.csv' # Model 2B ingresoModelo7 = readRedatamCSV(archivo) ingresoXComuna = ingresoXComuna.merge(right=ingresoModelo7,on='area') Explanation: Model 1E End of explanation ingresoXComuna.columns = ['Comune','Real_Income','Predicted_1A','Predicted_1B','Predicted_1C', 'Predicted_2A','Predicted_2B','Predicted_1D','Predicted_1E'] for i in range(1,9): ingresoXComuna.iloc[:,[i]] = ingresoXComuna.iloc[:,[i]].astype(float) Explanation: DATA CLEANING End of explanation ingresoXComuna Explanation: MODEL EVALUATION RESULTS End of explanation R2 (ingresoXComuna,'Real_Income','Predicted_1E') Explanation: Best Performing Model End of explanation R2 (ingresoXComuna,'Real_Income','Predicted_1A') R2 (ingresoXComuna,'Real_Income','Predicted_1B') R2 (ingresoXComuna,'Real_Income','Predicted_1C') R2 (ingresoXComuna,'Real_Income','Predicted_1D') R2 (ingresoXComuna,'Real_Income','Predicted_1E') R2 (ingresoXComuna,'Real_Income','Predicted_2A') R2 (ingresoXComuna,'Real_Income','Predicted_2B') Explanation: Appendix: End of explanation
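One note on the comparison set up above: the R2 helper prints the ratio of the residual to the total sum of squares under the "R^2" label, while the conventional coefficient of determination is one minus that ratio. A small sketch with the usual definition, applied to the comuna-level columns built above:

import numpy as np

def r_squared(y_real, y_pred):
    y_real = np.asarray(y_real, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_real - y_pred) ** 2).sum()
    ss_tot = ((y_real - y_real.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

for col in ['Predicted_1A', 'Predicted_1E', 'Predicted_2B']:
    print('%s: R^2 = %.3f' % (col, r_squared(ingresoXComuna['Real_Income'], ingresoXComuna[col])))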
9,261
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents Nonlinear Filtering Step1: Introduction The Kalman filter that we have developed uses linear equations, and so the filter can only handle linear problems. But the world is nonlinear, and so the classic filter that we have been studying to this point can have very limited utility. There can be nonlinearity in the process model. Suppose we want to track an object falling through the atmosphere. The acceleration of the object depends on the drag it encounters. Drag depends on air density, and the air density decreases with altitude. In one dimension this can be modelled with the nonlinear differential equation $$\ddot x = \frac{0.0034ge^{-x/22000}\dot x^2}{2\beta} - g$$ A second source of nonlinearity comes from the measurements. For example, radars measure the slant range to an object, and we are typically interested in the aircraft's position over the ground. We invoke Pythagoras and get the nonlinear equation Step2: We can see that out intuition failed us because the nonlinearity of the problem forced all of the errors to be biased in one direction. This bias, over many iterations, can cause the Kalman filter to diverge. Even if it doesn't diverge the solution will not be optimal. Linear approximations applied to nonlinear problems yields inaccurate results. The Effect of Nonlinear Functions on Gaussians Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but let's use sampling. I will generate 500,000 points with a normal distribution, pass them through $f(x)$, and plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically. Step3: This is an unsurprising result. The result of passing the Gaussian through $f(x)=2x+1$ is another Gaussian centered around 1. Let's look at the input, nonlinear function, and output at once. Step4: I explain how to plot Gaussians, and much more, in the Notebook Computing_and_Plotting_PDFs in the Supporting_Notebooks folder. You can also read it online here[1] The plot labeled 'Input' is the histogram of the original data. This is passed through the function $f(x)=2x+1$ which is displayed in the chart on the bottom left. The red lines shows how one value, $x=0$ is passed through the function. Each value from input is passed through in the same way to the output function on the right. For the output I computed the mean by taking the average of all the points, and drew the results with the dotted blue line. A solid blue line shows the actual mean for the point $x=0$. The output looks like a Gaussian, and is in fact a Gaussian. We can see that the variance in the output is larger than the variance in the input, and the mean has been shifted from 0 to 1, which is what we would expect given the transfer function $f(x)=2x+1$ The $2x$ affects the variance, and the $+1$ shifts the mean The computed mean, represented by the dotted blue line, is nearly equal to the actual mean. 
If we used more points in our computation we could get arbitrarily close to the actual value. Now let's look at a nonlinear function and see how it affects the probability distribution. Step5: This result may be somewhat surprising to you. The function looks "fairly" linear, but the probability distribution of the output is completely different from a Gaussian. Recall the equations for multiplying two univariate Gaussians Step6: The original data is clearly Gaussian, but the data passed through g2(x) is no longer normally distributed. There is a thick band near -3, and the points are unequally distributed on either side of the band. If you compare this to the pdf labelled 'output' in the previous chart you should be able to see how the pdf shape matches the distribution of g(data). Think of what this implies for the Kalman filter algorithm of the previous chapter. All of the equations assume that a Gaussian passed through the process function results in another Gaussian. If this is not true then all of the assumptions and guarantees of the Kalman filter do not hold. Let's look at what happens when we pass the output back through the function again, simulating the next step time step of the Kalman filter. Step7: As you can see the probability function is further distorted from the original Gaussian. However, the graph is still somewhat symmetric around x=0, let's see what the mean is. Step8: Let's compare that to the linear function that passes through (-2,3) and (2,-3), which is very close to the nonlinear function we have plotted. Using the equation of a line we have $$m=\frac{-3-3}{2-(-2)}=-1.5$$ Step9: Although the shapes of the output are very different, the mean and variance of each are almost the same. This may lead us to reasoning that perhaps we can ignore this problem if the nonlinear equation is 'close to' linear. To test that, we can iterate several times and then compare the results. Step10: Unfortunately the nonlinear version is not stable. It drifted significantly from the mean of 0, and the variance is half an order of magnitude larger. I minimized the issue by using a function that is quite close to a straight line. What happens if the function is $y(x)=-x^2$? Step11: Despite the curve being smooth and reasonably straight at $x=1$ the probability distribution of the output doesn't look anything like a Gaussian and the computed mean of the output is quite different than the value computed directly. This is not an unusual function - a ballistic object moves in a parabola, and this is the sort of nonlinearity your filter will need to handle. If you recall we've tried to track a ball and failed miserably. This graph should give you insight into why the filter performed so poorly. A 2D Example It is hard to look at probability distributions and reason about what will happen in a filter. So let's think about tracking an aircraft with radar. The estimate may have a covariance that looks like this Step12: What happens when we try to linearize this problem? The radar gives us a range to the aircraft. Suppose the radar is directly under the aircraft (x=10) and the next measurement states that the aircraft is 3 miles away (y=3). The positions that could match that measurement form a circle with radius 3 miles, like so. Step13: We can see by inspection that the probable position of the aircraft is somewhere near x=11.4, y=2.7 because that is where the covariance ellipse and range measurement overlap. But the range measurement is nonlinear so we have to linearize it. 
We haven't covered this material yet, but the Extended Kalman filter will linearize at the last position of the aircraft - (10,2). At x=10 the range measurement has y=3, and so we linearize at that point. Step14: Now we have a linear representation of the problem (literally a straight line) which we can solve. Unfortunately you can see that the intersection of the line and the covariance ellipse is a long way from the actual aircraft position.
Python Code: from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() Explanation: Table of Contents Nonlinear Filtering End of explanation import numpy as np from numpy.random import randn import matplotlib.pyplot as plt N = 5000 a = np.pi/2. + (randn(N) * 0.35) r = 50.0 + (randn(N) * 0.4) xs = r * np.cos(a) ys = r * np.sin(a) plt.scatter(xs, ys, label='Sensor', color='k', alpha=0.4, marker='.', s=1) xmean, ymean = sum(xs) / N, sum(ys) / N plt.scatter(0, 50, c='k', marker='o', s=200, label='Intuition') plt.scatter(xmean, ymean, c='r', marker='*', s=200, label='Mean') plt.axis('equal') plt.legend(); Explanation: Introduction The Kalman filter that we have developed uses linear equations, and so the filter can only handle linear problems. But the world is nonlinear, and so the classic filter that we have been studying to this point can have very limited utility. There can be nonlinearity in the process model. Suppose we want to track an object falling through the atmosphere. The acceleration of the object depends on the drag it encounters. Drag depends on air density, and the air density decreases with altitude. In one dimension this can be modelled with the nonlinear differential equation $$\ddot x = \frac{0.0034ge^{-x/22000}\dot x^2}{2\beta} - g$$ A second source of nonlinearity comes from the measurements. For example, radars measure the slant range to an object, and we are typically interested in the aircraft's position over the ground. We invoke Pythagoras and get the nonlinear equation: $$x=\sqrt{\mathtt{slant}^2 - \mathtt{altitude}^2}$$ These facts were not lost on the early adopters of the Kalman filter. Soon after Dr. Kalman published his paper people began working on how to extend the Kalman filter for nonlinear problems. It is almost true to state that the only equation anyone knows how to solve is $\mathbf{Ax}=\mathbf{b}$. We only really know how to do linear algebra. I can give you any linear set of equations and you can either solve it or prove that it has no solution. Anyone with formal education in math or physics has spent years learning various analytic ways to solve integrals, differential equations and so on. Yet even trivial physical systems produce equations that cannot be solved analytically. I can take an equation that you are able to integrate, insert a $\log$ term, and render it insolvable. This leads to jokes about physicists stating "assume a spherical cow on a frictionless surface in a vacuum...". Without making extreme simplifications most physical problems do not have analytic solutions. How do we do things like model airflow over an aircraft in a computer, or predict weather, or track missiles with a Kalman filter? We retreat to what we know: $\mathbf{Ax}=\mathbf{b}$. We find some way to linearize the problem, turning it into a set of linear equations, and then use linear algebra software packages to compute an approximate solution. Linearizing a nonlinear problem gives us inexact answers, and in a recursive algorithm like a Kalman filter or weather tracking system these small errors can sometimes reinforce each other at each step, quickly causing the algorithm to spit out nonsense. What we are about to embark upon is a difficult problem. There is not one obvious, correct, mathematically optimal solution anymore. 
We will be using approximations, we will be introducing errors into our computations, and we will forever be battling filters that diverge, that is, filters whose numerical errors overwhelm the solution. In the remainder of this short chapter I will illustrate the specific problems the nonlinear Kalman filter faces. You can only design a filter after understanding the particular problems the nonlinearity in your problem causes. Subsequent chapters will then teach you how to design and implement different kinds of nonlinear filters. The Problem with Nonlinearity The mathematics of the Kalman filter is beautiful in part due to the Gaussian equation being so special. It is nonlinear, but when we add and multiply them we get another Gaussian as a result. That is very rare. $\sin{x}*\sin{y}$ does not yield a $\sin$ as an output. What I mean by linearity may be obvious, but there are some subtleties. The mathematical requirements are twofold: additivity: $f(x+y) = f(x) + f(y)$ homogeneity: $f(ax) = af(x)$ This leads us to say that a linear system is defined as a system whose output is linearly proportional to the sum of all its inputs. A consequence of this is that to be linear if the input is zero than the output must also be zero. Consider an audio amp - if I sing into a microphone, and you start talking, the output should be the sum of our voices (input) scaled by the amplifier gain. But if the amplifier outputs a nonzero signal such as a hum for a zero input the additive relationship no longer holds. This is because linearity requires that $amp(voice) = amp(voice + 0)$. This clearly should give the same output, but if amp(0) is nonzero, then $$ \begin{aligned} amp(voice) &= amp(voice + 0) \ &= amp(voice) + amp(0) \ &= amp(voice) + non_zero_value \end{aligned} $$ which is clearly nonsense. Hence, an apparently linear equation such as $$L(f(t)) = f(t) + 1$$ is not linear because $L(0) = 1$. Be careful! An Intuitive Look at the Problem I particularly like the following way of looking at the problem, which I am borrowing from Dan Simon's Optimal State Estimation [1]. Consider a tracking problem where we get the range and bearing to a target, and we want to track its position. The reported distance is 50 km, and the reported angle is 90$^\circ$. Assume that the errors in both range and angle are distributed in a Gaussian manner. Given an infinite number of measurements what is the expected value of the position? I have been recommending using intuition to gain insight, so let's see how it fares for this problem. We might reason that since the mean of the range will be 50 km, and the mean of the angle will be 90$^\circ$, that the answer will be x=0 km, y=50 km. Let's plot that and find out. Here are 3000 points plotted with a normal distribution of the distance of 0.4 km, and the angle having a normal distribution of 0.35 radians. We compute the average of the all of the positions, and display it as a star. Our intuition is displayed with a large circle. End of explanation from numpy.random import normal data = normal(loc=0., scale=1., size=500000) plt.hist(2*data + 1, 1000); Explanation: We can see that out intuition failed us because the nonlinearity of the problem forced all of the errors to be biased in one direction. This bias, over many iterations, can cause the Kalman filter to diverge. Even if it doesn't diverge the solution will not be optimal. Linear approximations applied to nonlinear problems yields inaccurate results. 
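A quick numerical cross-check of that bias (not part of the original text): with the angle distributed as N(pi/2, sigma^2) and independent of the range, E[y] = E[r] * E[sin a] = 50 * exp(-sigma^2 / 2), which is roughly 47 km for sigma = 0.35, below the intuitive 50 km, while E[x] = 0.

import numpy as np
from numpy.random import randn

sigma_a, N = 0.35, 500000
a = np.pi/2. + randn(N) * sigma_a
r = 50.0 + randn(N) * 0.4

analytic_y = 50.0 * np.exp(-sigma_a**2 / 2.)    # E[r] * E[sin a]
print('sample mean y: %.2f  analytic: %.2f' % ((r * np.sin(a)).mean(), analytic_y))
print('sample mean x: %.2f  analytic: 0.00' % (r * np.cos(a)).mean())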
The Effect of Nonlinear Functions on Gaussians Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but let's use sampling. I will generate 500,000 points with a normal distribution, pass them through $f(x)$, and plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically. End of explanation from kf_book.book_plots import set_figsize, figsize from kf_book.nonlinear_plots import plot_nonlinear_func def g1(x): return 2*x+1 plot_nonlinear_func(data, g1) Explanation: This is an unsurprising result. The result of passing the Gaussian through $f(x)=2x+1$ is another Gaussian centered around 1. Let's look at the input, nonlinear function, and output at once. End of explanation def g2(x): return (np.cos(3*(x/2 + 0.7))) * np.sin(0.3*x) - 1.6*x plot_nonlinear_func(data, g2) Explanation: I explain how to plot Gaussians, and much more, in the Notebook Computing_and_Plotting_PDFs in the Supporting_Notebooks folder. You can also read it online here[1] The plot labeled 'Input' is the histogram of the original data. This is passed through the function $f(x)=2x+1$ which is displayed in the chart on the bottom left. The red lines shows how one value, $x=0$ is passed through the function. Each value from input is passed through in the same way to the output function on the right. For the output I computed the mean by taking the average of all the points, and drew the results with the dotted blue line. A solid blue line shows the actual mean for the point $x=0$. The output looks like a Gaussian, and is in fact a Gaussian. We can see that the variance in the output is larger than the variance in the input, and the mean has been shifted from 0 to 1, which is what we would expect given the transfer function $f(x)=2x+1$ The $2x$ affects the variance, and the $+1$ shifts the mean The computed mean, represented by the dotted blue line, is nearly equal to the actual mean. If we used more points in our computation we could get arbitrarily close to the actual value. Now let's look at a nonlinear function and see how it affects the probability distribution. End of explanation N = 30000 plt.subplot(121) plt.scatter(data[:N], range(N), alpha=.1, s=1.5) plt.title('Input') plt.subplot(122) plt.title('Output') plt.scatter(g2(data[:N]), range(N), alpha=.1, s=1.5); Explanation: This result may be somewhat surprising to you. The function looks "fairly" linear, but the probability distribution of the output is completely different from a Gaussian. Recall the equations for multiplying two univariate Gaussians: $$\begin{aligned} \mu &=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2} \ \sigma &= \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}} \end{aligned}$$ These equations do not hold for non-Gaussians, and certainly do not hold for the probability distribution shown in the 'Output' chart above. Here's another way to look at the same data as scatter plots. 
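As a side check that is easy to do by hand: for f(x) = 2x + 1 applied to a standard normal the output is exactly N(1, 4), so the sampled statistics can be compared against 1 and 4 directly (a fresh sample is drawn here so nothing above is disturbed).

from numpy.random import normal
check = normal(loc=0., scale=1., size=500000)
out = 2 * check + 1
print('sampled mean %.4f (exact 1),  variance %.4f (exact 4)' % (out.mean(), out.var()))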
End of explanation y = g2(data) plot_nonlinear_func(y, g2) Explanation: The original data is clearly Gaussian, but the data passed through g2(x) is no longer normally distributed. There is a thick band near -3, and the points are unequally distributed on either side of the band. If you compare this to the pdf labelled 'output' in the previous chart you should be able to see how the pdf shape matches the distribution of g(data). Think of what this implies for the Kalman filter algorithm of the previous chapter. All of the equations assume that a Gaussian passed through the process function results in another Gaussian. If this is not true then all of the assumptions and guarantees of the Kalman filter do not hold. Let's look at what happens when we pass the output back through the function again, simulating the next step time step of the Kalman filter. End of explanation print('input mean, variance: %.4f, %.4f' % (np.mean(data), np.var(data))) print('output mean, variance: %.4f, %.4f' % (np.mean(y), np.var(y))) Explanation: As you can see the probability function is further distorted from the original Gaussian. However, the graph is still somewhat symmetric around x=0, let's see what the mean is. End of explanation def g3(x): return -1.5 * x plot_nonlinear_func(data, g3) out = g3(data) print('output mean, variance: %.4f, %.4f' % (np.mean(out), np.var(out))) Explanation: Let's compare that to the linear function that passes through (-2,3) and (2,-3), which is very close to the nonlinear function we have plotted. Using the equation of a line we have $$m=\frac{-3-3}{2-(-2)}=-1.5$$ End of explanation out = g3(data) out2 = g2(data) for i in range(10): out = g3(out) out2 = g2(out2) print('linear output mean, variance: %.4f, %.4f' % (np.average(out), np.std(out)**2)) print('nonlinear output mean, variance: %.4f, %.4f' % (np.average(out2), np.std(out2)**2)) Explanation: Although the shapes of the output are very different, the mean and variance of each are almost the same. This may lead us to reasoning that perhaps we can ignore this problem if the nonlinear equation is 'close to' linear. To test that, we can iterate several times and then compare the results. End of explanation def g3(x): return -x*x data = normal(loc=1, scale=1, size=500000) plot_nonlinear_func(data, g3) Explanation: Unfortunately the nonlinear version is not stable. It drifted significantly from the mean of 0, and the variance is half an order of magnitude larger. I minimized the issue by using a function that is quite close to a straight line. What happens if the function is $y(x)=-x^2$? End of explanation import kf_book.nonlinear_internal as nonlinear_internal nonlinear_internal.plot1() Explanation: Despite the curve being smooth and reasonably straight at $x=1$ the probability distribution of the output doesn't look anything like a Gaussian and the computed mean of the output is quite different than the value computed directly. This is not an unusual function - a ballistic object moves in a parabola, and this is the sort of nonlinearity your filter will need to handle. If you recall we've tried to track a ball and failed miserably. This graph should give you insight into why the filter performed so poorly. A 2D Example It is hard to look at probability distributions and reason about what will happen in a filter. So let's think about tracking an aircraft with radar. 
The estimate may have a covariance that looks like this: End of explanation nonlinear_internal.plot2() Explanation: What happens when we try to linearize this problem? The radar gives us a range to the aircraft. Suppose the radar is directly under the aircraft (x=10) and the next measurement states that the aircraft is 3 miles away (y=3). The positions that could match that measurement form a circle with radius 3 miles, like so. End of explanation nonlinear_internal.plot3() Explanation: We can see by inspection that the probable position of the aircraft is somewhere near x=11.4, y=2.7 because that is where the covariance ellipse and range measurement overlap. But the range measurement is nonlinear so we have to linearize it. We haven't covered this material yet, but the Extended Kalman filter will linearize at the last position of the aircraft - (10,2). At x=10 the range measurement has y=3, and so we linearize at that point. End of explanation nonlinear_internal.plot4() Explanation: Now we have a linear representation of the problem (literally a straight line) which we can solve. Unfortunately you can see that the intersection of the line and the covariance ellipse is a long way from the actual aircraft position. End of explanation
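To make the linearization step concrete (a sketch of my own; the kf_book plotting helpers hide these details, and the radar location at (10, 0) is an assumption chosen to reproduce the geometry described above): replacing the range measurement h(x, y) = sqrt((x - 10)^2 + y^2) by its first-order expansion about the point (10, 3) turns the 3-mile circle into the tangent line y = 3, and the approximation error grows quickly as we move away from that point.

import numpy as np

def h(pos, radar=(10., 0.)):      # slant range to the assumed radar position
    dx, dy = pos[0] - radar[0], pos[1] - radar[1]
    return np.sqrt(dx*dx + dy*dy)

x0 = np.array([10., 3.])          # linearization point on the 3-mile range circle
r0 = h(x0)
H = np.array([(x0[0] - 10.) / r0, (x0[1] - 0.) / r0])   # gradient of h at x0 -> [0, 1]

for p in [np.array([10.5, 3.]), np.array([11.4, 2.7]), np.array([12., 3.])]:
    approx = r0 + np.dot(H, p - x0)   # first-order model of the range
    print('point (%.1f, %.1f): true range %.3f, linearized %.3f' % (p[0], p[1], h(p), approx))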
9,262
Given the following text description, write Python code to implement the functionality described below step by step Description: Collating with CollateX First we need to tell Python that we will be needing the Python library that holds the code for CollateXโ€ฆ Step1: Now we're ready to make a collation object. We do this with the slightly hermetic line of code Step2: Now we add some witnesses. Each witness gets a letter or name (siglum) that will identify it, and for each we add the literal text of the witness to the collation object Step3: We now tell CollateX to collate the witnesses, create a table to visualize the results, and orient the table vertically. Weโ€™ve created a variable called alignment_table to hold the CollateX output. Weโ€™ll discuss the Segmentation parameter later. Step4: When we tell CollateX to create a plain text table to hold the output, it isnโ€™t rendered by default (other CollateX output formats are), so we have to print() it in order to see it Step5: Usually we want to group our output instead of rendering each token-level set in separate cells. You can tell CollateX to construct those groups by setting the value of the segmentation parameter to Trueโ€”or by not specifying a value at all, since the default behavior is equal to the True value. We switched this option off in the example above to show that CollateX is aware of the individual token alignments, but in most cases youโ€™ll want to leave segmentation on. When we collate again with segmentation, we see the default grouping Step6: The aligment table visualization is CollateX's default way of rendering a collation result, but other supported output formats are html, html2 (color-coded HTML), xml (a generic XML that can be processed further with XSLT), tei (also somewhat generic, but based on parallel segmentation), and svg. SVG outputs a variant graph, which shows the variation as a directed graph. Step7: We can perform the same operation with files that we read from the file system. Note that we specify encoding="utf-8". Step8: Now let's check if these witnesses actually contain some text by printing a few of them. Step9: And now let's collate those witnesses and let's put the result up as an HTML-formatted alignment tableโ€ฆ Step10: Hmmโ€ฆ that is still a little hard to read. Wouldn't it be nice if we got a hint where the actual differences are? Sure, tryโ€ฆ Step11: And finally, we can also generate the variant graph for this collationโ€ฆ
Python Code: from collatex import * Explanation: Collating with CollateX First we need to tell Python that we will be needing the Python library that holds the code for CollateXโ€ฆ End of explanation collation = Collation() Explanation: Now we're ready to make a collation object. We do this with the slightly hermetic line of code: collation = Collation() Here the lower case collation is the arbitrary named variable that refers to a copy (officially it is called an instance) of the CollateX collation engine. The instruction tells Python to create a new instance of a Collation() object and call it collation. End of explanation collation.add_plain_witness( "A", "The quick brown fox jumped over the lazy dog.") collation.add_plain_witness( "B", "The brown fox jumped over the dog." ) collation.add_plain_witness( "C", "The bad fox jumped over the lazy dog." ) Explanation: Now we add some witnesses. Each witness gets a letter or name (siglum) that will identify it, and for each we add the literal text of the witness to the collation object: End of explanation alignment_table = collate(collation, layout='vertical', segmentation=False ) Explanation: We now tell CollateX to collate the witnesses, create a table to visualize the results, and orient the table vertically. Weโ€™ve created a variable called alignment_table to hold the CollateX output. Weโ€™ll discuss the Segmentation parameter later. End of explanation print( alignment_table ) Explanation: When we tell CollateX to create a plain text table to hold the output, it isnโ€™t rendered by default (other CollateX output formats are), so we have to print() it in order to see it: End of explanation alignment_table = collate(collation, layout='vertical' ) print( alignment_table ) Explanation: Usually we want to group our output instead of rendering each token-level set in separate cells. You can tell CollateX to construct those groups by setting the value of the segmentation parameter to Trueโ€”or by not specifying a value at all, since the default behavior is equal to the True value. We switched this option off in the example above to show that CollateX is aware of the individual token alignments, but in most cases youโ€™ll want to leave segmentation on. When we collate again with segmentation, we see the default grouping: End of explanation graph = collate( collation, output="svg", segmentation=True ) Explanation: The aligment table visualization is CollateX's default way of rendering a collation result, but other supported output formats are html, html2 (color-coded HTML), xml (a generic XML that can be processed further with XSLT), tei (also somewhat generic, but based on parallel segmentation), and svg. SVG outputs a variant graph, which shows the variation as a directed graph. 
End of explanation collation = Collation() witness_1859 = open( "../fixtures/Darwin/txt/darwin1859_par1.txt", encoding='utf-8' ).read() witness_1860 = open( "../fixtures/Darwin/txt/darwin1860_par1.txt", encoding='utf-8' ).read() witness_1861 = open( "../fixtures/Darwin/txt/darwin1861_par1.txt", encoding='utf-8' ).read() witness_1866 = open( "../fixtures/Darwin/txt/darwin1866_par1.txt", encoding='utf-8' ).read() witness_1869 = open( "../fixtures/Darwin/txt/darwin1869_par1.txt", encoding='utf-8' ).read() witness_1872 = open( "../fixtures/Darwin/txt/darwin1872_par1.txt", encoding='utf-8' ).read() collation.add_plain_witness( "1859", witness_1859 ) collation.add_plain_witness( "1860", witness_1860 ) collation.add_plain_witness( "1861", witness_1861 ) collation.add_plain_witness( "1866", witness_1866 ) collation.add_plain_witness( "1869", witness_1869 ) collation.add_plain_witness( "1872", witness_1872 ) Explanation: We can perform the same operation with files that we read from the file system. Note that we specify encoding="utf-8". End of explanation print( witness_1859 ) print( witness_1860 ) Explanation: Now let's check if these witnesses actually contain some text by printing a few of them. End of explanation alignment_table = collate(collation, layout='vertical', output='html') Explanation: And now let's collate those witnesses and let's put the result up as an HTML-formatted alignment tableโ€ฆ End of explanation alignment_table = collate(collation, layout='vertical', output='html2') Explanation: Hmmโ€ฆ that is still a little hard to read. Wouldn't it be nice if we got a hint where the actual differences are? Sure, tryโ€ฆ End of explanation graph = collate( collation, output="svg" ) Explanation: And finally, we can also generate the variant graph for this collationโ€ฆ End of explanation
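As an aside on the other output formats listed earlier (html, html2, xml, tei, svg), here is a quick sketch of requesting the two machine-readable serializations from the same collation object. Exactly what markup comes back depends on the installed CollateX version, so treat this as illustrative rather than normative.

xml_output = collate(collation, output="xml")   # generic XML, suitable for XSLT post-processing
print(xml_output)

tei_output = collate(collation, output="tei")   # TEI parallel-segmentation flavour
print(tei_output)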
9,263
Given the following text description, write Python code to implement the functionality described below step by step Description: Python for Everyone!<br/>Oregon Curriculum Network Extended Precision with the Native Decimal Type With LaTeX and Generator Functions <img src="https Step2: Lets show setting precision to a thousand places within a scope defined by decimal.localcontext. We set precision internally to the scope. By default, precision is to 28 places. We set n to 1 followed by 102 zeros, so a very large number. The resulting computation matches a published value for e to 100 decimal places. Step3: In the context below, the value of n starts at 10 and then gets two more zeros every time around the loop. The yield keyword is similar to return in handing back an object, however a generator function then pauses to pick up where it left off when nudged by next(), (which triggers __next__ internally). Generator functions do not forget their internal state as they advance through next values. Note that when a Decimal type object operates with an integer, that integer is coerced (cast) as a Decimal object. Step5: <img src="https
Python Code: %%latex \begin{align} e = lim_{n \to \infty} (1 + 1/n)^n \end{align} from math import e, pi print(e) # as a floating point number print(pi) Explanation: Python for Everyone!<br/>Oregon Curriculum Network Extended Precision with the Native Decimal Type With LaTeX and Generator Functions <img src="https://c8.staticflickr.com/6/5691/30269841575_8bea763a54.jpg" alt="TAOCP" style="width: 50%; height: 50%"/> The Python Standard Library provides a decimal module containing the class Decimal. A Decimal object behaves according to the base 10 algorithms we learn in school. The precision i.e. number of decimal places to which computations are carried out, is set globally, or within the scope of a context manager. Note also that Jupyter Notebooks are able to render LaTeX when commanded to do so with the %%latex magic command. As a first example, here is an expression for the mathematical constant e. End of explanation import decimal with decimal.localcontext() as ctx: # context manager ctx.prec = 1000 n = decimal.Decimal(1e102) e = (1 + 1/n) ** n e_1000_places = 2.7182818284590452353602874713526624977572470936999595749669 6762772407663035354759457138217852516642742746639193200305992181741359662904357 2900334295260595630738132328627943490763233829880753195251019011573834187930702 1540891499348841675092447614606680822648001684774118537423454424371075390777449 9206955170276183860626133138458300075204493382656029760673711320070932870912744 3747047230696977209310141692836819025515108657463772111252389784425056953696770 7854499699679468644549059879316368892300987931277361782154249992295763514822082 6989519366803318252886939849646510582093923982948879332036250944311730123819706 8416140397019837679320683282376464804295311802328782509819455815301756717361332 0698112509961818815930416903515988885193458072738667385894228792284998920868058 2574927961048419844436346324496848756023362482704197862320900216099023530436994 1849146314093431738143640546253152096183690888707016768396424378140592714563549 0613031072085103837505101157477041718986106873969655212671546889570350354 e_1000_places = e_1000_places.replace("\n",str()) str(e)[2:103] == e_1000_places[2:103] # skipping "2." and going to 100 decimals Explanation: Lets show setting precision to a thousand places within a scope defined by decimal.localcontext. We set precision internally to the scope. By default, precision is to 28 places. We set n to 1 followed by 102 zeros, so a very large number. The resulting computation matches a published value for e to 100 decimal places. End of explanation with decimal.localcontext() as ctx: # context manager ctx.prec = 1000 def converge(): # generator function n = decimal.Decimal('10') while True: yield (1 + 1/n) ** n n = n * 100 # two more zeros f = converge() for _ in range(9): next(f) # f.__next__() <--- not quite like Python 2.x (f.next()) r = next(f) r str(r)[:20] == e_1000_places[:20] Explanation: In the context below, the value of n starts at 10 and then gets two more zeros every time around the loop. The yield keyword is similar to return in handing back an object, however a generator function then pauses to pick up where it left off when nudged by next(), (which triggers __next__ internally). Generator functions do not forget their internal state as they advance through next values. Note that when a Decimal type object operates with an integer, that integer is coerced (cast) as a Decimal object. 
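A tiny standalone illustration of that coercion (a sketch, separate from the cells above): the integer operand is converted to a Decimal before the arithmetic happens, and division is then carried out at the context's current precision.
from decimal import Decimal
print(Decimal('10') + 1)   # the int 1 is coerced, giving Decimal('11')
print(1 / Decimal('3'))    # division honours the current precision setting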
End of explanation %%latex \begin{align} \frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum^\infty_{k=0} \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}} \end{align} %%latex \begin{align*} \frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum_{n = 0}^{\infty}\frac{(4n)!(1103 + 26390n)}{4^{4n}(n!)^{4}99^{4n}} = A\sum_{n = 0}^{\infty}B_{n}C_{n},\\ A = \frac{2\sqrt{2}}{9801},\,\\ B_{n} = \frac{(4n)!(1103 + 26390n)}{4^{4n}(n!)^{4}},\,\\ C_{n} = \frac{1}{99^{4n}} \end{align*} from math import factorial from decimal import Decimal as D decimal.getcontext().prec=100 A = (2 * D('2').sqrt()) / 9801 A def B(): n = 0 while True: numerator = factorial(4 * n) * (D(1103) + 26390 * n) denominator = (4 ** (4*n))*(factorial(n))**4 yield numerator / denominator n += 1 def C(): n = 0 while True: yield 1 / (D('99')**(4*n)) n += 1 def Pi(): Bn = B() Cn = C() the_sum = 0 while True: the_sum += next(Bn) * next(Cn) yield 1/(A * the_sum) pi = Pi() next(pi) next(pi) next(pi) pi_1000_places = 3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679 8214808651 3282306647 0938446095 5058223172 5359408128 4811174502 8410270193 8521105559 6446229489 5493038196 4428810975 6659334461 2847564823 3786783165 2712019091 4564856692 3460348610 4543266482 1339360726 0249141273 7245870066 0631558817 4881520920 9628292540 9171536436 7892590360 0113305305 4882046652 1384146951 9415116094 3305727036 5759591953 0921861173 8193261179 3105118548 0744623799 6274956735 1885752724 8912279381 8301194912 9833673362 4406566430 8602139494 6395224737 1907021798 6094370277 0539217176 2931767523 8467481846 7669405132 0005681271 4526356082 7785771342 7577896091 7363717872 1468440901 2249534301 4654958537 1050792279 6892589235 4201995611 2129021960 8640344181 5981362977 4771309960 5187072113 4999999837 2978049951 0597317328 1609631859 5024459455 3469083026 4252230825 3344685035 2619311881 7101000313 7838752886 5875332083 8142061717 7669147303 5982534904 2875546873 1159562863 8823537875 9375195778 1857780532 1712268066 1300192787 6611195909 2164201989 pi_1000_places = pi_1000_places.replace(" ","").replace("\n","") r = next(pi) str(r)[:20] == pi_1000_places[:20] Explanation: <img src="https://sciencenode.org/img/img_2012/stamp.JPG" alt="Ramanujan Postage Stamp" style="width: 50%; height: 50%"/> The fancier LaTeX below renders a famous equation by Ramanujan, which has been shown to converge to 1/ฯ€ and therefore ฯ€ very quickly, relative to many other algorithms. I don't think anyone understands how some random guy could think up such a miraculaous equation. End of explanation
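Ramanujan's series is famously efficient: each additional term contributes roughly eight more correct decimal digits. A hedged sketch that reuses the Pi() generator and the cleaned pi_1000_places string defined above to watch the agreement grow:
pi_gen = Pi()
for terms in range(1, 5):
    estimate = str(next(pi_gen))
    matched = 0
    while matched < len(estimate) and estimate[matched] == pi_1000_places[matched]:
        matched += 1
    print(terms, "term(s):", matched, "leading characters agree")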
9,264
Given the following text description, write Python code to implement the functionality described below step by step
Description: Detecting statistically significant differences in the gene expression levels of cancer patients This assignment will help you get a better grasp of multiple hypothesis testing methods and will let you apply your knowledge to data from a real biological study. In this assignment you Step1: Part 2 Step2: Part 3
Python Code: import pandas as pd
import scipy.stats
df = pd.read_csv("gene_high_throughput_sequencing.csv")
control_df = df[df.Diagnosis == 'normal']
neoplasia_df = df[df.Diagnosis == 'early neoplasia']
cancer_df = df[df.Diagnosis == 'cancer']
# scipy.stats.ttest_ind(data.Placebo, data.Methylphenidate, equal_var = False)
genes = filter(lambda x: x not in ['Patient_id', 'Diagnosis'], df.columns.tolist())
control_vs_neoplasia = {}
neoplasia_vs_cancer = {}
for gene in genes:
    control_vs_neoplasia[gene] = scipy.stats.ttest_ind(control_df[gene], neoplasia_df[gene], equal_var = False).pvalue
    neoplasia_vs_cancer[gene] = scipy.stats.ttest_ind(cancer_df[gene], neoplasia_df[gene], equal_var = False).pvalue
print control_df['LOC643837'],neoplasia_df['LOC643837']
scipy.stats.ttest_ind(control_df['LOC643837'], neoplasia_df['LOC643837'], equal_var = False).pvalue
control_vs_neoplasia_df = pd.DataFrame.from_dict(control_vs_neoplasia, orient = 'index')
control_vs_neoplasia_df.columns = ['control_vs_neoplasia_pvalue']
neoplasia_vs_cancer_df = pd.DataFrame.from_dict(neoplasia_vs_cancer, orient = 'index')
neoplasia_vs_cancer_df.columns = ['neoplasia_vs_cancer_pvalue']
neoplasia_vs_cancer_df
pvalue_df = control_vs_neoplasia_df.join(neoplasia_vs_cancer_df)
pvalue_df.head()
pvalue_df[pvalue_df.control_vs_neoplasia_pvalue < 0.05].shape
pvalue_df[pvalue_df.neoplasia_vs_cancer_pvalue < 0.05].shape
Explanation: Detecting statistically significant differences in the gene expression levels of cancer patients This assignment will help you get a better grasp of multiple hypothesis testing methods and will let you apply your knowledge to data from a real biological study. In this assignment you will: recall what Student's t-test is and what it is used for; apply multiple-testing techniques and see with your own eyes how they behave on real data; get a feel for how the results differ between the various multiple-testing correction methods. Main libraries and methods used: the scipy library and its core statistical functions: http://docs.scipy.org/doc/scipy/reference/stats.html#statistical-functions ; the statsmodels library for multiple-comparison correction methods: http://statsmodels.sourceforge.net/devel/stats.html ; an article that walks through examples of using statsmodels for multiple hypothesis testing: http://jpktd.blogspot.ru/2013/04/multiple-testing-p-value-corrections-in.html Description of the data The data for this task come from a study carried out at the Stanford School of Medicine. 
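As a quick aside, here is a tiny self-contained illustration of the test call used in the cells above, Welch's two-sample t-test (scipy.stats.ttest_ind with equal_var=False), run on made-up numbers; it is a sketch only and has nothing to do with the gene data:
import numpy as np
import scipy.stats
a = np.array([2.1, 2.5, 1.9, 2.3, 2.8])
b = np.array([3.0, 3.4, 2.9, 3.6, 3.1])
result = scipy.stats.ttest_ind(a, b, equal_var=False)
print(result.statistic, result.pvalue)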
ะ’ ะธััะปะตะดะพะฒะฐะฝะธะธ ะฑั‹ะปะฐ ะฟั€ะตะดะฟั€ะธะฝัั‚ะฐ ะฟะพะฟั‹ั‚ะบะฐ ะฒั‹ัะฒะธั‚ัŒ ะฝะฐะฑะพั€ ะณะตะฝะพะฒ, ะบะพั‚ะพั€ั‹ะต ะฟะพะทะฒะพะปะธะปะธ ะฑั‹ ะฑะพะปะตะต ั‚ะพั‡ะฝะพ ะดะธะฐะณะฝะพัั‚ะธั€ะพะฒะฐั‚ัŒ ะฒะพะทะฝะธะบะฝะพะฒะตะฝะธะต ั€ะฐะบะฐ ะณั€ัƒะดะธ ะฝะฐ ัะฐะผั‹ั… ั€ะฐะฝะฝะธั… ัั‚ะฐะดะธัั…. ะ’ ัะบัะฟะตั€ะธะผะตะฝั‚ะต ะฟั€ะธะฝะธะผะฐะปะธ ัƒั‡ะฐัั‚ะธะต 24 ั‡ะตะปะพะฒะตะบ, ัƒ ะบะพั‚ะพั€ั‹ั… ะฝะต ะฑั‹ะปะพ ั€ะฐะบะฐ ะณั€ัƒะดะธ (normal), 25 ั‡ะตะปะพะฒะตะบ, ัƒ ะบะพั‚ะพั€ั‹ั… ัั‚ะพ ะทะฐะฑะพะปะตะฒะฐะฝะธะต ะฑั‹ะปะพ ะดะธะฐะณะฝะพัั‚ะธั€ะพะฒะฐะฝะพ ะฝะฐ ั€ะฐะฝะฝะตะน ัั‚ะฐะดะธะธ (early neoplasia), ะธ 23 ั‡ะตะปะพะฒะตะบะฐ ั ัะธะปัŒะฝะพ ะฒั‹ั€ะฐะถะตะฝะฝั‹ะผะธ ัะธะผะฟั‚ะพะผะฐะผะธ (cancer). ะฃั‡ะตะฝั‹ะต ะฟั€ะพะฒะตะปะธ ัะตะบะฒะตะฝะธั€ะพะฒะฐะฝะธะต ะฑะธะพะปะพะณะธั‡ะตัะบะพะณะพ ะผะฐั‚ะตั€ะธะฐะปะฐ ะธัะฟั‹ั‚ัƒะตะผั‹ั…, ั‡ั‚ะพะฑั‹ ะฟะพะฝัั‚ัŒ, ะบะฐะบะธะต ะธะท ัั‚ะธั… ะณะตะฝะพะฒ ะฝะฐะธะฑะพะปะตะต ะฐะบั‚ะธะฒะฝั‹ ะฒ ะบะปะตั‚ะบะฐั… ะฑะพะปัŒะฝั‹ั… ะปัŽะดะตะน. ะกะตะบะฒะตะฝะธั€ะพะฒะฐะฝะธะต โ€” ัั‚ะพ ะพะฟั€ะตะดะตะปะตะฝะธะต ัั‚ะตะฟะตะฝะธ ะฐะบั‚ะธะฒะฝะพัั‚ะธ ะณะตะฝะพะฒ ะฒ ะฐะฝะฐะปะธะทะธั€ัƒะตะผะพะผ ะพะฑั€ะฐะทั†ะต ั ะฟะพะผะพั‰ัŒัŽ ะฟะพะดัั‡ั‘ั‚ะฐ ะบะพะปะธั‡ะตัั‚ะฒะฐ ัะพะพั‚ะฒะตั‚ัั‚ะฒัƒัŽั‰ะตะน ะบะฐะถะดะพะผัƒ ะณะตะฝัƒ ะ ะะš. ะ’ ะดะฐะฝะฝั‹ั… ะดะปั ัั‚ะพะณะพ ะทะฐะดะฐะฝะธั ะฒั‹ ะฝะฐะนะดะตั‚ะต ะธะผะตะฝะฝะพ ัั‚ัƒ ะบะพะปะธั‡ะตัั‚ะฒะตะฝะฝัƒัŽ ะผะตั€ัƒ ะฐะบั‚ะธะฒะฝะพัั‚ะธ ะบะฐะถะดะพะณะพ ะธะท 15748 ะณะตะฝะพะฒ ัƒ ะบะฐะถะดะพะณะพ ะธะท 72 ั‡ะตะปะพะฒะตะบ, ะฟั€ะธะฝะธะผะฐะฒัˆะธั… ัƒั‡ะฐัั‚ะธะต ะฒ ัะบัะฟะตั€ะธะผะตะฝั‚ะต. ะ’ะฐะผ ะฝัƒะถะฝะพ ะฑัƒะดะตั‚ ะพะฟั€ะตะดะตะปะธั‚ัŒ ั‚ะต ะณะตะฝั‹, ะฐะบั‚ะธะฒะฝะพัั‚ัŒ ะบะพั‚ะพั€ั‹ั… ัƒ ะปัŽะดะตะน ะฒ ั€ะฐะทะฝั‹ั… ัั‚ะฐะดะธัั… ะทะฐะฑะพะปะตะฒะฐะฝะธั ะพั‚ะปะธั‡ะฐะตั‚ัั ัั‚ะฐั‚ะธัั‚ะธั‡ะตัะบะธ ะทะฝะฐั‡ะธะผะพ. ะšั€ะพะผะต ั‚ะพะณะพ, ะฒะฐะผ ะฝัƒะถะฝะพ ะฑัƒะดะตั‚ ะพั†ะตะฝะธั‚ัŒ ะฝะต ั‚ะพะปัŒะบะพ ัั‚ะฐั‚ะธัั‚ะธั‡ะตัะบัƒัŽ, ะฝะพ ะธ ะฟั€ะฐะบั‚ะธั‡ะตัะบัƒัŽ ะทะฝะฐั‡ะธะผะพัั‚ัŒ ัั‚ะธั… ั€ะตะทัƒะปัŒั‚ะฐั‚ะพะฒ, ะบะพั‚ะพั€ะฐั ั‡ะฐัั‚ะพ ะธัะฟะพะปัŒะทัƒะตั‚ัั ะฒ ะฟะพะดะพะฑะฝั‹ั… ะธััะปะตะดะพะฒะฐะฝะธัั…. ะ”ะธะฐะณะฝะพะท ั‡ะตะปะพะฒะตะบะฐ ัะพะดะตั€ะถะธั‚ัั ะฒ ัั‚ะพะปะฑั†ะต ะฟะพะด ะฝะฐะทะฒะฐะฝะธะตะผ "Diagnosis". ะŸั€ะฐะบั‚ะธั‡ะตัะบะฐั ะทะฝะฐั‡ะธะผะพัั‚ัŒ ะธะทะผะตะฝะตะฝะธั ะฆะตะปัŒ ะธััะปะตะดะพะฒะฐะฝะธะน โ€” ะฝะฐะนั‚ะธ ะณะตะฝั‹, ัั€ะตะดะฝัั ัะบัะฟั€ะตััะธั ะบะพั‚ะพั€ั‹ั… ะพั‚ะปะธั‡ะฐะตั‚ัั ะฝะต ั‚ะพะปัŒะบะพ ัั‚ะฐั‚ะธัั‚ะธั‡ะตัะบะธ ะทะฝะฐั‡ะธะผะพ, ะฝะพ ะธ ะดะพัั‚ะฐั‚ะพั‡ะฝะพ ัะธะปัŒะฝะพ. ะ’ ัะบัะฟั€ะตััะธะพะฝะฝั‹ั… ะธััะปะตะดะพะฒะฐะฝะธัั… ะดะปั ัั‚ะพะณะพ ั‡ะฐัั‚ะพ ะธัะฟะพะปัŒะทัƒะตั‚ัั ะผะตั‚ั€ะธะบะฐ, ะบะพั‚ะพั€ะฐั ะฝะฐะทั‹ะฒะฐะตั‚ัั fold change (ะบั€ะฐั‚ะฝะพัั‚ัŒ ะธะทะผะตะฝะตะฝะธั). ะžะฟั€ะตะดะตะปัะตั‚ัั ะพะฝะฐ ัะปะตะดัƒัŽั‰ะธะผ ะพะฑั€ะฐะทะพะผ: $$F_{c}(C,T) = \begin{cases} \frac{T}{C}, T>C \ -\frac{C}{T}, T<C \end{cases}$$ ะณะดะต C,T โ€” ัั€ะตะดะฝะธะต ะทะฝะฐั‡ะตะฝะธั ัะบัะฟั€ะตััะธะธ ะณะตะฝะฐ ะฒ control ะธ treatment ะณั€ัƒะฟะฟะฐั… ัะพะพั‚ะฒะตั‚ัั‚ะฒะตะฝะฝะพ. ะŸะพ ััƒั‚ะธ, fold change ะฟะพะบะฐะทั‹ะฒะฐะตั‚, ะฒะพ ัะบะพะปัŒะบะพ ั€ะฐะท ะพั‚ะปะธั‡ะฐัŽั‚ัั ัั€ะตะดะฝะธะต ะดะฒัƒั… ะฒั‹ะฑะพั€ะพะบ. ะ˜ะฝัั‚ั€ัƒะบั†ะธะธ ะบ ั€ะตัˆะตะฝะธัŽ ะทะฐะดะฐั‡ะธ ะ—ะฐะดะฐะฝะธะต ัะพัั‚ะพะธั‚ ะธะท ั‚ั€ั‘ั… ั‡ะฐัั‚ะตะน. ะ•ัะปะธ ะฝะต ัะบะฐะทะฐะฝะพ ะพะฑั€ะฐั‚ะฝะพะต, ั‚ะพ ัƒั€ะพะฒะตะฝัŒ ะทะฝะฐั‡ะธะผะพัั‚ะธ ะฝัƒะถะฝะพ ะฟั€ะธะฝัั‚ัŒ ั€ะฐะฒะฝั‹ะผ 0.05. 
ะงะฐัั‚ัŒ 1: ะฟั€ะธะผะตะฝะตะฝะธะต t-ะบั€ะธั‚ะตั€ะธั ะกั‚ัŒัŽะดะตะฝั‚ะฐ ะ’ ะฟะตั€ะฒะพะน ั‡ะฐัั‚ะธ ะฒะฐะผ ะฝัƒะถะฝะพ ะฑัƒะดะตั‚ ะฟั€ะธะผะตะฝะธั‚ัŒ ะบั€ะธั‚ะตั€ะธะน ะกั‚ัŒัŽะดะตะฝั‚ะฐ ะดะปั ะฟั€ะพะฒะตั€ะบะธ ะณะธะฟะพั‚ะตะทั‹ ะพ ั€ะฐะฒะตะฝัั‚ะฒะต ัั€ะตะดะฝะธั… ะฒ ะดะฒัƒั… ะฝะตะทะฐะฒะธัะธะผั‹ั… ะฒั‹ะฑะพั€ะบะฐั…. ะŸั€ะธะผะตะฝะธั‚ัŒ ะบั€ะธั‚ะตั€ะธะน ะดะปั ะบะฐะถะดะพะณะพ ะณะตะฝะฐ ะฝัƒะถะฝะพ ะฑัƒะดะตั‚ ะดะฒะฐะถะดั‹: ะดะปั ะณั€ัƒะฟะฟ normal (control) ะธ early neoplasia (treatment) ะดะปั ะณั€ัƒะฟะฟ early neoplasia (control) ะธ cancer (treatment) ะ’ ะบะฐั‡ะตัั‚ะฒะต ะพั‚ะฒะตั‚ะฐ ะฒ ัั‚ะพะน ั‡ะฐัั‚ะธ ะทะฐะดะฐะฝะธั ะฝะตะพะฑั…ะพะดะธะผะพ ัƒะบะฐะทะฐั‚ัŒ ะบะพะปะธั‡ะตัั‚ะฒะพ ัั‚ะฐั‚ะธัั‚ะธั‡ะตัะบะธ ะทะฝะฐั‡ะธะผั‹ั… ะพั‚ะปะธั‡ะธะน, ะบะพั‚ะพั€ั‹ะต ะฒั‹ ะฝะฐัˆะปะธ ั ะฟะพะผะพั‰ัŒัŽ t-ะบั€ะธั‚ะตั€ะธั ะกั‚ัŒัŽะดะตะฝั‚ะฐ, ั‚ะพ ะตัั‚ัŒ ั‡ะธัะปะพ ะณะตะฝะพะฒ, ัƒ ะบะพั‚ะพั€ั‹ั… p-value ัั‚ะพะณะพ ั‚ะตัั‚ะฐ ะพะบะฐะทะฐะปัั ะผะตะฝัŒัˆะต, ั‡ะตะผ ัƒั€ะพะฒะตะฝัŒ ะทะฝะฐั‡ะธะผะพัั‚ะธ. End of explanation import statsmodels.stats.multitest as smm pvalue_df['control_mean_expression'] = control_df.mean() pvalue_df['neoplasia_mean_expression'] = neoplasia_df.mean() pvalue_df['cancer_mean_expression'] = cancer_df.mean() def abs_fold_change(c, t): if t > c: return t/c else: return c/t pvalue_df['control_vs_neoplasia_fold_change'] = map(lambda x, y: abs_fold_change(x, y), pvalue_df.control_mean_expression, pvalue_df.neoplasia_mean_expression ) pvalue_df['neoplasia_vs_cancer_fold_change'] = map(lambda x, y: abs_fold_change(x, y), pvalue_df.neoplasia_mean_expression, pvalue_df.cancer_mean_expression ) pvalue_df['control_vs_neoplasia_rej_hb'] = smm.multipletests(pvalue_df.control_vs_neoplasia_pvalue, alpha=0.025, method='h')[0] pvalue_df['neoplasia_vs_cancer_rej_hb'] = smm.multipletests(pvalue_df.neoplasia_vs_cancer_pvalue, alpha=0.025, method='h')[0] pvalue_df[(pvalue_df.control_vs_neoplasia_rej_hb) & (pvalue_df.control_vs_neoplasia_fold_change > 1.5)].shape pvalue_df[(pvalue_df.neoplasia_vs_cancer_rej_hb) & (pvalue_df.neoplasia_vs_cancer_fold_change > 1.5)].shape Explanation: ะงะฐัั‚ัŒ 2: ะฟะพะฟั€ะฐะฒะบะฐ ะผะตั‚ะพะดะพะผ ะฅะพะปะผะฐ ะ”ะปั ัั‚ะพะน ั‡ะฐัั‚ะธ ะทะฐะดะฐะฝะธั ะฒะฐะผ ะฟะพะฝะฐะดะพะฑะธั‚ัั ะผะพะดัƒะปัŒ multitest ะธะท statsmodels. import statsmodels.stats.multitest as smm ะ’ ัั‚ะพะน ั‡ะฐัั‚ะธ ะทะฐะดะฐะฝะธั ะฝัƒะถะฝะพ ะฑัƒะดะตั‚ ะฟั€ะธะผะตะฝะธั‚ัŒ ะฟะพะฟั€ะฐะฒะบัƒ ะฅะพะปะผะฐ ะดะปั ะฟะพะปัƒั‡ะธะฒัˆะธั…ัั ะดะฒัƒั… ะฝะฐะฑะพั€ะพะฒ ะดะพัั‚ะธะณะฐะตะผั‹ั… ัƒั€ะพะฒะฝะตะน ะทะฝะฐั‡ะธะผะพัั‚ะธ ะธะท ะฟั€ะตะดั‹ะดัƒั‰ะตะน ั‡ะฐัั‚ะธ. ะžะฑั€ะฐั‚ะธั‚ะต ะฒะฝะธะผะฐะฝะธะต, ั‡ั‚ะพ ะฟะพัะบะพะปัŒะบัƒ ะฒั‹ ะฑัƒะดะตั‚ะต ะดะตะปะฐั‚ัŒ ะฟะพะฟั€ะฐะฒะบัƒ ะดะปั ะบะฐะถะดะพะณะพ ะธะท ะดะฒัƒั… ะฝะฐะฑะพั€ะพะฒ p-value ะพั‚ะดะตะปัŒะฝะพ, ั‚ะพ ะฟั€ะพะฑะปะตะผะฐ, ัะฒัะทะฐะฝะฝะฐั ั ะผะฝะพะถะตัั‚ะฒะตะฝะฝะพะน ะฟั€ะพะฒะตั€ะบะพะน ะพัั‚ะฐะฝะตั‚ัั. ะ”ะปั ั‚ะพะณะพ, ั‡ั‚ะพะฑั‹ ะตะต ัƒัั‚ั€ะฐะฝะธั‚ัŒ, ะดะพัั‚ะฐั‚ะพั‡ะฝะพ ะฒะพัะฟะพะปัŒะทะพะฒะฐั‚ัŒัั ะฟะพะฟั€ะฐะฒะบะพะน ะ‘ะพะฝั„ะตั€ั€ะพะฝะธ, ั‚ะพ ะตัั‚ัŒ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ัƒั€ะพะฒะตะฝัŒ ะทะฝะฐั‡ะธะผะพัั‚ะธ 0.05 / 2 ะฒะผะตัั‚ะพ 0.05 ะดะปั ะดะฐะปัŒะฝะตะนัˆะตะณะพ ัƒั‚ะพั‡ะฝะตะฝะธั ะทะฝะฐั‡ะตะฝะธะน p-value c ะฟะพะผะพั‰ัŒัŽ ะผะตั‚ะพะดะฐ ะฅะพะปะผะฐ. ะ’ ะบะฐั‡ะตัั‚ะฒะต ะพั‚ะฒะตั‚ะฐ ะบ ัั‚ะพะผัƒ ะทะฐะดะฐะฝะธัŽ ั‚ั€ะตะฑัƒะตั‚ัั ะฒะฒะตัั‚ะธ ะบะพะปะธั‡ะตัั‚ะฒะพ ะทะฝะฐั‡ะธะผั‹ั… ะพั‚ะปะธั‡ะธะน ะฒ ะบะฐะถะดะพะน ะณั€ัƒะฟะฟะต ะฟะพัะปะต ั‚ะพะณะพ, ะบะฐะบ ะฟั€ะพะธะทะฒะตะดะตะฝะฐ ะบะพั€ั€ะตะบั†ะธั ะฅะพะปะผะฐ-ะ‘ะพะฝั„ะตั€ั€ะพะฝะธ. 
ะŸั€ะธั‡ะตะผ ัั‚ะพ ั‡ะธัะปะพ ะฝัƒะถะฝะพ ะฒะฒะตัั‚ะธ ั ัƒั‡ะตั‚ะพะผ ะฟั€ะฐะบั‚ะธั‡ะตัะบะพะน ะทะฝะฐั‡ะธะผะพัั‚ะธ: ะฟะพัั‡ะธั‚ะฐะนั‚ะต ะดะปั ะบะฐะถะดะพะณะพ ะทะฝะฐั‡ะธะผะพะณะพ ะธะทะผะตะฝะตะฝะธั fold change ะธ ะฒั‹ะฟะธัˆะธั‚ะต ะฒ ะพั‚ะฒะตั‚ ั‡ะธัะปะพ ั‚ะฐะบะธั… ะทะฝะฐั‡ะธะผั‹ั… ะธะทะผะตะฝะตะฝะธะน, ะฐะฑัะพะปัŽั‚ะฝะพะต ะทะฝะฐั‡ะตะฝะธะต fold change ะบะพั‚ะพั€ั‹ั… ะฑะพะปัŒัˆะต, ั‡ะตะผ 1.5. ะžะฑั€ะฐั‚ะธั‚ะต ะฒะฝะธะผะฐะฝะธะต, ั‡ั‚ะพ ะฟั€ะธะผะตะฝัั‚ัŒ ะฟะพะฟั€ะฐะฒะบัƒ ะฝะฐ ะผะฝะพะถะตัั‚ะฒะตะฝะฝัƒัŽ ะฟั€ะพะฒะตั€ะบัƒ ะฝัƒะถะฝะพ ะบะพ ะฒัะตะผ ะทะฝะฐั‡ะตะฝะธัะผ ะดะพัั‚ะธะณะฐะตะผั‹ั… ัƒั€ะพะฒะฝะตะน ะทะฝะฐั‡ะธะผะพัั‚ะธ, ะฐ ะฝะต ั‚ะพะปัŒะบะพ ะดะปั ั‚ะตั…, ะบะพั‚ะพั€ั‹ะต ะผะตะฝัŒัˆะต ะทะฝะฐั‡ะตะฝะธั ัƒั€ะพะฒะฝั ะดะพะฒะตั€ะธั. ะฟั€ะธ ะธัะฟะพะปัŒะทะพะฒะฐะฝะธะธ ะฟะพะฟั€ะฐะฒะบะธ ะฝะฐ ัƒั€ะพะฒะฝะต ะทะฝะฐั‡ะธะผะพัั‚ะธ 0.025 ะผะตะฝััŽั‚ัั ะทะฝะฐั‡ะตะฝะธั ะดะพัั‚ะธะณะฐะตะผะพะณะพ ัƒั€ะพะฒะฝั ะทะฝะฐั‡ะธะผะพัั‚ะธ, ะฝะพ ะฝะต ะผะตะฝัะตั‚ัั ะทะฝะฐั‡ะตะฝะธะต ัƒั€ะพะฒะฝั ะดะพะฒะตั€ะธั (ั‚ะพ ะตัั‚ัŒ ะดะปั ะพั‚ะฑะพั€ะฐ ะทะฝะฐั‡ะธะผั‹ั… ะธะทะผะตะฝะตะฝะธะน ัะบะพั€ั€ะตะบั‚ะธั€ะพะฒะฐะฝะฝั‹ะต ะทะฝะฐั‡ะตะฝะธั ัƒั€ะพะฒะฝั ะทะฝะฐั‡ะธะผะพัั‚ะธ ะฝัƒะถะฝะพ ัั€ะฐะฒะฝะธะฒะฐั‚ัŒ ั ะฟะพั€ะพะณะพะผ 0.025, ะฐ ะฝะต 0.05)! End of explanation pvalue_df['control_vs_neoplasia_rej_bh'] = smm.multipletests(pvalue_df.control_vs_neoplasia_pvalue, alpha=0.025, method='fdr_i')[0] pvalue_df['neoplasia_vs_cancer_rej_bh'] = smm.multipletests(pvalue_df.neoplasia_vs_cancer_pvalue, alpha=0.025, method='fdr_i')[0] pvalue_df.control_vs_neoplasia_rej_bh.value_counts() pvalue_df[(pvalue_df.control_vs_neoplasia_rej_bh) & (pvalue_df.control_vs_neoplasia_fold_change > 1.5)].shape pvalue_df[(pvalue_df.neoplasia_vs_cancer_rej_bh) & (pvalue_df.neoplasia_vs_cancer_fold_change > 1.5)].shape Explanation: ะงะฐัั‚ัŒ 3: ะฟะพะฟั€ะฐะฒะบะฐ ะผะตั‚ะพะดะพะผ ะ‘ะตะฝะดะถะฐะผะธะฝะธ-ะฅะพั…ะฑะตั€ะณะฐ ะ”ะฐะฝะฝะฐั ั‡ะฐัั‚ัŒ ะทะฐะดะฐะฝะธั ะฐะฝะฐะปะพะณะธั‡ะฝะฐ ะฒั‚ะพั€ะพะน ั‡ะฐัั‚ะธ ะทะฐ ะธัะบะปัŽั‡ะตะฝะธะตะผ ั‚ะพะณะพ, ั‡ั‚ะพ ะฝัƒะถะฝะพ ะฑัƒะดะตั‚ ะธัะฟะพะปัŒะทะพะฒะฐั‚ัŒ ะผะตั‚ะพะด ะ‘ะตะฝะดะถะฐะผะธะฝะธ-ะฅะพั…ะฑะตั€ะณะฐ. ะžะฑั€ะฐั‚ะธั‚ะต ะฒะฝะธะผะฐะฝะธะต, ั‡ั‚ะพ ะผะตั‚ะพะดั‹ ะบะพั€ั€ะตะบั†ะธะธ, ะบะพั‚ะพั€ั‹ะต ะบะพะฝั‚ั€ะพะปะธั€ัƒัŽั‚ FDR, ะดะพะฟัƒัะบะฐะตั‚ ะฑะพะปัŒัˆะต ะพัˆะธะฑะพะบ ะฟะตั€ะฒะพะณะพ ั€ะพะดะฐ ะธ ะธะผะตัŽั‚ ะฑะพะปัŒัˆัƒัŽ ะผะพั‰ะฝะพัั‚ัŒ, ั‡ะตะผ ะผะตั‚ะพะดั‹, ะบะพะฝั‚ั€ะพะปะธั€ัƒัŽั‰ะธะต FWER. ะ‘ะพะปัŒัˆะฐั ะผะพั‰ะฝะพัั‚ัŒ ะพะทะฝะฐั‡ะฐะตั‚, ั‡ั‚ะพ ัั‚ะธ ะผะตั‚ะพะดั‹ ะฑัƒะดัƒั‚ ัะพะฒะตั€ัˆะฐั‚ัŒ ะผะตะฝัŒัˆะต ะพัˆะธะฑะพะบ ะฒั‚ะพั€ะพะณะพ ั€ะพะดะฐ (ั‚ะพ ะตัั‚ัŒ ะฑัƒะดัƒั‚ ะปัƒั‡ัˆะต ัƒะปะฐะฒะปะธะฒะฐั‚ัŒ ะพั‚ะบะปะพะฝะตะฝะธั ะพั‚ H0, ะบะพะณะดะฐ ะพะฝะธ ะตัั‚ัŒ, ะธ ะฑัƒะดัƒั‚ ั‡ะฐั‰ะต ะพั‚ะบะปะพะฝัั‚ัŒ H0, ะบะพะณะดะฐ ะพั‚ะปะธั‡ะธะน ะฝะตั‚). ะ’ ะบะฐั‡ะตัั‚ะฒะต ะพั‚ะฒะตั‚ะฐ ะบ ัั‚ะพะผัƒ ะทะฐะดะฐะฝะธัŽ ั‚ั€ะตะฑัƒะตั‚ัั ะฒะฒะตัั‚ะธ ะบะพะปะธั‡ะตัั‚ะฒะพ ะทะฝะฐั‡ะธะผั‹ั… ะพั‚ะปะธั‡ะธะน ะฒ ะบะฐะถะดะพะน ะณั€ัƒะฟะฟะต ะฟะพัะปะต ั‚ะพะณะพ, ะบะฐะบ ะฟั€ะพะธะทะฒะตะดะตะฝะฐ ะบะพั€ั€ะตะบั†ะธั ะ‘ะตะฝะดะถะฐะผะธะฝะธ-ะฅะพั…ะฑะตั€ะณะฐ, ะฟั€ะธั‡ะตะผ ั‚ะฐะบ ะถะต, ะบะฐะบ ะธ ะฒะพ ะฒั‚ะพั€ะพะน ั‡ะฐัั‚ะธ, ัั‡ะธั‚ะฐั‚ัŒ ั‚ะพะปัŒะบะพ ั‚ะฐะบะธะต ะพั‚ะปะธั‡ะธั, ัƒ ะบะพั‚ะพั€ั‹ั… abs(fold change) > 1.5. End of explanation
9,265
Given the following text description, write Python code to implement the functionality described below step by step Description: โ˜… Partial Differential Equations โ˜… Step1: 8.1 Parabolic Equations Forward Difference Method Step2: Backward Difference Method Step3: Example Apply the Backward Difference Method to solve the heat equation $$ \left{\begin{matrix}\begin{align} & u_t = 4u_{xx}\ Step4: Example Apply the Backward Difference Method to solve the heat equation with homogeneous Neumann boundary conditions $$ \left{\begin{matrix}\begin{align} & u_t = u_{xx}\ Step5: Crank-Nicolson Method Step6: Example Apply the Crank-Nicolson Method to the heat equation $$ \left{\begin{matrix}\begin{align} & u_t = Du_{xx} + Cu \ & u(x,0) = \sin^2{(\frac{\pi}{L} x)}\ Step7: 8.2 Hyperbolic Equations Example Apply the explicit Finite Difference Method to the wave equation with wave speed $c = 2$ and initial conditions $f(x) = \sin{\pi x}$ and $g(x) = l(x) = r(x) = 0$ Step8: The CFL condition The finite Difference Method is applied to the wave equation with wave speed $c > 0$ is stable if $\sigma = \frac{ck}{h} \leq 1$ Step9: 8.3 Elliptic Equations Example Apply the Finite Difference Method with m = n = 5 to approximate the solution of the Laplace equation $\Delta{u} = 0$ on $[0,1] \times [1,2]$ with the following Dirichlet boundary conditions Step10: Example Find the electrostatic potential on the square $[0,1] \times [0,1]$, assuming no charge in the interior and assuming the following boundary conditions Step11: Finite Element Method for elliptic equations Step12: Example Apply the Finite Element Method with M = N = 16 to approximate the solution of the elliptic Dirichlet problem $$ \left{\begin{matrix}\begin{align} & \Delta{u} + 4\pi^2u = 2 \sin{2\pi y} \ & u(x,0) = 0\text{ for } 0 \le x \le 1 \ & u(x,1) = 0\text{ for } 0 \le x \le 1 \ & u(0,y) = 0\text{ for } 0 \le y \le 1 \ & u(1,y) = \sin{2\pi y}\text{ for } 0 \le y \le 1 \end{align}\end{matrix}\right. $$ Step13: 8.4 Nonlinear partial differential equations Example Use the Backward Difference Equation with Newton iteration to solve Burgers' equation $$ \left{\begin{matrix}\begin{align} & u_t + uu_x = Du_{xx} \ & u(x,0) = \frac{2D\beta\pi\sin{\pi x}}{\alpha + \beta\cos{\pi x}}\text{ for } 0 \le x \le 1 \ & u(0,t) = 0\text{ for all } t \ge 0 \ & u(1,t) = 0\text{ for all } t \ge 0 \end{align}\end{matrix}\right. $$
Python Code: # Import modules import numpy as np import scipy import sympy as sym from scipy import sparse from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import axes3d from IPython.display import Math from IPython.display import display sym.init_printing(use_latex=True) Explanation: โ˜… Partial Differential Equations โ˜… End of explanation def heatfd(xl, xr, yb, yt, M, N): f = lambda x : np.power(np.sin(2 * np.pi * x), 2) l = lambda t : 0 * t r = lambda t : 0 * t D = 1 h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N sigma = D * k / np.power(h, 2) A = np.diag(1 - 2 * sigma * np.ones(m)) + \ np.diag(sigma * np.ones(m - 1), 1) + \ np.diag(sigma * np.ones(m - 1), -1) lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) * k) x = sym.Symbol('x') expr = sym.sin(2 * sym.pi * x) ** 2 # expr = sym.diff(expr, x) w = np.zeros(n * m).reshape(n, m).astype(np.float128) for i in range(m): w[0, i] = expr.subs(x, xl + (i + 1) * h).evalf() for j in range(n - 1): ww = np.zeros(m) ww[0] = lside[j] ww[-1] = rside[j] v = np.matmul(A, w[j]) + sigma * ww w[j + 1,:] = v w = np.column_stack([lside, w, rside]) x = np.arange(0, m+2) * h t = np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') ax.plot_surface(X, T, w) plt.show() plt.clf() heatfd(0, 1, 0, 1, 30, 2000) Explanation: 8.1 Parabolic Equations Forward Difference Method End of explanation def heatbd(xl, xr, yb, yt, M, N): f = lambda x : np.sin(2 * np.pi * x) ** 2 l = lambda t : 0 * t r = lambda t : 0 * t h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N D = 1 # diffusion coefficient sigma = D * k / (h ** 2) A = np.diag(1 + 2 * sigma * np.ones(m)) + \ np.diag(-sigma * np.ones(m - 1), 1) + \ np.diag(-sigma * np.ones(m - 1), -1) lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) * k) ''' Initial conditions ''' x = sym.Symbol('x') expr = sym.sin(2 * sym.pi * x) ** 2 # expr = sym.diff(expr, x) w = np.zeros(n * m).reshape(n, m).astype(np.float128) for i in range(m): w[0, i] = expr.subs(x, xl + (i + 1) * h).evalf() for j in range(n - 1): ww = np.zeros(m) ww[0] = lside[j] ww[-1] = rside[j] v = np.matmul(np.linalg.inv(A), w[j,:] + sigma * ww) w[j + 1,:] = v w = np.column_stack([lside, w, rside]) x = np.arange(0, m+2) * h t = np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') ax.plot_surface(X, T, w) plt.xlabel('x') plt.ylabel('y') plt.show() plt.clf() heatbd(0, 1, 0, 1, 30, 20) Explanation: Backward Difference Method End of explanation def heatbd(xl, xr, yb, yt, M, N): l = lambda t : np.exp(t) r = lambda t : np.exp(t - 0.5) h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N D = 4 # diffusion coefficient sigma = D * k / (h ** 2) A = np.diag((1 + 2 * sigma) * np.ones(m)) + \ np.diag(-sigma * np.ones(m - 1), 1) + \ np.diag(-sigma * np.ones(m - 1), -1) lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) * k) ''' Initial conditions ''' x = sym.Symbol('x') expr = sym.exp(-x / 2) # expr = sym.diff(expr, x) w = np.zeros(n * m).reshape(n, m).astype(np.float128) for i in range(m): w[0, i] = expr.subs(x, xl + (i + 1) * h).evalf() for j in range(n - 1): ww = np.zeros(m) ww[0] = lside[j] ww[-1] = rside[j] v = np.matmul(np.linalg.inv(A), w[j,:] + sigma * ww) w[j + 1,:] = v w = np.column_stack([lside, w, rside]) x = np.arange(0, m+2) * h t = np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') stride = 0 ax.plot_surface(X[stride:], T[stride:], w[stride:]) plt.xlabel('x') plt.ylabel('y') plt.show() plt.clf() 
heatbd(0, 1, 0, 1, 20, 100) Explanation: Example Apply the Backward Difference Method to solve the heat equation $$ \left{\begin{matrix}\begin{align} & u_t = 4u_{xx}\: & for\:all\:0 \leq x \leq 1\:,0 \leq t \leq 1 \ & u(x,0) = e^{-x/2}\: & for\:all\:0 \leq x \leq 1 \ & u(0,t) = e^t\: & for\:all\:0 \leq t \leq 1 \ & u(1,t) = e^{t-1/2}\: & for\:all\:0 \leq t \leq 1 \end{align}\end{matrix}\right. $$ End of explanation def heatbdn(xl, xr, yb, yt, M, N): h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N D = 1 # diffusion coefficient sigma = D * k / (h ** 2) A = np.diag((1 + 2 * sigma) * np.ones(m)) + \ np.diag(-sigma * np.ones(m - 1), 1) + \ np.diag(-sigma * np.ones(m - 1), -1) A[0,:3] = np.array([-3, 4, -1]) A[-1,-3:] = np.array([-1, 4, -3]) ''' Initial conditions ''' x = sym.Symbol('x') expr = sym.sin(2 * sym.pi * x) ** 2 # expr = sym.diff(expr, x) w = np.zeros(n * m).reshape(n, m).astype(np.float128) for i in range(m): w[0, i] = expr.subs(x, xl + (i + 1) * h).evalf() for j in range(n - 1): b = w[j,:] b[0] = 0 b[-1] = 0 w[j + 1,:] = np.matmul(np.linalg.inv(A), b) x = np.arange(0, m) * h t = np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') stride = 0 ax.plot_surface(X[stride:], T[stride:], w[stride:]) plt.xlabel('x') plt.ylabel('y') plt.show() plt.clf() heatbdn(0, 1, 0, 1, 20, 20) Explanation: Example Apply the Backward Difference Method to solve the heat equation with homogeneous Neumann boundary conditions $$ \left{\begin{matrix}\begin{align} & u_t = u_{xx}\: & for\:all\:0 \leq x \leq 1\:,0 \leq t \leq 1 \ & u(x,0) = \sin^2{2\pi x}\: & for\:all\:0 \leq x \leq 1 \ & u(0,t) = 0\: & for\:all\:0 \leq t \leq 1 \ & u(1,t) = 0\: & for\:all\:0 \leq t \leq 1 \end{align}\end{matrix}\right. $$ End of explanation def crank_nicolson_heat(xl, xr, yb, yt, M, N): l = lambda t : 0 * t r = lambda t : 0 * t D = 1 h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N sigma = D * k / (h ** 2) A = np.diag((2 + 2 * sigma) * np.ones(m)) + \ np.diag(-sigma * np.ones(m - 1), 1) + \ np.diag(-sigma * np.ones(m - 1), -1) B = np.diag((2 - 2 * sigma) * np.ones(m)) + \ np.diag(sigma * np.ones(m - 1), 1) + \ np.diag(sigma * np.ones(m - 1), -1) lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) * k) ''' Initial conditions ''' x = sym.Symbol('x') expr = sym.sin(2 * sym.pi * x) ** 2 # expr = sym.diff(expr, x) w = np.zeros(n * m).reshape(n, m).astype(np.float128) for i in range(m): w[0, i] = expr.subs(x, xl + (i + 1) * h).evalf() for j in range(n - 1): s = np.zeros(m) s[0] = lside[j] + lside[j+1] s[-1] = rside[j] + rside[j+1] w[j + 1,:] = np.matmul(np.linalg.inv(A), np.matmul(B, w[j,:]) + sigma * s) w = np.column_stack([lside, w, rside]) x = xl +np.arange(0, m+2) * h t = yb + np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') stride = 0 ax.plot_surface(X[stride:], T[stride:], w[stride:]) plt.xlabel('x') plt.ylabel('y') plt.show() plt.clf() crank_nicolson_heat(0, 0.5, 0, 1, 30, 100) Explanation: Crank-Nicolson Method End of explanation def crank_nicolson_growth(xl, xr, yb, yt, M, N): l = lambda t : 0 * t r = lambda t : 0 * t D = 1 L = 1 C = 9.5 h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N sigma = D * k / h ** 2 A = np.diag((2 - k * C + 2 * sigma) * np.ones(m)) + \ np.diag(-sigma * np.ones(m - 1), 1) + \ np.diag(-sigma * np.ones(m - 1), -1) B = np.diag((2 + k * C - 2 * sigma) * np.ones(m)) + \ np.diag(sigma * np.ones(m - 1), 1) + \ np.diag(sigma * np.ones(m - 1), -1) lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) 
* k) ''' Initial conditions ''' f = lambda x : np.power(np.sin(np.pi * x / L), 2) w = np.zeros(n * m).reshape(n, m).astype(np.float128) for i in range(m): w[0, i] = f(xl + (i + 1) * h) for j in range(n - 1): s = np.zeros(m) s[0] = lside[j] + lside[j+1] s[-1] = rside[j] + rside[j+1] w[j + 1,:] = np.matmul(np.linalg.inv(A), np.matmul(B, w[j,:]) + sigma * s) w = np.column_stack([lside, w, rside]) x = xl + np.arange(0, m+2) * h t = yb + np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') stride = 0 ax.plot_surface(X[stride:], T[stride:], w[stride:]) plt.xlabel('x') plt.ylabel('t') plt.show() plt.clf() crank_nicolson_growth(0, 1, 0, 1, 20, 20) Explanation: Example Apply the Crank-Nicolson Method to the heat equation $$ \left{\begin{matrix}\begin{align} & u_t = Du_{xx} + Cu \ & u(x,0) = \sin^2{(\frac{\pi}{L} x)}\: &for\:all\:0 \leq x \leq L \ & u(0,t) = 0\: &for\:all\:t \geq 0 \ & u(L,t) = 0\: &for\:all\:t \geq 0 \end{align}\end{matrix}\right. $$ End of explanation def wavefd(xl, xr, yb, yt, M, N): c = 1 h = (xr - xl) / M k = (yt - yb) / N m = M - 1 n = N sigma = c * k / h f = lambda x : np.sin(x * np.pi) l = lambda x : 0 * x r = lambda x : 0 * x g = lambda x : 0 * x lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) * k) A = np.diag((2 - 2 * sigma ** 2) * np.ones(m)) + \ np.diag((sigma ** 2) * np.ones(m - 1), 1) + \ np.diag((sigma ** 2) * np.ones(m - 1), -1) '''Initial condition''' w = np.zeros(n * m).reshape(n, m).astype(np.float128) xv = np.linspace(0, 1, M + 1)[1:-1] w[0, :] = f(xv) w[1, :] = 0.5 * np.matmul(A, w[0, :]) + \ k * g(xv) + \ 0.5 * np.power(sigma, 2) * np.array([lside[0], *np.zeros(m - 2), rside[0]]) for i in range(2, n - 1): w[i,:] = np.matmul(A, w[i-1,:]) - w[i-2,:] + np.power(sigma, 2) * \ np.array([lside[i-1], *np.zeros(m - 2), rside[i-1]]) w = np.column_stack([lside, w, rside]) x = xl + np.arange(0, m + 2) * h t = yb + np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') stride = 0 ax.plot_wireframe(X[stride:], T[stride:], w[stride:]) # ax.plot_surface(X[stride:], T[stride:], w[stride:]) ax.view_init(azim=60, elev=30) plt.xlabel('x') plt.ylabel('t') plt.show() plt.clf() wavefd(0, 1, 0, 1, 20, 20) Explanation: 8.2 Hyperbolic Equations Example Apply the explicit Finite Difference Method to the wave equation with wave speed $c = 2$ and initial conditions $f(x) = \sin{\pi x}$ and $g(x) = l(x) = r(x) = 0$ End of explanation def wavefd_cfl(xl, xr, yb, yt, M, N, C = 1): c = C h = (xr - xl) / M k = (yt - yb) / N if c * k > h: raise ValueError("CFL condition 'c * k <= h' is not satisfied, c * k is %f and h is %f" %(c * k, h) ) m = M - 1 n = N sigma = c * k / h f = lambda x : np.sin(x * np.pi) l = lambda x : 0 * x r = lambda x : 0 * x g = lambda x : 0 * x lside = l(yb + np.arange(n) * k) rside = r(yb + np.arange(n) * k) A = np.diag((2 - 2 * sigma ** 2) * np.ones(m)) + \ np.diag((sigma ** 2) * np.ones(m - 1), 1) + \ np.diag((sigma ** 2) * np.ones(m - 1), -1) '''Initial condition''' w = np.zeros(n * m).reshape(n, m).astype(np.float128) xv = np.linspace(0, 1, M + 1)[1:-1] w[0, :] = f(xv) w[1, :] = 0.5 * np.matmul(A, w[0, :]) + \ k * g(xv) + \ 0.5 * np.power(sigma, 2) * np.array([lside[0], *np.zeros(m - 2), rside[0]]) for i in range(2, n - 1): w[i,:] = np.matmul(A, w[i-1,:]) - w[i-2,:] + np.power(sigma, 2) * \ np.array([lside[i-1], *np.zeros(m - 2), rside[i-1]]) w = np.column_stack([lside, w, rside]) x = xl + np.arange(0, m + 2) * h t = yb + np.arange(0, n) * k X, T = np.meshgrid(x, t) fig = 
plt.figure() ax = fig.gca(projection='3d') stride = 0 ax.plot_wireframe(X[stride:], T[stride:], w[stride:]) # ax.plot_surface(X[stride:], T[stride:], w[stride:]) ax.view_init(azim=20, elev=20) plt.xlabel('x') plt.ylabel('t') plt.show() plt.clf() wavefd_cfl(0, 1, 0, 1, 20, 200, 6) Explanation: The CFL condition The finite Difference Method is applied to the wave equation with wave speed $c > 0$ is stable if $\sigma = \frac{ck}{h} \leq 1$ End of explanation def poisson(xl, xr, yb, yt, M, N): f = lambda x, y : 0 g1 = lambda x : np.log(pow(x, 2) + 1) g2 = lambda x : np.log(pow(x, 2) + 4) g3 = lambda y : 2 * np.log(y) g4 = lambda y : np.log(pow(y, 2) + 1) m, n = M + 1, N + 1 mn = m * n h, k = (xr - xl) / M, (yt - yb) / N h2, k2 = pow(h, 2), pow(k, 2) x = xl + np.arange(M + 1) * h y = yb + np.arange(N + 1) * k A = np.zeros((mn, mn)) b = np.zeros((mn, 1)) ''' interior points ''' for i in range(2, m): for j in range(2, n): A[i+(j-1)*m - 1][i-1+(j-1)*m - 1] = 1 / h2 A[i+(j-1)*m - 1][i+1+(j-1)*m - 1] = 1 / h2 A[i+(j-1)*m - 1][i+(j-1)*m - 1] = - 2 / h2 - 2 / k2 A[i+(j-1)*m - 1][i+(j-2)*m - 1] = 1 / k2 A[i+(j-1)*m - 1][i+j*m - 1] = 1 / k2 b[i+(j-1)*m - 1] = f(x[i], y[j]) ''' bottom and top boundary points ''' for i in range(1, m + 1): j = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g1(x[i - 1]) j = n A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g2(x[i - 1]) ''' left and right boundary points ''' for j in range(2, n): i = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g3(y[j - 1]) i = m A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g4(y[j - 1]) v = np.matmul(np.linalg.inv(A), b) w = v.reshape(n, m).T X, Y = np.meshgrid(x, y) fig = plt.figure() ax =fig.gca(projection='3d') ax.view_init(azim=225) ax.plot_surface(X, Y, w) plt.xlabel('x') plt.ylabel('y') plt.show(fig) plt.close() poisson(0, 1, 1, 2, 4, 4) poisson(0, 1, 1, 2, 10, 10) Explanation: 8.3 Elliptic Equations Example Apply the Finite Difference Method with m = n = 5 to approximate the solution of the Laplace equation $\Delta{u} = 0$ on $[0,1] \times [1,2]$ with the following Dirichlet boundary conditions : $$ \begin{matrix}\begin{align} u(x,1) &= \ln{(x^2 + 1)} \ u(x,2) &= \ln{(x^2 + 4)} \ u(0,y) &= 2\ln{y} \ u(1,y) &= \ln{(y^2 + 1)} \end{align}\end{matrix} $$ End of explanation def poisson(xl, xr, yb, yt, M, N): f = lambda x, y : 0 g1 = lambda x : np.sin(x * np.pi) g2 = lambda x : np.sin(x * np.pi) g3 = lambda y : 0 g4 = lambda y : 0 m, n = M + 1, N + 1 mn = m * n h, k = (xr - xl) / M, (yt - yb) / N h2, k2 = pow(h, 2), pow(k, 2) x = xl + np.arange(M + 1) * h y = yb + np.arange(N + 1) * k A = np.zeros((mn, mn)) b = np.zeros((mn, 1)) ''' interior points ''' for i in range(2, m): for j in range(2, n): A[i+(j-1)*m - 1][i-1+(j-1)*m - 1] = 1 / h2 A[i+(j-1)*m - 1][i+1+(j-1)*m - 1] = 1 / h2 A[i+(j-1)*m - 1][i+(j-1)*m - 1] = - 2 / h2 - 2 / k2 A[i+(j-1)*m - 1][i+(j-2)*m - 1] = 1 / k2 A[i+(j-1)*m - 1][i+j*m - 1] = 1 / k2 b[i+(j-1)*m - 1] = f(x[i], y[j]) ''' bottom and top boundary points ''' for i in range(1, m + 1): j = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g1(x[i - 1]) j = n A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g2(x[i - 1]) ''' left and right boundary points ''' for j in range(2, n): i = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g3(y[j - 1]) i = m A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g4(y[j - 1]) v = np.matmul(np.linalg.inv(A), b) w = v.reshape(n, m).T X, Y = np.meshgrid(x, y) fig = plt.figure() ax =fig.gca(projection='3d') ax.view_init(azim=225) 
ax.plot_surface(X, Y, w) plt.xlabel('x') plt.ylabel('y') plt.show(fig) plt.close() poisson(0, 1, 0, 1, 10, 10) Explanation: Example Find the electrostatic potential on the square $[0,1] \times [0,1]$, assuming no charge in the interior and assuming the following boundary conditions: $$ \begin{matrix}\begin{align} u(x,0) &= \sin{\pi x} \ u(x,1) &= \sin{\pi x} \ u(0,y) &= 0 \ u(1,y) &= 0 \end{align}\end{matrix} $$ End of explanation def poissonfem(xl, xr, yb, yt, M, N): f = lambda x, y : 0 r = lambda x, y : 0 g1 = lambda x : np.log(pow(x, 2) + 1) g2 = lambda x : np.log(pow(x, 2) + 4) g3 = lambda y : 2 * np.log(y) g4 = lambda y : np.log(pow(y, 2) + 1) m, n = M + 1, N + 1 mn = m * n h, k = (xr - xl) / M, (yt - yb) / N hk = h * k h2, k2 = pow(h, 2), pow(k, 2) x = xl + np.arange(M + 1) * h y = yb + np.arange(N + 1) * k A = np.zeros((mn, mn)) b = np.zeros((mn, 1)) B1 = lambda i, j : (x[i] - 2 * h / 3, y[j] - k / 3) B2 = lambda i, j : (x[i] - h / 3, y[j] - 2 * k / 3) B3 = lambda i, j : (x[i] + h / 3, y[j] - k / 3) B4 = lambda i, j : (x[i] + 2 * h / 3, y[j] + k / 3) B5 = lambda i, j : (x[i] + h / 3, y[j] + 2 * k / 3) B6 = lambda i, j : (x[i] - h / 3, y[j] + k / 3) ''' interior points ''' for i in range(2, m): for j in range(2, n): rsum = r(*B1(i,j)) + r(*B2(i,j)) + r(*B3(i,j)) + r(*B4(i,j)) + r(*B5(i,j)) + r(*B6(i,j)) fsum = f(*B1(i,j)) + f(*B2(i,j)) + f(*B3(i,j)) + f(*B4(i,j)) + f(*B5(i,j)) + f(*B6(i,j)) A[i+(j-1)*m - 1][i+(j-1)*m - 1] = 2 * (h2 + k2) / hk - hk * rsum / 18 A[i+(j-1)*m - 1][i-1+(j-1)*m - 1] = -k/h - hk * (r(*B1(i,j)) + r(*B6(i,j))) / 18 A[i+(j-1)*m - 1][i-1+(j-2)*m - 1] = -hk * (r(*B1(i,j)) + r(*B2(i,j))) / 18 A[i+(j-1)*m - 1][i+(j-2)*m - 1] = -h/k - hk * (r(*B2(i,j)) + r(*B3(i,j))) / 18 A[i+(j-1)*m - 1][i+1+(j-1)*m - 1] = -k/h - hk * (r(*B3(i,j)) + r(*B4(i,j))) / 18 A[i+(j-1)*m - 1][i+1+j*m - 1] = -hk * (r(*B4(i,j)) + r(*B5(i,j))) / 18 A[i+(j-1)*m - 1][i+j*m - 1] = - h / k - hk * (r(*B5(i,j)) + r(*B6(i,j))) / 18 b[i+(j-1)*m - 1] = - h * k * fsum / 6 ''' bottom and top boundary points ''' for i in range(1, m + 1): j = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g1(x[i - 1]) j = n A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g2(x[i - 1]) ''' left and right boundary points ''' for j in range(2, n): i = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g3(y[j - 1]) i = m A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g4(y[j - 1]) v = np.matmul(np.linalg.inv(A), b) w = v.reshape(n, m).T X, Y = np.meshgrid(x, y) fig = plt.figure() ax =fig.gca(projection='3d') ax.view_init(azim=225) ax.plot_surface(X, Y, w) plt.xlabel('x') plt.ylabel('y') plt.show(fig) plt.close() poissonfem(0, 1, 1, 2, 4, 4) poissonfem(0, 1, 1, 2, 10, 10) Explanation: Finite Element Method for elliptic equations End of explanation def poissonfem(xl, xr, yb, yt, M, N): f = lambda x, y : 2 * np.sin(2 * np.pi * y) r = lambda x, y : 4 * pow(np.pi, 2) g1 = lambda x : 0 g2 = lambda x : 0 g3 = lambda y : 0 g4 = lambda y : np.sin(2 * np.pi * y) m, n = M + 1, N + 1 mn = m * n h, k = (xr - xl) / M, (yt - yb) / N hk = h * k h2, k2 = pow(h, 2), pow(k, 2) x = xl + np.arange(M + 1) * h y = yb + np.arange(N + 1) * k A = np.zeros((mn, mn)) b = np.zeros((mn, 1)) B1 = lambda i, j : (x[i] - 2 * h / 3, y[j] - k / 3) B2 = lambda i, j : (x[i] - h / 3, y[j] - 2 * k / 3) B3 = lambda i, j : (x[i] + h / 3, y[j] - k / 3) B4 = lambda i, j : (x[i] + 2 * h / 3, y[j] + k / 3) B5 = lambda i, j : (x[i] + h / 3, y[j] + 2 * k / 3) B6 = lambda i, j : (x[i] - h / 3, y[j] + k / 3) ''' interior points ''' for i in range(2, m): 
for j in range(2, n): rsum = r(*B1(i,j)) + r(*B2(i,j)) + r(*B3(i,j)) + r(*B4(i,j)) + r(*B5(i,j)) + r(*B6(i,j)) fsum = f(*B1(i,j)) + f(*B2(i,j)) + f(*B3(i,j)) + f(*B4(i,j)) + f(*B5(i,j)) + f(*B6(i,j)) A[i+(j-1)*m - 1][i+(j-1)*m - 1] = 2 * (h2 + k2) / hk - hk * rsum / 18 A[i+(j-1)*m - 1][i-1+(j-1)*m - 1] = -k/h - hk * (r(*B1(i,j)) + r(*B6(i,j))) / 18 A[i+(j-1)*m - 1][i-1+(j-2)*m - 1] = -hk * (r(*B1(i,j)) + r(*B2(i,j))) / 18 A[i+(j-1)*m - 1][i+(j-2)*m - 1] = -h/k - hk * (r(*B2(i,j)) + r(*B3(i,j))) / 18 A[i+(j-1)*m - 1][i+1+(j-1)*m - 1] = -k/h - hk * (r(*B3(i,j)) + r(*B4(i,j))) / 18 A[i+(j-1)*m - 1][i+1+j*m - 1] = -hk * (r(*B4(i,j)) + r(*B5(i,j))) / 18 A[i+(j-1)*m - 1][i+j*m - 1] = - h / k - hk * (r(*B5(i,j)) + r(*B6(i,j))) / 18 b[i+(j-1)*m - 1] = - h * k * fsum / 6 ''' bottom and top boundary points ''' for i in range(1, m + 1): j = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g1(x[i - 1]) j = n A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g2(x[i - 1]) ''' left and right boundary points ''' for j in range(2, n): i = 1 A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g3(y[j - 1]) i = m A[i+(j-1)*m - 1][i+(j-1)*m - 1]=1 b[i+(j-1)*m - 1] = g4(y[j - 1]) v = np.matmul(np.linalg.inv(A), b) w = v.reshape(n, m).T X, Y = np.meshgrid(x, y) fig = plt.figure() ax =fig.gca(projection='3d') ax.view_init(azim=225) ax.plot_surface(X, Y, w) plt.xlabel('x') plt.ylabel('y') plt.show(fig) plt.close() poissonfem(0, 1, 0, 1, 16, 16) Explanation: Example Apply the Finite Element Method with M = N = 16 to approximate the solution of the elliptic Dirichlet problem $$ \left{\begin{matrix}\begin{align} & \Delta{u} + 4\pi^2u = 2 \sin{2\pi y} \ & u(x,0) = 0\text{ for } 0 \le x \le 1 \ & u(x,1) = 0\text{ for } 0 \le x \le 1 \ & u(0,y) = 0\text{ for } 0 \le y \le 1 \ & u(1,y) = \sin{2\pi y}\text{ for } 0 \le y \le 1 \end{align}\end{matrix}\right. $$ End of explanation def burgers(xl, xr, tb, te, M, N): alpha = 5 beta = 4 D = 0.05 f = lambda x : 2 * D * beta * np.pi * np.sin(x * np.pi) / (alpha + beta * np.cos(np.pi * x)) l = lambda t : 0 * t r = lambda t : 0 * t h, k = (xr - xl) / M, (te - tb) / N m, n = M + 1, N sigma = D * k / (h * h) w = np.zeros((M + 1) * (n + 1)).reshape(M + 1, n + 1) w[:, 0] = f(xl + np.arange(M + 1) * h) w1 = np.copy(w[:, 0]) for j in range(0, n): for it in range(3): DF1 = np.diag(1 + 2 * sigma * np.ones(m)) + np.diag(-sigma * np.ones(m-1), 1) \ + np.diag(-sigma * np.ones(m-1), -1) DF2 = np.diag([0,*(k * w1[2:m] / (2 * h)),0]) - np.diag([0,*(k * w1[0:m - 2] / (2 * h)),0]) \ + np.diag([0,*(k * w1[1:m - 1] / (2 * h))], 1) - np.diag([*(k * w1[1:m - 1] / (2 * h)), 0], -1) DF = DF1 + DF2; F = -w[:,j] + np.matmul((DF1 + DF2 / 2), w1) DF[0,:] = np.array([1, *np.zeros(m-1)]) F[0] = w1[0] - l(j) F[m-1] = w1[m-1] - r(j) w1 -= np.matmul(np.linalg.inv(DF), F) w[:, j + 1] = w1 # 3-D Plot x = xl + np.arange(M + 1) * h t = tb + np.arange(n + 1) * k X, T = np.meshgrid(x, t) fig = plt.figure() ax = fig.gca(projection='3d') ax.view_init(azim=225) ax.plot_surface(X, T, w.T) plt.xlabel('x') plt.ylabel('t') plt.show() plt.close() burgers(0, 1, 0, 2, 20, 40) Explanation: 8.4 Nonlinear partial differential equations Example Use the Backward Difference Equation with Newton iteration to solve Burgers' equation $$ \left{\begin{matrix}\begin{align} & u_t + uu_x = Du_{xx} \ & u(x,0) = \frac{2D\beta\pi\sin{\pi x}}{\alpha + \beta\cos{\pi x}}\text{ for } 0 \le x \le 1 \ & u(0,t) = 0\text{ for all } t \ge 0 \ & u(1,t) = 0\text{ for all } t \ge 0 \end{align}\end{matrix}\right. $$ End of explanation
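One quick consistency check worth doing before trusting the Burgers' solver above: the chosen initial profile already satisfies the homogeneous boundary conditions, so the first row of the grid is compatible with u(0,t) = u(1,t) = 0. A small sketch using the same alpha, beta and D values as in burgers():
import numpy as np
alpha, beta, D = 5, 4, 0.05
f = lambda x: 2 * D * beta * np.pi * np.sin(np.pi * x) / (alpha + beta * np.cos(np.pi * x))
print(f(0.0), f(1.0))   # both are (numerically) zero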
9,266
Given the following text description, write Python code to implement the functionality described below step by step Description: Lecture 6 Step1: Let's break it down. for element in range(10) Step2: There it is Step3: and we want to generate a list of sentences Step4: Start with the loop header--you see it on the far right Step5: We used the items() method on the dictionary again, which gives us a list of tuples Since we know the items in the list are tuples of two elements, we use unpacking We provide our element-by-element construction of the list with our statement value ** 2, squaring the value Part 2 Step6: As we know, this will create a list[-like thing] with the numbers 0 through 9, inclusive, and assign it to the variable x. Now you'll see why I've been using the "list[-like thing]" notation Step7: To get a list, we've been casting the generator to a list Step8: and we get a vanilla Python list. So range() gives us a generator! Great! ...what does that mean, exactly? For most practical purposes, generators and lists are indistinguishable. However, there are some key differences to be aware of Step9: Also--where have we seen parentheses before? TUPLES! You can think of a generator as a sort of tuple. After all, like a tuple, a generator is immutable (cannot be changed once created). Be careful with this, though Step10: zip() does pretty much the same thing, but on steroids Step11: I want to loop through these three lists simultaneously, so I can print out the person's first name, last name, and their favorite language on the same line. Since I know they're the same length, I could just do a range(len(fname)), but this is arguably more elegant Step12: enumerate() Of course, there are always those situations where it's really, really nice to have an index variable in the loop. Let's take a look at that previous example Step13: This is great if all I want to do is loop through the lists simultaneously. But what if the ordering of the elements matters? For example, I want to prefix each sentence with the line number. How can I track what index I'm on in a loop if I don't use range()? enumerate() handles this. By wrapping the object we loop over inside enumerate(), on each loop iteration we not only get the next object of interest, but also the index of that object. To wit Step14: This comes in handy anytime you need to loop through a list or generator, but also need to know what index you're on. break and continue With for loops, you specify how many times to run the loop. With while loops, you iterate until some condition is met. For the vast majority of cases, this works well. But sometimes you need just a little more control for extenuating circumstances. Take the example of a web server Step15: How do you get out of this infinite loop? With a break statement. Step16: Just break. That will snap whatever loop you're currently in and immediately dump you out just after it. Same thing with for loops Step17: Similar to break is continue, though you use this when you essentially want to "skip" certain iterations. continue will also halt the current iteration, but instead of ending the loop entirely, it basically skips you on to the next iteration of the loop without executing any code that may be below it. Step18: Notice how the print statement inside the loop is never executed, but our loop counter i is still incremented through the very end. 
Part 4 Step19: Now Step20: Now, we have access to all the functions available in the itertools package--to use them, just type the package name, a dot ".", and the function you want to call. In this example, we want to use the itertools.chain() function Step21: Err, what's an itertools.chain object? Don't panic--any thoughts as to what kind of object this might be? It's an iterable, and we know how to handle those! Step22: And there they are--all four lists, joined at the hip. Another phenomenal function is combinations. If you've ever taken a combinatorics class, or are at all interested in the idea of finding all the possible combinations of a certain collection of things, this is your function. A common task in data science is finding combinations of configuration values that work well together--e.g., plotting your data in two dimensions. Which two dimensions will give the nicest plot? Here's a list of numbers. How many possible pairings are there? Step23: It doesn't have to be pairs; we can also try to find all the triplets of items.
Python Code: squares = [] for element in range(10): squares.append(element ** 2) print(squares) Explanation: Lecture 6: Advanced Data Structures CSCI 1360: Foundations for Informatics and Analytics Overview and Objectives We've covered list, tuples, sets, and dictionaries. These are the foundational data structures in Python. In this lecture, we'll go over some more advanced topics that are related to these datasets. By the end of this lecture, you should be able to Compare and contrast generators and comprehensions, and how to construct them Explain the benefits of generators, especially in the case of huge datasets Loop over multiple lists simultaneously with zip() and index them with enumerate() Dive into advanced iterations using the itertools module Part 1: List Comprehensions Here's some good news: if we get right down to it, having done loops and lists already, there's nothing new here. Here's the bad news: it's a different, and possibly less-easy-to-understand, but much more concise way of creating lists. We'll go over it bit by bit. Let's look at an example from a previous lecture: creating a list of squares. End of explanation squares = [element ** 2 for element in range(10)] print(squares) Explanation: Let's break it down. for element in range(10): It's a standard "for" loop header. The thing we're iterating over is at the end: range(10), or a list[-like thing] of numbers [0, 10) by 1s. In each loop, the current element from range(10) is stored in element. squares.append(element ** 2) Inside the loop, we append a new item to our list squares The item is computed by taking the current its, element, and computing its square We'll see these same pieces show up again, just in a slightly different order. End of explanation word_counts = { 'the': 10, 'race': 2, 'is': 3, 'on': 5 } Explanation: There it is: a list comprehension. Let's break it down. Notice, first, that the entire expression is surrounded by the square brackets [ ] of a list. This is for the exact reason you'd think: we're building a list! The "for" loop is completely intact, too; the entire header appears just as before. The biggest wrinkle is the loop body. It appears right after the opening bracket, before the loop header. The rationale for this is that it's easy to see from the start of the line that We're building a list (revealed by the opening square bracket), and The list is built by successfully squaring a variable element Let's say we have some dictionary of word counts: End of explanation sentences = ['"{}" appears {} times.'.format(word, count) for word, count in word_counts.items()] print(sentences) Explanation: and we want to generate a list of sentences: End of explanation squared_counts = [value ** 2 for key, value in word_counts.items()] print(squared_counts) Explanation: Start with the loop header--you see it on the far right: for word, count in word_counts.items() Then look at the loop body: '"{}" appears {} times.'.format(word, count) All wrapped in square brackets [ ] Assigned to the variable sentences Here's another example: going from a dictionary of word counts to a list of squared counts. End of explanation x = range(10) Explanation: We used the items() method on the dictionary again, which gives us a list of tuples Since we know the items in the list are tuples of two elements, we use unpacking We provide our element-by-element construction of the list with our statement value ** 2, squaring the value Part 2: Generators Generators are cool twists on lists (see what I did there). 
They've been around since Python 2 but took on a whole new life in Python 3. That said, if you ever get confused about generators, just think of them as lists. This can potentially get you in trouble with weird errors, but 90% of the time it'll work every time. Let's start with an example you're probably already quite familiar with: range() End of explanation print(x) print(type(x)) Explanation: As we know, this will create a list[-like thing] with the numbers 0 through 9, inclusive, and assign it to the variable x. Now you'll see why I've been using the "list[-like thing]" notation: it's not really a list! End of explanation list(x) Explanation: To get a list, we've been casting the generator to a list: End of explanation x = [i for i in range(10)] # Brackets -> list print(x) x = (i for i in range(10)) # Parentheses -> generator print(x) Explanation: and we get a vanilla Python list. So range() gives us a generator! Great! ...what does that mean, exactly? For most practical purposes, generators and lists are indistinguishable. However, there are some key differences to be aware of: Generators are "lazy". This means when you call range(10), not all 10 numbers are immediately computed; in fact, none of them are. They're computed on-the-fly in the loop itself! This really comes in handy if, say, you wanted to loop through 1 trillion numbers, or call range(1000000000000). With vanilla lists, this would immediately create 1 trillion numbers in memory and store them, taking up a whole lot of space. With generators, only 1 number is ever computed at a given loop iteration. Huge memory savings! Generators only work once. This is where you can get into trouble. Let's say you're trying to identify the two largest numbers in a generator of numbers. You'd loop through once and identify the largest number, then use that as a point of comparison to loop through again to find the second-largest number (you could do it with just one loop, but for the sake of discussion let's assume you did it this way). With a list, this would work just fine. Not with a generator, though. You'd need to explicitly recreate the generator. How do we build generators? Aside from range(), that is. Remember list comprehensions? Just replace the brackets of a list comprehension [ ] with parentheses ( ). End of explanation d = { 'uga': 'University of Georgia', 'gt': 'Georgia Tech', 'upitt': 'University of Pittsburgh', 'cmu': 'Carnegie Mellon University' } for key, value in d.items(): print("'{}' stands for '{}'.".format(key, value)) Explanation: Also--where have we seen parentheses before? TUPLES! You can think of a generator as a sort of tuple. After all, like a tuple, a generator is immutable (cannot be changed once created). Be careful with this, though: all generators are very like tuples, but not all tuples are like generators. In sum, use lists if: you're working with a relatively small amount of elements you want to add to / edit / remove from the elements you need direct access to arbitrary elements, e.g. some_list[431] On the other hand, use generators if: you're working with a giant collection of elements you'll only loop through the elements once or twice when looping through elements, you're fine going in sequential order Part 3: Other looping mechanisms There are a few other advanced looping mechanisms in Python that are a little complex, but can make your life a lot easier when used correctly (especially if you're a convert from something like C++ or Java). zip() zip() is a small method that packs a big punch. 
It "zips" multiple lists together into something of one big mega-list for the sole purpose of being able to iterate through them all simultaneously. We've already seen something like this before: the items() method in dictionaries. Dictionaries are more or less two lists stacked right up against each other: one list holds the keys, and the corresponding elements of the other list holds the values for each key. items() lets us loop through both simultaneously, giving us the corresponding elements from each list, one at a time: End of explanation first_names = ['Shannon', 'Jen', 'Natasha', 'Benjamin'] last_names = ['Quinn', 'Benoit', 'Romanov', 'Button'] fave_langs = ['Python', 'Java', 'Assembly', 'Go'] Explanation: zip() does pretty much the same thing, but on steroids: rather than just "zipping" together two lists, it can zip together as many as you want. Here's an example: first names, last names, and favorite programming languages. End of explanation for fname, lname, lang in zip(first_names, last_names, fave_langs): print("{} {}'s favorite language is {}.".format(fname, lname, lang)) Explanation: I want to loop through these three lists simultaneously, so I can print out the person's first name, last name, and their favorite language on the same line. Since I know they're the same length, I could just do a range(len(fname)), but this is arguably more elegant: End of explanation for fname, lname, lang in zip(first_names, last_names, fave_langs): print("{} {}'s favorite language is {}.".format(fname, lname, lang)) Explanation: enumerate() Of course, there are always those situations where it's really, really nice to have an index variable in the loop. Let's take a look at that previous example: End of explanation x = ['a', 'list', 'of', 'strings'] for index, element in enumerate(x): print("Found '{}' at index {}.".format(element, index)) Explanation: This is great if all I want to do is loop through the lists simultaneously. But what if the ordering of the elements matters? For example, I want to prefix each sentence with the line number. How can I track what index I'm on in a loop if I don't use range()? enumerate() handles this. By wrapping the object we loop over inside enumerate(), on each loop iteration we not only get the next object of interest, but also the index of that object. To wit: End of explanation while True: # Listen for incoming requests # Handle the request Explanation: This comes in handy anytime you need to loop through a list or generator, but also need to know what index you're on. break and continue With for loops, you specify how many times to run the loop. With while loops, you iterate until some condition is met. For the vast majority of cases, this works well. But sometimes you need just a little more control for extenuating circumstances. Take the example of a web server: Listens for incoming requests Serves those requests (e.g. returns web pages) Goes back to listening for more requests This is essentially implemented using a purposefully-infinite loop: End of explanation import numpy as np def handle_request(): return np.random.randint(100) loops = 0 while True: loops += 1 req = handle_request() break print("Exiting program after {} loop.".format(loops)) Explanation: How do you get out of this infinite loop? With a break statement. End of explanation for i in range(100000): # Loop 100,000 times! break print(i) Explanation: Just break. That will snap whatever loop you're currently in and immediately dump you out just after it. 
Same thing with for loops: End of explanation for i in range(100): continue print("This will never be printed.") print(i) Explanation: Similar to break is continue, though you use this when you essentially want to "skip" certain iterations. continue will also halt the current iteration, but instead of ending the loop entirely, it basically skips you on to the next iteration of the loop without executing any code that may be below it. End of explanation letters = ['a', 'b', 'c', 'd', 'e', 'f'] booleans = [1, 0, 1, 0, 0, 1] numbers = [23, 20, 44, 32, 7, 12] decimals = [0.1, 0.7, 0.4, 0.4, 0.5] Explanation: Notice how the print statement inside the loop is never executed, but our loop counter i is still incremented through the very end. Part 4: itertools Aside: Welcome to the big wide world of Beyond Core Python. Technically we're still in base Python, but in order to use more advanced iterating tools, we have to explicitly import an extra module from the standard library--itertools. Justin Duke has an excellent web tutorial, which I'll reproduce in part here. Let's say we have a couple of lists we want to operate on: End of explanation import itertools Explanation: Now: I want you to string all four of these lists together, end-to-end, into one long list. How would you do it? Here's a far simpler way, though it requires importing an extra module. You can do this with the keyword import: End of explanation monster = itertools.chain(letters, booleans, numbers, decimals) print(monster) Explanation: Now, we have access to all the functions available in the itertools package--to use them, just type the package name, a dot ".", and the function you want to call. In this example, we want to use the itertools.chain() function: End of explanation for item in monster: print(item, end = " ") Explanation: Err, what's an itertools.chain object? Don't panic--any thoughts as to what kind of object this might be? It's an iterable, and we know how to handle those! End of explanation items = ['one', 'two', 'three', 'four', 'five', 'six'] combos = itertools.combinations(items, 2) # The "2" means pairs. for combo in combos: print(combo) Explanation: And there they are--all four lists, joined at the hip. Another phenomenal function is combinations. If you've ever taken a combinatorics class, or are at all interested in the idea of finding all the possible combinations of a certain collection of things, this is your function. A common task in data science is finding combinations of configuration values that work well together--e.g., plotting your data in two dimensions. Which two dimensions will give the nicest plot? Here's a list of items. How many possible pairings are there? End of explanation combos = itertools.combinations(items, 3) # Now it's a 3. for combo in combos: print(combo) Explanation: It doesn't have to be pairs; we can also try to find all the triplets of items. End of explanation
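As an aside (this example is not part of the original lecture), the configuration-search use case mentioned above translates directly into a few lines of code; the column names below are made up purely for illustration.

```python
import itertools

# Hypothetical column names; any list of feature names would work the same way.
columns = ['age', 'income', 'height', 'weight']

# Every unordered pair of columns you could plot against each other.
for x_axis, y_axis in itertools.combinations(columns, 2):
    print("Candidate plot: {} vs {}".format(x_axis, y_axis))
```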
9,267
Given the following text description, write Python code to implement the functionality described below step by step Description: Find Natural Neighbors Verification Finding natural neighbors in a triangulation A triangle is a natural neighbor of a point if that point is within a circumradius of the circumcenter of a circumscribed circle containing the triangle. Step1: Since finding natural neighbors already calculates circumcenters and circumradii, return that information for later use. The key of the neighbors dictionary refers to the test point index, and the list of integers are the triangles that are natural neighbors of that particular test point. Since point 4 is far away from the triangulation, it has no natural neighbors. Point 3 is at the confluence of several triangles so it has many natural neighbors. Step2: We can then use the information in tri_info later. The dictionary key is the index of a particular triangle in the Delaunay triangulation data structure. 'cc' is that triangle's circumcenter, and 'r' is the radius of the circumcircle containing that triangle.
Python Code: import matplotlib.pyplot as plt import numpy as np from scipy.spatial import Delaunay from metpy.interpolate.geometry import find_natural_neighbors # Create test observations, test points, and plot the triangulation and points. gx, gy = np.meshgrid(np.arange(0, 20, 4), np.arange(0, 20, 4)) pts = np.vstack([gx.ravel(), gy.ravel()]).T tri = Delaunay(pts) fig, ax = plt.subplots(figsize=(15, 10)) for i, inds in enumerate(tri.simplices): pts = tri.points[inds] x, y = np.vstack((pts, pts[0])).T ax.plot(x, y) ax.annotate(i, xy=(np.mean(x), np.mean(y))) test_points = np.array([[2, 2], [5, 10], [12, 13.4], [12, 8], [20, 20]]) for i, (x, y) in enumerate(test_points): ax.plot(x, y, 'k.', markersize=6) ax.annotate('test ' + str(i), xy=(x, y)) Explanation: Find Natural Neighbors Verification Finding natural neighbors in a triangulation A triangle is a natural neighbor of a point if that point is within a circumradius of the circumcenter of a circumscribed circle containing the triangle. End of explanation neighbors, tri_info = find_natural_neighbors(tri, test_points) print(neighbors) Explanation: Since finding natural neighbors already calculates circumcenters and circumradii, return that information for later use. The key of the neighbors dictionary refers to the test point index, and the list of integers are the triangles that are natural neighbors of that particular test point. Since point 4 is far away from the triangulation, it has no natural neighbors. Point 3 is at the confluence of several triangles so it has many natural neighbors. End of explanation fig, ax = plt.subplots(figsize=(15, 10)) for i, inds in enumerate(tri.simplices): pts = tri.points[inds] x, y = np.vstack((pts, pts[0])).T ax.plot(x, y) ax.annotate(i, xy=(np.mean(x), np.mean(y))) # Using circumcenter and radius information from tri_info, plot circumcircles and # circumcenters for each triangle. for _idx, item in tri_info.items(): ax.plot(item['cc'][0], item['cc'][1], 'k.', markersize=5) circ = plt.Circle(item['cc'], item['r'], edgecolor='k', facecolor='none', transform=fig.axes[0].transData) ax.add_artist(circ) ax.set_aspect('equal', 'datalim') plt.show() Explanation: We can then use the information in tri_info later. The dictionary key is the index of a particular triangle in the Delaunay triangulation data structure. 'cc' is that triangle's circumcenter, and 'r' is the radius of the circumcircle containing that triangle. End of explanation
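As an optional sanity check (not part of the original example), the definition quoted at the top can be verified directly from the returned structures. This sketch assumes the neighbors, tri_info, and test_points objects from the cells above are still in scope.

```python
import numpy as np

# For every reported natural neighbor, confirm the test point really lies
# within the circumradius of that triangle's circumcircle.
for point_idx, triangle_ids in neighbors.items():
    point = np.asarray(test_points[point_idx])
    for tri_id in triangle_ids:
        center = np.asarray(tri_info[tri_id]['cc'])
        radius = tri_info[tri_id]['r']
        distance = np.hypot(*(point - center))
        assert distance <= radius, (point_idx, tri_id, distance, radius)
print('All reported natural neighbors satisfy the circumradius condition.')
```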
9,268
Given the following text description, write Python code to implement the functionality described below step by step Description: Unsupervised Learning - Principal Components Analysis Timothy Helton <br> <font color="red"> NOTE Step1: Exercise 1 - Crowdedness at the Campus Gym The dataset consists of 26,000 people counts (about every 10 minutes) over the last year. In addition, I gathered extra info including weather and semester-specific information that might affect how crowded it is. The label is the number of people, which I'd like to predict given some subset of the features. Label Step2: Findings The two temperature variables show a week correlation to correlation to number of people in the gym. The following variables show minimal correlation to number of people in the gym. day_number weekend start_of_semester The holiday variable shows no correlation. Run PCA Step3: Findings From the PCA analysis the last two principle componenets will be neglected. For initial investigations neglecting the last three priniciple components would be justifiable. Step4: Exercise 2 - IMDB Movie Data How can we tell the greatness of a movie before it is released in cinema? This question puzzled me for a long time since there is no universal way to claim the goodness of movies. Many people rely on critics to gauge the quality of a film, while others use their instincts. But it takes the time to obtain a reasonable amount of critics review after a movie is released. And human instinct sometimes is unreliable. To answer this question, I scraped 5000+ movies from IMDB website using a Python library called "scrapy". The scraping process took 2 hours to finish. In the end, I was able to obtain all needed 28 variables for 5043 movies and 4906 posters (998MB), spanning across 100 years in 66 countries. There are 2399 unique director names, and thousands of actors/actresses. Below are the 28 variables Step5: Findings From the top four correlation joint plots it appears that the IMBD score does not have any dominate drivers. Step6: Findings For this dataset the first ten principle components. This will capture just shy of 90% of the variance.
Python Code: from k2datascience import pca from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" %matplotlib inline Explanation: Unsupervised Learning - Principal Components Analysis Timothy Helton <br> <font color="red"> NOTE: <br> This notebook uses code found in the <a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/pca.py"> <strong>k2datascience.pca</strong></a> module. To execute all the cells do one of the following items: <ul> <li>Install the k2datascience package to the active Python interpreter.</li> <li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li> <li>Create a link to the pca.py file in the same directory as this notebook.</li> </font> Imports End of explanation gym = pca.Gym(label_column='people') gym.data_name = 'Gym' gym.feature_columns = [ 'day_number', 'weekend', 'holiday', 'apparent_temp', 'temp', 'start_of_semester', 'seconds', ] header = '#' * 25 print(f'\n\n{header}\n### Data Head\n{header}') gym.data.head() print(f'\n\n{header}\n### Data Overview\n{header}') gym.data.info() print(f'\n\n{header}\n### Summary Statistics\n{header}') gym.data.describe() print(f'\n\n{header}\n### Absolute Correlation\n{header}') (gym.data.corr() .people .abs() .sort_values(ascending=False)) gym.plot_correlation_heatmap() gym.plot_correlation() Explanation: Exercise 1 - Crowdedness at the Campus Gym The dataset consists of 26,000 people counts (about every 10 minutes) over the last year. In addition, I gathered extra info including weather and semester-specific information that might affect how crowded it is. The label is the number of people, which I'd like to predict given some subset of the features. Label: Number of people Features: timestamp (int; number of seconds since beginning of day) day_of_week (int; 0 - 6) is_weekend (int; 0 or 1) is_holiday (int; 0 or 1) apparent_temperature (float; degrees fahrenheit) temperature (float; degrees fahrenheit) is_start_of_semester (int; 0 or 1) Based off the Kaggle dataset. Task - We are going to apply Principal Component Analysis on the given dataset using scikit-learn (bonus points if you use your own optimized Python version). We want to find the components with the maximum variance. Features with little or no variance are dropped and then the data is trained on transformed dataset to apply machine learning models. Read in the gym dataset. Explore the data, the summay statistics and identify any strong positive or negative correlations between the features. Convert temperature and apparent temperature from Fahrenheit to Celcius. Extract the features to a new dataframe. The column you would eventually predict is number_people. Make a heatmap of the correlation. Run PCA on the feature dataframe, and plot the explained variance ratio of the principal components. Which components would you drop and why? Re-run PCA on the feature dataframe, restricting it to the number of principal components you want and plot the explained variance ratios again. End of explanation gym.plot_variance(fig_size=(14,4)) gym.scree_plot() Explanation: Findings The two temperature variables show a week correlation to correlation to number of people in the gym. The following variables show minimal correlation to number of people in the gym. day_number weekend start_of_semester The holiday variable shows no correlation. 
Run PCA End of explanation gym.n_components = 5 gym.calc_components() gym.plot_variance() gym.scree_plot() Explanation: Findings From the PCA analysis the last two principle componenets will be neglected. For initial investigations neglecting the last three priniciple components would be justifiable. End of explanation movie = pca.Movies(label_column='imdb_score') movie.data_name = 'Movie' movie.feature_columns = movie.data_numeric.columns header = '#' * 25 print(f'\n\n{header}\n### Data Head\n{header}') movie.data.head() print(f'\n\n{header}\n### Data Overview\n{header}') movie.data.info() print(f'\n\n{header}\n### Summary Statistics\n{header}') movie.data.describe() print(f'\n\n{header}\n### Absolute Correlation\n{header}') (movie.data.corr() .imdb_score .abs() .sort_values(ascending=False)) movie.top_correlation_joint_plots() Explanation: Exercise 2 - IMDB Movie Data How can we tell the greatness of a movie before it is released in cinema? This question puzzled me for a long time since there is no universal way to claim the goodness of movies. Many people rely on critics to gauge the quality of a film, while others use their instincts. But it takes the time to obtain a reasonable amount of critics review after a movie is released. And human instinct sometimes is unreliable. To answer this question, I scraped 5000+ movies from IMDB website using a Python library called "scrapy". The scraping process took 2 hours to finish. In the end, I was able to obtain all needed 28 variables for 5043 movies and 4906 posters (998MB), spanning across 100 years in 66 countries. There are 2399 unique director names, and thousands of actors/actresses. Below are the 28 variables: "movie_title" "color" "num_critic_for_reviews" "movie_facebook_likes" "duration" "director_name" "director_facebook_likes" "actor_3_name" "actor_3_facebook_likes" "actor_2_name" "actor_2_facebook_likes" "actor_1_name" "actor_1_facebook_likes" "gross" "genres" "num_voted_users" "cast_total_facebook_likes" "facenumber_in_poster" "plot_keywords" "movie_imdb_link" "num_user_for_reviews" "language" "country" "content_rating" "budget" "title_year" "imdb_score" "aspect_ratio" Based off the Kaggle dataset. Task - We are going to apply Principal Component Analysis on the given dataset using scikit-learn (bonus points if you use your own optimized Python version). We want to find the components with the maximum variance. Features with little or no variance are dropped and then the data is trained on transformed dataset to apply machine learning models. Read in the movie dataset. Explore the data, the summay statistics and identify any strong positive or negative correlations between the features. Some columns contain numbers, while others contain words. Do some filtering to extract only the numbered columns and not the ones with words into a new dataframe. Remove null values and standardize the values. Create hexbin visualizations to get a feel for how the correlations between different features compare to one another. Can you draw any conclusions about the features? Create a heatmap of the pearson correlation of movie features. Detail your observations. Perform PCA on the dataset, and plot the individual and cumulative explained variance superimposed on the same graph. How many components do you want to use? Implement PCA and transform the dataset. Create a 2D and 3D scatter plot of the the 1st 2 and the 1st 3 components. Do you notice any distinct clusters in the plots? 
(For future clustering assignment) End of explanation movie.plot_correlation_heatmap() movie.plot_variance(fig_size=(14,4)) movie.scree_plot() Explanation: Findings From the top four correlation joint plots it appears that the IMDB score does not have any dominant drivers. End of explanation movie.n_components = 10 movie.calc_components() movie.plot_variance() movie.scree_plot() movie.plot_component_2_vs_1() movie.plot_componets_1_2_3() Explanation: Findings For this dataset the first ten principal components will be retained. This will capture just shy of 90% of the variance. End of explanation
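Since the exercise statement asks for scikit-learn but the cells above rely on a course-specific wrapper, here is a hedged sketch of roughly equivalent plain scikit-learn calls. The use of movie.data and the preprocessing choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assumed: movie.data holds the movie DataFrame loaded above.
numeric = movie.data.select_dtypes(include=[np.number]).dropna()
scaled = StandardScaler().fit_transform(numeric)

pca_model = PCA(n_components=10)
pca_model.fit(scaled)

print('Explained variance ratio per component:', pca_model.explained_variance_ratio_)
print('Variance captured by the first 10 components:', pca_model.explained_variance_ratio_.sum())
```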
9,269
Given the following text description, write Python code to implement the functionality described below step by step Description: Contact Binary Hierarchy Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). Step1: Here we'll initialize a default binary, but ask for it to be created as a contact system. Step2: We'll compare this to the default detached binary. Step3: Hierarchy Let's first look at the hierarchy of the default detached binary, and then compare that to the hierarchy of the overcontact system. Step4: As you can see, the overcontact system has an additional "component" with method "envelope" and component label "contact_envelope". Next let's look at the parameters in the envelope and star components. You can see that most of the parameters in the envelope class are constrained, while the equivalent radius of the primary is unconstrained. The value of the primary equivalent radius constrains the potential and fillout factor of the envelope, as well as the equivalent radius of the secondary. Step5: Now, of course, if we didn't originally know we wanted a contact binary and built the default detached system, we could still turn it into a contact binary just by changing the hierarchy. Step6: However, since our system was detached, the system is not overflowing, and therefore doesn't pass system checks. Step7: And because of this, the potential and requiv@secondary constraints cannot be updated from the constraints. Step8: Likewise, we can make a contact system detached again simply by removing the envelope from the hierarchy. The parameters themselves will still exist (unless you remove them), so you can always just change the hierarchy again to change back to an overcontact system. Step9: Although the constraints have been removed, PHOEBE has lost the original value of the secondary radius (because of the failed contact constraints), so we'll have to reset that here as well. Step10: Adding Datasets Step11: For comparison, we'll do the same to our detached system. Step12: Running Compute Step13: Synthetics To ensure compatibility with computing synthetics in detached and semi-detached systems in Phoebe, the synthetic meshes for our overcontact system are attached to each component separately, instead of the contact envelope. Step14: Plotting Meshes Step15: Orbits Step16: Light Curves Step17: RVs
Python Code: #!pip install -I "phoebe>=2.3,<2.4" import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() Explanation: Contact Binary Hierarchy Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation b_cb = phoebe.default_binary(contact_binary=True) Explanation: Here we'll initialize a default binary, but ask for it to be created as a contact system. End of explanation b_detached = phoebe.default_binary() Explanation: We'll compare this to the default detached binary End of explanation print(b_detached.hierarchy) print(b_cb.hierarchy) Explanation: Hierarchy Let's first look at the hierarchy of the default detached binary, and then compare that to the hierarchy of the overcontact system End of explanation print(b_cb.filter(component='contact_envelope', kind='envelope', context='component')) print(b_cb.filter(component='primary', kind='star', context='component')) b_cb['requiv@primary'] = 1.5 b_cb['pot@contact_envelope@component'] b_cb['fillout_factor@contact_envelope@component'] b_cb['requiv@secondary@component'] Explanation: As you can see, the overcontact system has an additional "component" with method "envelope" and component label "contact_envelope". Next let's look at the parameters in the envelope and star components. You can see that most of parameters in the envelope class are constrained, while the equivalent radius of the primary is unconstrained. The value of primary equivalent radius constrains the potential and fillout factor of the envelope, as well as the equivalent radius of the secondary. End of explanation b_detached.add_component('envelope', component='contact_envelope') hier = phoebe.hierarchy.binaryorbit(b_detached['binary'], b_detached['primary'], b_detached['secondary'], b_detached['contact_envelope']) print(hier) b_detached.filter(context='constraint',constraint_func='pitch',component='primary') b_detached.set_hierarchy(hier) print(b_detached.hierarchy) Explanation: Now, of course, if we didn't originally know we wanted a contact binary and built the default detached system, we could still turn it into an contact binary just by changing the hierarchy. End of explanation print(b_detached.run_checks()) Explanation: However, since our system was detached, the system is not overflowing, and therefore doesn't pass system checks End of explanation b_detached['pot@component'] b_detached['requiv@secondary@component'] Explanation: And because of this, the potential and requiv@secondary constraints cannot be updated from the constraints. End of explanation hier = phoebe.hierarchy.binaryorbit(b_detached['binary'], b_detached['primary'], b_detached['secondary']) print(hier) b_detached.set_hierarchy(hier) print(b_detached.hierarchy) Explanation: Likewise, we can make a contact system detached again simply by removing the envelope from the hierarchy. The parameters themselves will still exist (unless you remove them), so you can always just change the hierarchy again to change back to an overcontact system. End of explanation b_detached['requiv@secondary@component'] = 1.0 Explanation: Although the constraints have been removed, PHOEBE has lost the original value of the secondary radius (because of the failed contact constraints), so we'll have to reset that here as well. 
End of explanation b_cb.add_dataset('mesh', compute_times=[0], dataset='mesh01') b_cb.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01') b_cb.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01') b_cb.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01') Explanation: Adding Datasets End of explanation b_detached.add_dataset('mesh', compute_times=[0], dataset='mesh01') b_detached.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01') b_detached.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01') b_detached.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01') Explanation: For comparison, we'll do the same to our detached system End of explanation b_cb.run_compute(irrad_method='none') b_detached.run_compute(irrad_method='none') Explanation: Running Compute End of explanation print(b_cb['mesh01@model'].components) print(b_detached['mesh01@model'].components) Explanation: Synthetics To ensure compatibility with computing synthetics in detached and semi-detached systems in Phoebe, the synthetic meshes for our overcontact system are attached to each component separetely, instead of the contact envelope. End of explanation afig, mplfig = b_cb['mesh01@model'].plot(x='ws', show=True) afig, mplfig = b_detached['mesh01@model'].plot(x='ws', show=True) Explanation: Plotting Meshes End of explanation afig, mplfig = b_cb['orb01@model'].plot(x='ws',show=True) afig, mplfig = b_detached['orb01@model'].plot(x='ws',show=True) Explanation: Orbits End of explanation afig, mplfig = b_cb['lc01@model'].plot(show=True) afig, mplfig = b_detached['lc01@model'].plot(show=True) Explanation: Light Curves End of explanation afig, mplfig = b_cb['rv01@model'].plot(show=True) afig, mplfig = b_detached['rv01@model'].plot(show=True) Explanation: RVs End of explanation
9,270
Given the following text description, write Python code to implement the functionality described below step by step Description: When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts. Step1: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https Step2: For this analysis I am going to group data by the car maker. The make is not directly stored in the data, but all the names start with the make, so extract the first word in that column. Step3: The data has some inconsistencies with the make strings (misspellings or alternate spellings). Do some simple fixes. Step4: In this plot we are going to show the average miles per gallon (MPG) rating for each car maker. We can use the pivot_table feature of pandas to get this information from the data. (Excel and other spreadsheets have similar functionality.) Step5: There are many different makers represented in this data set, but several have only a few cars and perhaps are therefore not a signficant sample. Filter out the car makers that have fewer than 10 entries in the data. (Mostly I'm doing this to make these examples fit better even though it works OK with all the data, too.) Step6: Add a column with a car maker index so that we can plot by index. Note that we have filtered the make by those manufacturers that have at least 10 models, so any make with less than 10 models is filtered out. Step7: Now use toyplot to plot the MPG of every car (that matches our criteria), organized by manufacturer.
Python Code: import pandas import numpy import toyplot import toyplot.pdf import toyplot.png import toyplot.svg print('Pandas version: ', pandas.__version__) print('Numpy version: ', numpy.__version__) print('Toyplot version: ', toyplot.__version__) Explanation: When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts. End of explanation column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'Model Year', 'Origin', 'Car Name'] data = pandas.read_table('auto-mpg.data', delim_whitespace=True, names=column_names, index_col=False) Explanation: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https://archive.ics.uci.edu/ml/datasets/Auto+MPG. The data are stored in a text file containing columns of data. We use the pandas.read_table() method to parse the data and load it in a pandas DataFrame. The file does not contain a header row, so we need to specify the names of the columns manually. End of explanation data['Make'] = data['Car Name'].str.split().str.get(0) Explanation: For this analysis I am going to group data by the car maker. The make is not directly stored in the data, but all the names start with the make, so extract the first word in that column. End of explanation data.ix[data['Make'] == 'chevroelt', 'Make'] = 'chevrolet' data.ix[data['Make'] == 'chevy', 'Make'] = 'chevrolet' data.ix[data['Make'] == 'maxda', 'Make'] = 'mazda' data.ix[data['Make'] == 'mercedes-benz', 'Make'] = 'mercedes' data.ix[data['Make'] == 'vokswagen', 'Make'] = 'volkswagen' data.ix[data['Make'] == 'vw', 'Make'] = 'volkswagen' Explanation: The data has some inconsistencies with the make strings (misspellings or alternate spellings). Do some simple fixes. End of explanation average_mpg_per_make = data.pivot_table(columns='Make', values='MPG', aggfunc='mean') len(average_mpg_per_make.index) Explanation: In this plot we are going to show the average miles per gallon (MPG) rating for each car maker. We can use the pivot_table feature of pandas to get this information from the data. (Excel and other spreadsheets have similar functionality.) End of explanation count_mpg_per_make = data.pivot_table(columns='Make', values='MPG', aggfunc='count') filtered_mpg = \ average_mpg_per_make[count_mpg_per_make >= 10]. \ sort_values(ascending=False) filtered_mpg Explanation: There are many different makers represented in this data set, but several have only a few cars and perhaps are therefore not a signficant sample. Filter out the car makers that have fewer than 10 entries in the data. (Mostly I'm doing this to make these examples fit better even though it works OK with all the data, too.) End of explanation make_to_index = pandas.Series(index=filtered_mpg.index, data=xrange(0, len(filtered_mpg))) data['Make Index'] = numpy.array(make_to_index[data['Make']]) Explanation: Add a column with a car maker index so that we can plot by index. Note that we have filtered the make by those manufacturers that have at least 10 models, so any make with less than 10 models is filtered out. End of explanation canvas = toyplot.Canvas('4in', '2.6in') axes = canvas.cartesian(bounds=(41,-9,6,-58), ylabel = 'MPG') axes.scatterplot(data.dropna()['Make Index'], data.dropna()['MPG'], marker='-', size=15, opacity=0.75) # Label the x axis on the make. 
This is a bit harder than it should be. axes.x.ticks.locator = \ toyplot.locator.Explicit(labels=filtered_mpg.index) axes.x.ticks.labels.angle = 45 # It's usually best to make the y-axis 0-based. axes.y.domain.min = 0 toyplot.pdf.render(canvas, 'Detail.pdf') toyplot.svg.render(canvas, 'Detail.svg') toyplot.png.render(canvas, 'Detail.png', scale=5) Explanation: Now use toyplot to plot the MPG of every car (that matches our criteria), organized by manufacturer. End of explanation
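A small optional aside (not in the original walkthrough): the same per-make summary can be built with a plain groupby instead of pivot_table, which some readers may find more familiar. This assumes the data DataFrame from above.

```python
# Equivalent of the pivot_table calls above: mean MPG per make, keeping only
# makes that appear at least 10 times.
average_mpg = data.groupby('Make')['MPG'].mean()
counts = data.groupby('Make')['MPG'].count()
print(average_mpg[counts >= 10].sort_values(ascending=False))
```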
9,271
Given the following text description, write Python code to implement the functionality described below step by step Description: TFX Guided Project on Vertex Learning Objectives Step1: Step 1. Environment setup Environment variable setup Let's set some environment variables to use Vertex Pipelines. Change your region if needed. Step2: Step 2. Copy the predefined template to your project directory. In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template. You may give your pipeline a different name by changing the PIPELINE_NAME below. Step3: This will also become the name of the project directory where your files will be put Step4: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regresssion, this template could be used as a starting point. The tfx template copy CLI command copies predefined template files into your project directory. Step5: Next we will need to build the Docker container that will run the TFX components on Vertex and push it to the Google Cloud Registry associated with the project Step6: Let's move into the TFX project scaffold generated by tfx template and create a Dockerfile in there Step7: We can now build and push the container Step8: Step 3. Browse your copied source files The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the Chicago Taxi dataset. Here is brief introduction to each of the Python files Step9: Let's quickly go over the structure of a test file to test Tensorflow code Step10: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features Step11: We now create this bucket in case it does not exist Step12: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later. Step13: The pipeline is now ready to be compiled and executed. You will produce pipeline.json artifact describing the template TFX pipeline and that can be executed on Vertex as a Vertex pipeline using the following compilation command Step14: You should now see a pipeline.json file in PROJECT_DIR (which should be the current working directory, since we cd into it earlier) Step15: To launch the execution of this pipeline on Vertex, we will use the aiplatform sdk Step16: This pipeline is minimal and only comprises the CSVExampleGen component. In the next sections, we will add more and more components to this pipeline by uncommenting and modifying the TFX scaffold generated by tfx template. You'll be able to see the pipeline runing at Step17: Check pipeline outputs You'll be able to see the pipeline runing at Step18: You'll be able to see the pipeline runing at
Python Code: import os from google.cloud import aiplatform Explanation: TFX Guided Project on Vertex Learning Objectives: Learn how to generate a standard TFX template pipeline using tfx template Learn how to modify and run a templated TFX pipeline on Vertex End of explanation shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null GOOGLE_CLOUD_PROJECT = shell_output[0] REGION = "us-central1" %env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT} %env REGION={REGION} Explanation: Step 1. Environment setup Environment variable setup Let's set some environment variables to use Vertex Pipelines. Change your region if needed. End of explanation PIPELINE_NAME = "tfx-guided-project-on-vertex" Explanation: Step 2. Copy the predefined template to your project directory. In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template. You may give your pipeline a different name by changing the PIPELINE_NAME below. End of explanation PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME) PROJECT_DIR Explanation: This will also become the name of the project directory where your files will be put: End of explanation !tfx template copy \ --pipeline-name={PIPELINE_NAME} \ --destination-path={PROJECT_DIR} \ --model=taxi Explanation: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regresssion, this template could be used as a starting point. The tfx template copy CLI command copies predefined template files into your project directory. End of explanation # Docker image name for the pipeline image. CUSTOM_TFX_IMAGE = f"gcr.io/{GOOGLE_CLOUD_PROJECT}/{PIPELINE_NAME}" CUSTOM_TFX_IMAGE Explanation: Next we will need to build the Docker container that will run the TFX components on Vertex and push it to the Google Cloud Registry associated with the project: End of explanation %cd {PROJECT_DIR} %%writefile Dockerfile FROM gcr.io/tfx-oss-public/tfx:1.4.0 RUN pip install -U pip RUN pip install google-cloud-aiplatform==1.7.1 kfp==1.8.1 WORKDIR /pipeline COPY . ./ ENV PYTHONPATH="/pipeline:${PYTHONPATH}" Explanation: Let's move into the TFX project scaffold generated by tfx template and create a Dockerfile in there: End of explanation !gcloud builds submit --timeout 15m --tag $CUSTOM_TFX_IMAGE . Explanation: We can now build and push the container: End of explanation !python -m models.features_test !python -m models.keras.model_test Explanation: Step 3. Browse your copied source files The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the Chicago Taxi dataset. Here is brief introduction to each of the Python files: pipeline - This directory contains the definition of the pipeline * configs.py โ€” defines common constants for pipeline runners * pipeline.py โ€” defines TFX components and a pipeline models - This directory contains ML model definitions. * features.py, features_test.py โ€” defines features for the model * preprocessing.py, preprocessing_test.py โ€” defines preprocessing jobs using tf::Transform models/estimator - This directory contains an Estimator based model. * constants.py โ€” defines constants of the model * model.py, model_test.py โ€” defines DNN model using TF estimator models/keras - This directory contains a Keras based model. 
* constants.py โ€” defines constants of the model * model.py, model_test.py โ€” defines DNN model using Keras local_runner.py, kubeflow_runner.py, kubeflow_v2_runner.py โ€” define runners for each orchestration engine Running the tests: You might notice that there are some files with _test.py in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines. You can run unit tests using the python -m and supplying the path to the test module. You can usually get a module name by deleting .py extension and replacing / with .. For example: End of explanation !tail -26 models/features_test.py Explanation: Let's quickly go over the structure of a test file to test Tensorflow code: End of explanation GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + "-kubeflowpipelines-default" GCS_BUCKET_NAME Explanation: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features: python from models import features To implement test cases start by defining your own test class inheriting from tf.test.TestCase: python class FeaturesTest(tf.test.TestCase): Wen you execute the test file with bash python -m models.features_test the main method python tf.test.main() will parse your test class (here: FeaturesTest) and execute every method whose name starts by test. Here we have two such methods for instance: python def testNumberOfBucketFeatureBucketCount(self): def testTransformedNames(self): So when you want to add a test case, just add a method to that test class whose name starts by test. Now inside the body of these test methods is where the actual testing takes place. In this case for instance, testTransformedNames test the function features.transformed_name and makes sure it outputs what is expected. Since your test class inherits from tf.test.TestCase it has a number of helper methods you can use to help you create tests, as for instance python self.assertEqual(expected_outputs, obtained_outputs) that will fail the test case if obtained_outputs do the match the expected_outputs. Typical examples of test case you may want to implement for machine learning code would comprise test insurring that your model builds correctly, your preprocessing function preprocesses raw data as expected, or that your model can train successfully on a few mock examples. When writing tests make sure that their execution is fast (we just want to check that the code works not actually train a performant model when testing). For that you may have to create synthetic data in your test files. For more information, read the tf.test.TestCase documentation and the Tensorflow testing best practices. Step 4. Run your first TFX pipeline Components in the TFX pipeline will generate outputs for each run as ML Metadata Artifacts, and they need to be stored in a GCS bucket accessible from Vertex. Let us create this bucket. Its name will be &lt;YOUR_PROJECT&gt;-kubeflowpipelines-default. Note: The name of this bucket can be changed, but then it will also need to be changed in the generated ./pipeline/configs.py file, which also defines a corresponding GCS_BUCKET_NAME variable. 
End of explanation !gsutil ls | grep ^gs://{GCS_BUCKET_NAME}/$ || gsutil mb -l {REGION} gs://{GCS_BUCKET_NAME} Explanation: We now create this bucket in case it does not exist: End of explanation !gsutil cp data/data.csv gs://{GCS_BUCKET_NAME}/tfx-template/data/taxi/data.csv Explanation: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later. End of explanation !tfx pipeline compile --engine vertex --pipeline_path kubeflow_v2_runner.py Explanation: The pipeline is now ready to be compiled and executed. You will produce pipeline.json artifact describing the template TFX pipeline and that can be executed on Vertex as a Vertex pipeline using the following compilation command: End of explanation ls pipeline.json Explanation: You should now see a pipeline.json file in PROJECT_DIR (which should be the current working directory, since we cd into it earlier): End of explanation aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=REGION) pipeline = aiplatform.PipelineJob( display_name=PIPELINE_NAME, template_path="pipeline.json", enable_caching=True, ) pipeline.run() Explanation: To launch the execution of this pipeline on Vertex, we will use the aiplatform sdk: End of explanation !tfx pipeline compile --engine vertex --pipeline_path kubeflow_v2_runner.py aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=REGION) pipeline = aiplatform.PipelineJob( display_name=PIPELINE_NAME, template_path="pipeline.json", enable_caching=True, ) pipeline.run() Explanation: This pipeline is minimal and only comprises the CSVExampleGen component. In the next sections, we will add more and more components to this pipeline by uncommenting and modifying the TFX scaffold generated by tfx template. You'll be able to see the pipeline runing at: https://console.cloud.google.com/vertex-ai/pipelines Step 5. Add components for data validation. In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with Tensorflow Data Validation. Double-click to change directory to pipeline and double-click again to open pipeline.py. Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it. You now need to update the existing pipeline with modified pipeline definition and trigger another run on Vertex (the cell above that runs the pipeline may need to be interrupted to allow for the execution of the two next cells) : End of explanation !tfx pipeline compile --engine vertex --pipeline_path kubeflow_v2_runner.py aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=REGION) pipeline = aiplatform.PipelineJob( display_name=PIPELINE_NAME, template_path="pipeline.json", enable_caching=True, ) pipeline.run() Explanation: Check pipeline outputs You'll be able to see the pipeline runing at: https://console.cloud.google.com/vertex-ai/pipelines Step 6. Add components for training In this step, you will add components for training and model validation including Transform, Trainer, Resolver, Evaluator, and Pusher. Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. 
(Tip: search for TODO(step 6):) You now need to update the existing pipeline with the modified pipeline definition and trigger another run on Vertex: End of explanation !tfx pipeline compile --engine vertex --pipeline_path kubeflow_v2_runner.py aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=REGION) pipeline = aiplatform.PipelineJob( display_name=PIPELINE_NAME, template_path="pipeline.json", enable_caching=True, ) pipeline.run() Explanation: You'll be able to see the pipeline running at: https://console.cloud.google.com/vertex-ai/pipelines Step 7. Try BigQueryExampleGen BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline. Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function. We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline. Double-click to open configs.py. Uncomment the definitions of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct value for your GCP project. Note: You MUST set your GCP region in the configs.py file before proceeding. Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is tfx-guided-project-on-vertex if you didn't change it. Double-click to open kubeflow_v2_runner.py. Uncomment the two arguments, query and beam_pipeline_args, for the create_pipeline function. Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6. End of explanation
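One optional follow-up (not part of the original guide) is to peek at what the runs wrote to the artifact bucket. The exact output prefix depends on how the generated runner configures the pipeline root, so this simply lists the top level of the bucket created earlier.

```python
# List the contents of the artifact bucket used by the pipeline runs above.
!gsutil ls gs://{GCS_BUCKET_NAME}/
```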
9,272
Given the following text description, write Python code to implement the functionality described below step by step Description: Goal If the DNA species distribution is truely Gaussian in a buoyant density gradient, then what sigma would be needed to reproduce the detection of all taxa > 0.1% in abundance throughout the entire gradient If 1e10 16S rRNA copies in community, then 0.1% abundant taxon = 1e7 If detection limit = 1 molecule, then probability density of normal distribution across the entire gradient that we sequence must be >= 1e-7 ie., at least 1 of the 1e7 16S rRNA DNA molecules in every gradient fraction Method assess PDF across gradient for different levels of sigma Setting parameters Step1: Init Step2: GC min-max Step3: How big must sigma be to detect throughout the gradient Step4: Notes sigma must be >= 18 to have taxon detected in all gradients assuming mean GC of taxon fragments is 30% How small would the fragments need to be to explain this just from diffusion (Clay et al., 2003)? How small of fragments would be needed to get the observed detection threshold? sigma distribution of fragment GC for the reference dataset genomes Step5: Percent of taxa that would be detected in all fraction depending on the fragment BD stdev with accounting for diffusion
Python Code: %load_ext rpy2.ipython workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/default_run/' %%R sigmas = seq(1, 50, 1) means = seq(30, 70, 1) # mean GC content of 30 to 70% ## max 13C shift max_13C_shift_in_BD = 0.036 ## min BD (that we care about) min_GC = 13.5 min_BD = min_GC/100.0 * 0.098 + 1.66 ## max BD (that we care about) max_GC = 80 max_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C max_BD = max_BD + max_13C_shift_in_BD Explanation: Goal If the DNA species distribution is truely Gaussian in a buoyant density gradient, then what sigma would be needed to reproduce the detection of all taxa > 0.1% in abundance throughout the entire gradient If 1e10 16S rRNA copies in community, then 0.1% abundant taxon = 1e7 If detection limit = 1 molecule, then probability density of normal distribution across the entire gradient that we sequence must be >= 1e-7 ie., at least 1 of the 1e7 16S rRNA DNA molecules in every gradient fraction Method assess PDF across gradient for different levels of sigma Setting parameters End of explanation %%R library(dplyr) library(tidyr) library(ggplot2) library(gridExtra) import numpy as np import pandas as pd import scipy.stats as stats import dill %%R GC2BD = function(GC) GC / 100.0 * 0.098 + 1.66 GC2BD(50) %>% print BD2GC = function(BD) (BD - 1.66) / 0.098 * 100 BD2GC(1.709) %>% print Explanation: Init End of explanation %%R min_GC = BD2GC(min_BD) max_GC = BD2GC(max_BD) cat('Min-max GC:', min_GC, max_GC, '\n') Explanation: GC min-max End of explanation %%R # where is density > 1e-7 detect_thresh = function(mean, sd){ dens = dnorm(0:117, mean=mean, sd=sd) all(dens > 1e-7) } df = expand.grid(means, sigmas) colnames(df) = c('mean', 'sigma') df$detect = mapply(detect_thresh, mean=df$mean, sd=df$sigma) df %>% head(n=4) %%R -w 600 # plotting ggplot(df, aes(mean, sigma, fill=detect)) + geom_tile(color='black') + theme_bw() + theme( text = element_text(size=16) ) Explanation: How big must sigma be to detect throughout the gradient End of explanation # loading fragments F = os.path.join(workDir, '1', 'fragsParsed.pkl') with open(F, 'rb') as inFH: frags = dill.load(inFH) stds = [] for x in frags: otu = x[0] for scaf,arr in x[1].items(): arr = np.array(arr) sd = np.std(arr[:,2]) # fragment GC stds.append([otu, scaf, sd]) stds = np.array(stds) %%R -i stds -w 500 -h 300 stds = stds %>% as.data.frame colnames(stds) = c('taxon', 'scaffold', 'sigma') stds = stds %>% mutate(sigma = sigma %>% as.character %>% as.numeric) ggplot(stds, aes(sigma)) + geom_histogram() + theme_bw() + theme( text = element_text(size=16) ) %%R # using 10% quantile ## a relatively small, but not totally outlier of a sigma ## this will require a lot of diffusion q10 = quantile(stds$sigma, probs=c(0.1)) %>% as.vector q10 %%R # function for sigma diffusion (Clay et al., 2003) sigma_dif = function(L){ sqrt(44.5 / L) } # function for calculating total sigma (fragment buoyant density) based on mean fragment length total_sigma = function(L, sigma_start){ # L = fragment length (kb) # start_sigma = genome sigma prior to diffusion sigma_D = sigma_dif(L) sqrt(sigma_D**2 + sigma_start**2) } frag_lens = seq(0.1, 20, 0.1) total_sd = sapply(frag_lens, total_sigma, sigma_start=q10) df = data.frame('length__kb' = frag_lens, 'sigma' = total_sd) df %>% head %%R -w 600 -h 350 # plotting ggplot(df, aes(length__kb, sigma)) + geom_point() + geom_line() + geom_hline(yintercept=18, linetype='dashed', alpha=0.7) + labs(x='Fragment length (kb)', y='Standard deviation of fragment BD\n(+diffusion)') + 
theme_bw() + theme( text = element_text(size=16) ) Explanation: Notes sigma must be >= 18 to have taxon detected in all gradients assuming mean GC of taxon fragments is 30% How small would the fragments need to be to explain this just from diffusion (Clay et al., 2003)? How small of fragments would be needed to get the observed detection threshold? sigma distribution of fragment GC for the reference dataset genomes End of explanation %%R sigma_thresh = 18 frag_lens = seq(0.1, 20, 0.1) df = expand.grid(stds$sigma, frag_lens) colnames(df) = c('sigma', 'length__kb') df$total_sd = mapply(total_sigma, df$length__kb, df$sigma) df$detect = ifelse(df$total_sd >= sigma_thresh, 1, 0) df = df %>% group_by(length__kb) %>% summarize(n = n(), detected = sum(detect), detect_perc = detected / n * 100) df %>% head(n=3) %%R -w 600 -h 350 # plotting ggplot(df, aes(length__kb, detect_perc)) + geom_point() + geom_line() + labs(x='Fragment length (kb)', y='% of taxa detected in all\ngradient fractions') + theme_bw() + theme( text = element_text(size=16) ) Explanation: Percent of taxa that would be detected in all fraction depending on the fragment BD stdev with accounting for diffusion End of explanation
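For readers following along in Python rather than R, here is an illustrative restatement (not from the original notebook) of the diffusion formulas used above. The starting sigma of 5 is an arbitrary placeholder rather than the q10 value computed in the R cells.

```python
import numpy as np

def sigma_diffusion(length_kb):
    # Diffusion contribution to fragment BD spread (same Clay et al., 2003 form as above).
    return np.sqrt(44.5 / length_kb)

def total_sigma(length_kb, sigma_start):
    # Combine the starting spread with the diffusion term in quadrature.
    return np.sqrt(sigma_diffusion(length_kb) ** 2 + sigma_start ** 2)

lengths = np.arange(0.1, 20.1, 0.1)
totals = total_sigma(lengths, sigma_start=5.0)  # 5.0 is a placeholder starting sigma
above = lengths[totals >= 18]
print('Longest fragment length (kb) still reaching sigma >= 18:',
      above.max() if above.size else None)
```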
9,273
Given the following text description, write Python code to implement the functionality described below step by step Description: Recap In order of priority/time taken basalareaincremementnonspatialaw this is actually slow because of the number of times the BAFromZeroToDataAw function is called as shown above relaxing the tolerance may help indeed the tolerance is 0.01 * some value while the other factor finder functions have 0.1 tolerance i think can also use cython for the increment functions vectorize merch and gross volume functions they require a lot of getting scalars off data frame, which is quite slow. faster to get an array do a profiling run with IO (of reading input data and writing the plot curves to files) in next run Decide on the action speed up increment functions use cython for increment functions it turns out this may not help that much. the function is pretty fast, it's called almost 500,000 times on the sample of 300 plots reduce the number of times its called maybe by using gradient descent for the optimization? relax the tolerance gives a context to refactor them as well (into their own module) which would be a welcome change the increment functions use numpy functions but operate on scalars, there is no benefit to using numpy functions there performance-wise, it is not clear that this will pay off so much. vectorizing the volume functions is probably wiser Characterize what is happening Step1: The original gross volume function checks that top height is greater than 0 ``` python def GrossTotalVolume_Pl(BA_Pl, topHeight_Pl) Step2: MWEs If we can rewrite it to handle 0s properly, i.e. to return 0 where an input is 0, then it is trivial to vectorize Step3: Timings Step4: The array method is 20x faster. This is worth implementing. We should also add tests to help be explicity about the behaviour of these volume functions. Revise the code Go on. Do it. Tests Yes, though data was changed as the new implementations yield NaN where input is NaN, instead of yielding 0. Review code changes Step5: Run timings From last time Step6: It yielded a 13% reduction in the time. Run profiling Step7: Compare performance visualizations Now use either of these commands to visualize the profiling ``` pyprof2calltree -k -i forward-sim-1.prof forward-sim-3.txt or dc run --service-ports snakeviz notebooks/forward-sim-3.prof ``` Old New Summary of performance improvements The calculation of gross and merchantable volume is drastically faster now; under profiling it decrease to 1 second from 22 seconds. A lot of that seems to be profiler overhead, as when using gypsy simulate CLI it only got 15% faster; however I expect i/o is obfuscating the outcome there. Profile with I/O
Python Code: import pandas as pd import numpy as np Explanation: Recap In order of priority/time taken basalareaincremementnonspatialaw this is actually slow because of the number of times the BAFromZeroToDataAw function is called as shown above relaxing the tolerance may help indeed the tolerance is 0.01 * some value while the other factor finder functions have 0.1 tolerance i think can also use cython for the increment functions vectorize merch and gross volume functions they require a lot of getting scalars off data frame, which is quite slow. faster to get an array do a profiling run with IO (of reading input data and writing the plot curves to files) in next run Decide on the action speed up increment functions use cython for increment functions it turns out this may not help that much. the function is pretty fast, it's called almost 500,000 times on the sample of 300 plots reduce the number of times its called maybe by using gradient descent for the optimization? relax the tolerance gives a context to refactor them as well (into their own module) which would be a welcome change the increment functions use numpy functions but operate on scalars, there is no benefit to using numpy functions there performance-wise, it is not clear that this will pay off so much. vectorizing the volume functions is probably wiser Characterize what is happening End of explanation from gypsy.GYPSYNonSpatial import GrossTotalVolume_Pl GrossTotalVolume_Pl(np.random.random(10) * 100, np.random.random(10) * 100) Explanation: The original gross volume function checks that top height is greater than 0 ``` python def GrossTotalVolume_Pl(BA_Pl, topHeight_Pl): Tvol_Pl = 0 if topHeight_Pl &gt; 0: a1 = 0.194086 a2 = 0.988276 a3 = 0.949346 a4 = -3.39036 Tvol_Pl = a1* (BA_Pl**a2) * (topHeight_Pl **a3) * numpy.exp(1+(a4/((topHeight_Pl**2)+1))) return Tvol_Pl ``` This makes it fail if trying to use it on an array: End of explanation def GrossTotalVolume_Pl_arr(BA_Pl, topHeight_Pl): a1 = 0.194086 a2 = 0.988276 a3 = 0.949346 a4 = -3.39036 Tvol_Pl = a1* (BA_Pl**a2) * (topHeight_Pl **a3) * np.exp(1+(a4/((topHeight_Pl**2)+1))) return Tvol_Pl print(GrossTotalVolume_Pl_arr(10, 10)) print(GrossTotalVolume_Pl_arr(0, 10)) print(GrossTotalVolume_Pl_arr(10, 0)) print(GrossTotalVolume_Pl_arr(np.random.random(10) * 100, np.random.random(10) * 100)) print(GrossTotalVolume_Pl_arr(np.zeros(10) * 100, np.random.random(10) * 100)) Explanation: MWEs If we can rewrite it to handle 0s properly, i.e. to return 0 where an input is 0, then it is trivial to vectorize End of explanation ba = np.random.random(1000) * 100 top_height = np.random.random(1000) * 100 d = pd.DataFrame({'ba': ba, 'th': top_height}) %%timeit d.apply( lambda x: GrossTotalVolume_Pl( x.at['ba'], x.at['th'] ), axis=1 ) %%timeit GrossTotalVolume_Pl_arr(ba, top_height) Explanation: Timings End of explanation %%bash git log --since "2016-11-14 19:30" --oneline # 19:30 GMT/UTC ! git diff "HEAD~$(git log --since "2016-11-14 19:30" --oneline | wc -l)" ../gypsy Explanation: The array method is 20x faster. This is worth implementing. We should also add tests to help be explicity about the behaviour of these volume functions. Revise the code Go on. Do it. Tests Yes, though data was changed as the new implementations yield NaN where input is NaN, instead of yielding 0. 
Review code changes End of explanation %%bash # git checkout 36941343aca2df763f93192abef461093918fff4 -b vectorize-volume-functions # time gypsy simulate ../private-data/prepped_random_sample_300.csv --output-dir tmp # rm -rfd tmp # real 4m51.287s # user 4m41.770s # sys 0m1.070s 45/336. Explanation: Run timings From last time: real 5m36.407s user 5m25.740s sys 0m2.140s After cython'ing iter functions: End of explanation from gypsy.forward_simulation import simulate_forwards_df data = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10) %%prun -D forward-sim-3.prof -T forward-sim-3.txt -q result = simulate_forwards_df(data) !head forward-sim-3.txt Explanation: It yielded a 13% reduction in the time. Run profiling End of explanation ! rm -rfd gypsy-output output_dir = 'gypsy-output' %%prun -D forward-sim-2.prof -T forward-sim-2.txt -q # restart the kernel first data = pd.read_csv('../private-data/prepped_random_sample_300.csv', index_col=0, nrows=10) result = simulate_forwards_df(data) os.makedirs(output_dir) for plot_id, df in result.items(): filename = '%s.csv' % plot_id output_path = os.path.join(output_dir, filename) df.to_csv(output_path) Explanation: Compare performance visualizations Now use either of these commands to visualize the profiling ``` pyprof2calltree -k -i forward-sim-1.prof forward-sim-3.txt or dc run --service-ports snakeviz notebooks/forward-sim-3.prof ``` Old New Summary of performance improvements The calculation of gross and merchantable volume is drastically faster now; under profiling it decrease to 1 second from 22 seconds. A lot of that seems to be profiler overhead, as when using gypsy simulate CLI it only got 15% faster; however I expect i/o is obfuscating the outcome there. Profile with I/O End of explanation
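One design note worth adding (this variant is illustrative and not what was merged): if the original behaviour of returning 0 for non-positive top heights is ever needed again, it can be kept while staying vectorized by masking with np.where. The constants are copied from the function shown above.

```python
import numpy as np

def gross_total_volume_pl_guarded(ba, top_height):
    ba = np.asarray(ba, dtype=float)
    top_height = np.asarray(top_height, dtype=float)
    a1, a2, a3, a4 = 0.194086, 0.988276, 0.949346, -3.39036
    volume = a1 * ba**a2 * top_height**a3 * np.exp(1 + a4 / (top_height**2 + 1))
    # Reproduce the old scalar guard: zero volume wherever top height is not positive.
    return np.where(top_height > 0, volume, 0.0)

print(gross_total_volume_pl_guarded([10, 0, 5], [10, 10, 0]))
```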
9,274
Given the following text description, write Python code to implement the functionality described below step by step Description: Simulation Archive A Simulation Archive (Rein & Tamayo 2017) is useful when one runs long simulations. With the Simulation Archive, one can easily take snapshots of the simulation, and then later restart and analyize it. Since Spring 2018, the default Simulation Archive version is 2. Version 2 works with all integrators and very few restrictions that apply (you need to be careful when using function pointers). To illustrate the Simulation Archive, let us setup a simulation of a two planet system and turn on the Simulation Archive. This is done with the following code Step1: The first argument of automateSimulationArchive is the path and name of the binary file to write to, the interval argument specifies the interval at which snapshots of the simulation are saved (in whichever code units you work). The smaller the interval, the larger the file size, but the faster the access. The deletefile=True flag makes REBOUND delete the file if it already exists. We now integrate the simulation forward in time. This should take a few seconds. Step2: We can now delete the simulation. Note that we could also have run the simulation using the C version of REBOUND. This might be useful if one wants to run a long simulation on a cluster and doesn't want to bother with installing python. In C, one can initialize the Simulation Archive with (you need to delete the file manually if it already exists) Step3: We now look at the Simulation Archive. You could do this at a later time, on a different computer, with a different version of REBOUND and it will still work. Step4: Let's first print the number of snapshots and the time of the first and last snaphot in the archive Step5: We can access each snapshot by indexing the Simulation Archive. This returns a REBOUND simulation object that corresponds to that time. Everything is accurate down to the last bit. That means one could use this simulation object and restart the simulation, the final coordinates of the planets will be exactly the same as in the original simulation. Step6: One can also step through every simulation in the archive using the generator functionality, for example to store the eccentricity of the inner planet as a function of time Step7: If we want to access a simulation at a specific time, such as in-between snapshots, one can use the getSimulation() function Step8: By default, the function returns a simulation that corresponds to the snapshot that is nearby. To get closer to the requested time, one can use the mode attribute Step9: In the above code, REBOUND looks up a nearby snaphot and then integrates the simulation forward in time to get close to the request time. As one can see, with mode="close", one gets a simulation very close to the request time, but it is still slightly off. This is because WHFast uses a fixed timestep. If we want to reach the requested time eactly, we have to change the timestep. Changing a timestep in a symplectic integrator can cause problems, but if one really wants to get a simulation object at the exact time (for example to match observations), then the mode="exact" flag does that. Step10: Requesting a simulation at any time between tmin and tmax only takes a few seconds at most (keep in mind, REBOUND integrates the simulation from the nearest snaphot to the requested time). To analyze a large simulation, you might want to do this in parallel. 
We can easily do that by using REBOUND's InterruptiblePool. In the following example, we calculate the distance between the two planets at 432 times in the interval $[t_{min},t_{max}]$. Step11: Note that in the above example, we use an initializer function so that each thread has its own Simulation Archive. Note Since Spring 2018, the SimulationArchive object always returns a new Simulation object when you request a simulation from the archive. In earlier versions, it kept a reference to one Simulation object internally, updated it when a new time was requested, and then returned a reference. Manual Snapshots With the new version of the simulation archive you can also add snapshots manually, giving you further control beyond the automated options used above. This can be useful to save snapshots when particular conditions like collisions or ejections occur. Here we give an example that saves logarithmically spaced snapshots Step12: We now iterate over an array of logarithmically spaced times, and save a snapshot after each using the manual simulationarchive_snapshot function. If no file with that filename exists, it will create a new one first. Note that if it doesn't already exist, it will always append a snapshot to the file, so you need to delete any existing file when starting a new simulation. Step13: We now plot the energy error at each of the snapshots
Python Code: import rebound import numpy as np sim = rebound.Simulation() sim.add(m=1.) sim.add(m=1e-3, a=1.) sim.add(m=1e-3, a=1.9) sim.move_to_com() sim.dt = sim.particles[1].P*0.05 # timestep is 5% of orbital period sim.integrator = "whfast" sim.automateSimulationArchive("archive.bin",interval=1e3,deletefile=True) Explanation: Simulation Archive A Simulation Archive (Rein & Tamayo 2017) is useful when one runs long simulations. With the Simulation Archive, one can easily take snapshots of the simulation, and then later restart and analyize it. Since Spring 2018, the default Simulation Archive version is 2. Version 2 works with all integrators and very few restrictions that apply (you need to be careful when using function pointers). To illustrate the Simulation Archive, let us setup a simulation of a two planet system and turn on the Simulation Archive. This is done with the following code: End of explanation sim.integrate(1e6) Explanation: The first argument of automateSimulationArchive is the path and name of the binary file to write to, the interval argument specifies the interval at which snapshots of the simulation are saved (in whichever code units you work). The smaller the interval, the larger the file size, but the faster the access. The deletefile=True flag makes REBOUND delete the file if it already exists. We now integrate the simulation forward in time. This should take a few seconds. End of explanation del sim Explanation: We can now delete the simulation. Note that we could also have run the simulation using the C version of REBOUND. This might be useful if one wants to run a long simulation on a cluster and doesn't want to bother with installing python. In C, one can initialize the Simulation Archive with (you need to delete the file manually if it already exists): c struct reb_simulation* sim = reb_create_simulation(); ... reb_simulationarchive_automate_interval("archive.bin",1e3); End of explanation sa = rebound.SimulationArchive("archive.bin") Explanation: We now look at the Simulation Archive. You could do this at a later time, on a different computer, with a different version of REBOUND and it will still work. End of explanation print("Number of snapshots: %d" % len(sa)) print("Time of first and last snapshot: %.1f, %.1f" % (sa.tmin, sa.tmax)) Explanation: Let's first print the number of snapshots and the time of the first and last snaphot in the archive: End of explanation sim = sa[500] print(sim.t, sim.particles[1]) Explanation: We can access each snapshot by indexing the Simulation Archive. This returns a REBOUND simulation object that corresponds to that time. Everything is accurate down to the last bit. That means one could use this simulation object and restart the simulation, the final coordinates of the planets will be exactly the same as in the original simulation. End of explanation eccentricities = np.zeros(len(sa)) for i, sim in enumerate(sa): eccentricities[i] = sim.particles[1].e Explanation: One can also step through every simulation in the archive using the generator functionality, for example to store the eccentricity of the inner planet as a function of time: End of explanation sim = sa.getSimulation(12345.6) print(sim.t) Explanation: If we want to access a simulation at a specific time, such as in-between snapshots, one can use the getSimulation() function: End of explanation sim = sa.getSimulation(12345.6, mode="close") print(sim.t) Explanation: By default, the function returns a simulation that corresponds to the snapshot that is nearby. 
To get closer to the requested time, one can use the mode attribute: End of explanation sim = sa.getSimulation(12345.6, mode="exact") print(sim.t) Explanation: In the above code, REBOUND looks up a nearby snaphot and then integrates the simulation forward in time to get close to the request time. As one can see, with mode="close", one gets a simulation very close to the request time, but it is still slightly off. This is because WHFast uses a fixed timestep. If we want to reach the requested time eactly, we have to change the timestep. Changing a timestep in a symplectic integrator can cause problems, but if one really wants to get a simulation object at the exact time (for example to match observations), then the mode="exact" flag does that. End of explanation def thread_init(*rest): global sat sat = rebound.SimulationArchive("archive.bin") def analyze(t): sim = sat.getSimulation(t,mode="close") d12 = sim.particles[1] - sim.particles[2] return np.sqrt(d12.x*d12.x+d12.y*d12.y+d12.z*d12.z) pool = rebound.InterruptiblePool(initializer=thread_init) times = np.linspace(sa.tmin, sa.tmax, 432) distances = pool.map(analyze,times) Explanation: Requesting a simulation at any time between tmin and tmax only takes a few seconds at most (keep in mind, REBOUND integrates the simulation from the nearest snaphot to the requested time). To analyze a large simulation, you might want to do this in parallel. We can easily do that by using REBOUND's InterruptiblePool. In the following example, we calculate the distance between the two planets at 432 times in the interval $[t_{min},t_{max}]$. End of explanation sim = rebound.Simulation() sim.add(m=1.) sim.add(m=1e-3, a=1.) sim.add(m=1e-3, a=1.9) sim.move_to_com() sim.dt = sim.particles[1].P*0.05 # timestep is 5% of orbital period sim.integrator = "whfast" Explanation: Note that in the above example, we use an initializer function so that each thread has its own Simulation Archive. Note Since Spring 2018, the SimulationArchive object always returns a new Simulation object when you request a simulation from the archive. In earlier versions, it kept a reference to one Simulation object internally, updated it when a new time was requested, and then returned a reference. Manual Snapshots With the new version of the simulation archive you can also add snapshots manually, giving you further control beyond the automated options used above. This can be useful to save snapshots when particular conditions like collisions or ejections occur. Here we give an example that saves logarithmically spaced snapshots End of explanation filename = 'testsa.bin' Nout = 1000 times = np.logspace(0, 4, Nout)*sim.particles[1].P for i, time in enumerate(times): sim.integrate(time, exact_finish_time=0) # need outputs on the nearest WHFast timesteps to the times we pass to get symplectic behavior sim.simulationarchive_snapshot(filename) Explanation: We now iterate over an array of logarithmically spaced times, and save a snapshot after each using the manual simulationarchive_snapshot function. If no file with that filename exists, it will create a new one first. Note that if it doesn't already exist, it will always append a snapshot to the file, so you need to delete any existing file when starting a new simulation. 
End of explanation
sa = rebound.SimulationArchive(filename)
sim0 = sa[0]
P = sim0.particles[1].P
E0 = sim0.calculate_energy()  # reference energy taken from the first snapshot
Eerr = np.zeros(Nout)
for i, sim in enumerate(sa):
    E = sim.calculate_energy()
    Eerr[i] = np.abs((E-E0)/E0)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(times/sim0.particles[1].P, Eerr, '.')
ax.set_xscale('log'); ax.set_yscale('log')
ax.set_xlabel('time [orbits]'); ax.set_ylabel('relative energy error');
Explanation: We now plot the energy error at each of the snapshots End of explanation
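The text above notes that any snapshot can be used to restart a run with bit-identical coordinates. A minimal sketch of that workflow, using only calls already shown in this entry (the extra 1e4 time units are an arbitrary illustrative choice):
sa = rebound.SimulationArchive("archive.bin")
sim_restart = sa[len(sa) - 1]                 # most recent snapshot
sim_restart.integrate(sim_restart.t + 1e4)    # continue the integration from there
print(sim_restart.t, sim_restart.particles[1].e)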
9,275
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> Text Classification using TensorFlow/Keras on AI Platform </h1> This notebook illustrates Step1: Note Step2: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub. We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources. Creating Dataset from BigQuery Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. Here is a sample of the dataset Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http Step5: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform. Step6: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https Step7: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation. We can also see that within each dataset, the classes are roughly balanced. Step8: Finally we will save our data, which is currently in-memory, to disk. Step9: TensorFlow/Keras Code Please explore the code in this <a href="txtclsmodel/trainer">directory</a> Step10: Train on the Cloud Let's first copy our training data to the cloud Step11: Change the job name appropriately. View the job in the console, and wait until the job is complete. Step12: Results What accuracy did you get? You should see around 80%. Rerun with Pre-trained Embedding We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times. You can read more about Glove at the project homepage
Python Code: !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst !pip install --user google-cloud-bigquery==1.25.0 Explanation: <h1> Text Classification using TensorFlow/Keras on AI Platform </h1> This notebook illustrates: <ol> <li> Creating datasets for AI Platform using BigQuery <li> Creating a text classification model using the Estimator API with a Keras model <li> Training on Cloud AI Platform <li> Rerun with pre-trained embedding </ol> End of explanation # change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION os.environ['TFVERSION'] = '2.5' if 'COLAB_GPU' in os.environ: # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached from google.colab import auth auth.authenticate_user() # download "sidecar files" since on Colab, this notebook will be on Drive !rm -rf txtclsmodel !git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst !mv training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ . !rm -rf training-data-analyst # downgrade TensorFlow to the version this notebook has been tested with !pip install --upgrade tensorflow==$TFVERSION import tensorflow as tf print(tf.__version__) Explanation: Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. End of explanation %load_ext google.cloud.bigquery %%bigquery --project $PROJECT SELECT url, title, score FROM `bigquery-public-data.hacker_news.stories` WHERE LENGTH(title) > 10 AND score > 10 AND LENGTH(url) > 0 LIMIT 10 Explanation: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub. We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources. Creating Dataset from BigQuery Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. Here is a sample of the dataset: End of explanation %%bigquery --project $PROJECT SELECT ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source, COUNT(title) AS num_articles FROM `bigquery-public-data.hacker_news.stories` WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$') AND LENGTH(title) > 10 GROUP BY source ORDER BY num_articles DESC LIMIT 10 Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i> End of explanation from google.cloud import bigquery bq = bigquery.Client(project=PROJECT) query= SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM (SELECT ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source, title FROM `bigquery-public-data.hacker_news.stories` WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$') AND LENGTH(title) > 10 ) WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch') df = bq.query(query + " LIMIT 5").to_dataframe() df.head() Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. 
This will be our labeled dataset for AI Platform. End of explanation traindf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").to_dataframe() evaldf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").to_dataframe() Explanation: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning). End of explanation traindf['source'].value_counts() evaldf['source'].value_counts() Explanation: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation. We can also see that within each dataset, the classes are roughly balanced. End of explanation import os, shutil DATADIR='data/txtcls' shutil.rmtree(DATADIR, ignore_errors=True) os.makedirs(DATADIR) traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t') evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t') !head -3 data/txtcls/train.tsv !wc -l data/txtcls/*.tsv Explanation: Finally we will save our data, which is currently in-memory, to disk. End of explanation %%bash pip install google-cloud-storage rm -rf txtcls_trained gcloud ai-platform local train \ --module-name=trainer.task \ --package-path=${PWD}/txtclsmodel/trainer \ -- \ --output_dir=${PWD}/txtcls_trained \ --train_data_path=${PWD}/data/txtcls/train.tsv \ --eval_data_path=${PWD}/data/txtcls/eval.tsv \ --num_epochs=0.1 Explanation: TensorFlow/Keras Code Please explore the code in this <a href="txtclsmodel/trainer">directory</a>: model.py contains the TensorFlow model and task.py parses command line arguments and launches off the training job. In particular look for the following: tf.keras.preprocessing.text.Tokenizer.fit_on_texts() to generate a mapping from our word vocabulary to integers tf.keras.preprocessing.text.Tokenizer.texts_to_sequences() to encode our sentences into a sequence of their respective word-integers tf.keras.preprocessing.sequence.pad_sequences() to pad all sequences to be the same length The embedding layer in the keras model takes care of one-hot encoding these integers and learning a dense emedding represetation from them. Finally we pass the embedded text representation through a CNN model pictured below Run Locally (optional step) Let's make sure the code compiles by running locally for a fraction of an epoch. This may not work if you don't have all the packages installed locally for gcloud (such as in Colab). This is an optional step; move on to training on the cloud. 
End of explanation %%bash gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/ %%bash OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S) gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/txtclsmodel/trainer \ --job-dir=$OUTDIR \ --scale-tier=BASIC_GPU \ --runtime-version 2.3 \ --python-version 3.7 \ -- \ --output_dir=$OUTDIR \ --train_data_path=gs://${BUCKET}/txtcls/train.tsv \ --eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \ --num_epochs=5 Explanation: Train on the Cloud Let's first copy our training data to the cloud: End of explanation !gcloud ai-platform jobs describe txtcls_190209_224828 Explanation: Change the job name appropriately. View the job in the console, and wait until the job is complete. End of explanation !gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/ Explanation: Results What accuracy did you get? You should see around 80%. Rerun with Pre-trained Embedding We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times. You can read more about Glove at the project homepage: https://nlp.stanford.edu/projects/glove/ You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed. End of explanation
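For reference, a pre-trained file like glove.6B.200d.txt is usually folded into the Keras model by building an embedding matrix aligned with the tokenizer's vocabulary. The sketch below assumes a fitted Keras tokenizer like the one used in the trainer package and reads the rehosted file from GCS; the variable names are illustrative, not taken from the repo.
import numpy as np
import tensorflow as tf

EMBEDDING_DIM = 200
embeddings_index = {}
with tf.io.gfile.GFile('gs://{}/txtcls/glove.6B.200d.txt'.format(BUCKET)) as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# Row i holds the vector for the word the tokenizer mapped to integer i;
# words missing from GloVe keep a zero vector.
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, EMBEDDING_DIM))
for word, i in tokenizer.word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector
# embedding_matrix can then be used to initialize the model's Embedding layer.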
9,276
Given the following text description, write Python code to implement the functionality described below step by step Description: Analysis of the collected data Comparison of three different filaments Filament from BQ Filament from formfutura Filament from filastruder Step1: We plot both diameters and the puller speed on the same figure
Python Code: %pylab inline
# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns

# Show the version used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))

# Open the files with the data
conclusiones = pd.read_csv('Conclusiones.csv')
columns=['bq','formfutura','filastruder']

# Show a summary of the collected data
conclusiones[columns].describe()
Explanation: Analysis of the collected data Comparison of three different filaments Filament from BQ Filament from formfutura Filament from filastruder End of explanation
graf=conclusiones[columns].plot(figsize=(16,10),ylim=(0.5,2.6))
graf.axhspan(1.65,1.85, alpha=0.2)
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
graf = conclusiones[columns].boxplot(return_type='axes')
graf.axhspan(1.65,1.85, alpha=0.2)
Explanation: We plot both diameters and the puller speed on the same figure End of explanation
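Since the shaded band in both plots marks the 1.65-1.85 mm diameter window, a quick follow-up is to compute how much of each filament stays inside it. This is a sketch reusing the conclusiones DataFrame and columns list from above; reading the band as the acceptable tolerance is an assumption.
tolerance_low, tolerance_high = 1.65, 1.85
# Fraction of samples inside the highlighted band, per filament
inside_fraction = conclusiones[columns].apply(
    lambda s: s.between(tolerance_low, tolerance_high).mean())
print(inside_fraction)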
9,277
Given the following text description, write Python code to implement the functionality described below step by step Description: Literature results Band gap engineering in amorphous $Al_xGa_{1-x}N$ Experiment and ab initio calculations, Appl. Phys. Lett. 77, 1117 (2000) Step1: Band gap engineering of mixed Cd(1-x)Zn (x) Se thin films, J. Alloys Compd., 703, 40-44, (2017) Step2: Band gap engineering of ZnO by doping with Mg, Phys. Scr. 90(8), 085502, (2015) Step3: Band-gap engineering for removing shallow traps in rare-earth Lu3Al5O12 garnet scintillators using Ga3+ doping, Phys. Rev. B, 84(8) 081102, (2011)
Python Code: import matplotlib.pyplot as plt import seaborn as sns import numpy as np %matplotlib inline xs = [0.0, 0.3305234864554154, 0.5015690020887643, 0.5719846500105247, 0.6616169303259445, 0.7943943392865815, 1.0] exp = [3.27, 3.973509933774835, 4.56953642384106, 4.56953642384106, 4.668874172185431, 5.178807947019868, 5.95] predicted = [ [3.3234272, 3.443913, 2.8061478, 3.6181135, 2.7264364, 3.5291746], [3.3851593, 4.1667223, 3.6109998, 4.2762685, 3.2637808, 3.9524834], [3.7519681, 4.5286236, 4.0720024, 4.8389845, 3.7563384, 4.3596983], [3.9960487, 4.735317, 4.2778697, 5.0086565, 3.9825325, 4.511471], [4.405121, 5.0085635, 4.5971274, 5.240199, 4.266697, 4.6896057], [5.000762, 5.3106785, 5.1751385, 5.5603375, 4.6557055, 4.8134403], [5.612711, 5.6376424, 6.206752, 5.748086, 5.1321826, 3.567781]] #predicted = [3.241202, 3.7759016, 4.2179356, 4.418649, 4.701219, 5.0860105, 5.3175263] #errors = [0.34811336, 0.38219276, 0.39871, 0.37541276, 0.334767, 0.3028004, 0.84278023] plt.rcParams['font.size'] = 22 plt.rcParams['font.family'] = 'Arial' plt.figure(figsize=(5.8, 5)) color = '#F67F12' # plot. Set color of marker edge flierprops = dict(marker='o', markerfacecolor=color, markersize=6, linestyle='none', markeredgecolor=color) box = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.1, flierprops=flierprops, patch_artist=True, boxprops=dict(facecolor=color, color='k')) for i in box['boxes']: plt.setp(i, zorder=0) for i in box['medians']: plt.setp(i, color='w') h, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment') plt.xlabel('$x$ in $\mathregular{Al_{x}Ga_{1-x}N}$') plt.ylabel('$E_g$ (eV)') plt.legend([ h, box['boxes'][1]], ['Experiment', 'Model'], frameon=False) plt.xlim([-0.1, 1.1]) plt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0], [0, 0.2, 0.4, 0.6, 0.8, 1.0]) plt.yticks([3.0, 4.0, 5.0, 6.0], [3.0, 4.0, 5.0, 6.0]) plt.tight_layout() plt.savefig('AlxGa1-xN.pdf') Explanation: Literature results Band gap engineering in amorphous $Al_xGa_{1-x}N$ Experiment and ab initio calculations, Appl. Phys. Lett. 77, 1117 (2000) End of explanation xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] exp = [1.67, 1.82, 2.03, 2.2, 2.35, 2.6] predicted = [[1.8370335, 1.8265239, 1.84641, 1.7628204, 1.6473178, 2.057174], [2.0188313, 1.7471551, 1.84461, 1.8407636, 1.6486685, 2.1046824], [2.296093, 1.7560242, 1.9315724, 2.1663737, 2.0004115, 2.1264758], [2.4811623, 2.1458392, 2.4289386, 2.3242218, 2.259788, 2.2726748], [2.2881293, 2.6813347, 2.914834, 2.524885, 2.53475, 2.9929655], [2.343427, 2.7570426, 2.9132087, 2.7734382, 2.847426, 2.5522652]] plt.figure(figsize=(6, 5)) # plot. Set color of marker edge flierprops = dict(marker='o', markerfacecolor=color, markersize=6, linestyle='none', markeredgecolor=color) box = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.1, flierprops=flierprops, patch_artist=True, boxprops=dict(facecolor=color, color='k')) for i in box['boxes']: plt.setp(i, zorder=0) for i in box['medians']: plt.setp(i, color='w') h, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment') plt.xlim([-0.1, 1.1]) plt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0], [0, 0.2, 0.4, 0.6, 0.8, 1.0]) plt.xlabel('$x$ in $\mathregular{Cd_{1-x}Zn_xSe}$') plt.ylabel('$E_g$ (eV)') plt.tight_layout() plt.savefig('Cd1-xZnxSe.pdf') Explanation: Band gap engineering of mixed Cd(1-x)Zn (x) Se thin films, J. 
Alloys Compd., 703, 40-44, (2017) End of explanation xs = [0.0, 0.01, 0.04, 0.08, 0.12, 0.16] exp = [3.1389807162534433, 3.1685950413223143, 3.229201101928375, 3.3201101928374657, 3.36900826446281, 3.4413223140495868] predicted = [[3.0789735, 3.5106642, 3.297291, 3.0242383, 3.6874614, 3.4847476], [3.3934112, 3.4003148, 3.426836, 3.158033, 3.5232787, 3.4476407], [3.441858, 3.4407954, 3.8043046, 3.2040539, 3.5728483, 3.547456], [3.4638472, 3.4949489, 4.0739813, 3.2903996, 3.6263444, 3.6790297], [3.4503715, 3.5805423, 3.5919178, 3.4007187, 3.655845, 3.8145654], [3.4545772, 3.717163, 2.974183, 3.5161405, 3.6517656, 3.9443142]] plt.figure(figsize=(5.8, 5)) flierprops = dict(marker='o', markerfacecolor=color, markersize=6, linestyle='none', markeredgecolor=color) box = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.02, flierprops=flierprops, patch_artist=True, boxprops=dict(facecolor=color, color='k')) for i in box['boxes']: plt.setp(i, zorder=0) for i in box['medians']: plt.setp(i, color='w') h, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment') plt.xlabel('$x$ in $\mathregular{Zn_{1-x}Mg_xO}$') plt.ylabel('$E_g$ (eV)') # plt.legend(frameon=False) plt.xticks([0, 0.05, 0.1, 0.15], [0, 0.05, 0.1, 0.15]) plt.xlim([-0.016, 0.177]) plt.yticks([3, 3.5, 4], [3.0, 3.5, 4.0]) #plt.yticks([0., 0.5, 1, 1.5]) plt.tight_layout() plt.savefig('MgxZn1-xO.pdf') Explanation: Band gap engineering of ZnO by doping with Mg, Phys. Scr. 90(8), 085502, (2015) End of explanation xs = [0.0, 0.05, 0.1, 0.2, 0.4, 0.6, 1.0] exp = [5.551115123125783e-17, 0.003409090909090917, -0.030681818181818143, -0.2079545454545454, -0.4363636363636363, -0.7465909090909091, -1.5681818181818183] predicted = [[-0.22912502, 0.13428593, 0.3091731, -0.06367111, 0.21906424, -0.3697219], [-0.2466507, 0.19774532, 0.28103304, -0.1059885, 0.23758173, -0.40776634], [-0.2611394, 0.23995829, 0.23054123, -0.16277695, 0.2702756, -0.46076345], [-0.2759061, 0.23510027, 0.055459976, -0.36338377, 0.37191725, -0.63695955], [-0.33351135, 0.03725624, -0.13293362, -2.0082474, 0.2829337, -1.0578184], [-0.68473625, -0.51482725, -0.41922426, -4.0862846, -0.273458, -1.3728428], [-1.9137676, -1.2256684, -0.9274936, -5.6792936, -1.0770473, -2.3557308]] errors = [0.24319291, 0.26221794, 0.28499138, 0.3530279, 0.7791653, 1.3266295, 1.6350949] plt.figure(figsize=(5.9, 5)) plt.rcParams['font.family'] = 'Arial' plt.rcParams['font.size'] = 22 flierprops = dict(marker='o', markerfacecolor=color, markersize=6, linestyle='none', markeredgecolor=color) box = plt.boxplot(np.array(predicted).T, positions=xs, widths=0.1, flierprops=flierprops, patch_artist=True, boxprops=dict(facecolor=color, color='k')) for i in box['boxes']: plt.setp(i, zorder=0) for i in box['medians']: plt.setp(i, color='w') h, = plt.plot(xs, exp, 'o--', markeredgewidth=2, markersize=10, markerfacecolor='w', label='Experiment') plt.xlabel('$x$ in $\mathregular{Lu_3(Ga_xAl_{1-x})_5O_{12}}$') plt.ylabel('$\Delta E_g$ (eV)') #plt.legend(frameon=False) plt.xlim([-0.1, 1.1]) plt.xticks([0, 0.2, 0.4, 0.6, 0.8, 1.0]) plt.yticks([-4, -2, 0], [-4.0, -2.0, 0]) # plt.yticks([0.5, 1., 1.5]) plt.tight_layout() plt.savefig('Ga_Lu3Al5O12.pdf') Explanation: Band-gap engineering for removing shallow traps in rare-earth Lu3Al5O12 garnet scintillators using Ga3+ doping, Phys. Rev. B, 84(8) 081102, (2011) End of explanation
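To put a number on how well the model tracks the measurements in any one of these series, the experimental values can be compared with the median of the model ensemble. A small sketch using the xs/exp/predicted arrays defined in the last cell; the choice of mean absolute error as the summary metric is illustrative.
pred = np.array(predicted)                  # shape: (n_compositions, n_models)
median_pred = np.median(pred, axis=1)
mae = np.mean(np.abs(median_pred - np.array(exp)))
print('MAE of median prediction vs experiment: {:.2f} eV'.format(mae))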
9,278
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercise 1 Step1: Load and explore data Step2: Part 1 Step3: Scale features and set them to zero mean Step4: Add intercept term to X Step5: Part 2 Step6: Cost at initial theta Step7: Run gradient descent Step8: The theta values found by gradient descent should be [ 340412.65957447, 109447.79646964, -6578.35485416]). Step9: Convergence graph Step10: Estimate the price of a 1650 sqft, 3 bedroom house Step11: Part 3 Step12: Theta found using the normal equations Step13: Price estimation of a 1650sqft house with 3 bedrooms, using theta from the normal equations
Python Code: import pandas import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: Exercise 1: Linear regression with multiple variables End of explanation data = pandas.read_csv('ex1data2.txt', header=None, names=['x1', 'x2', 'y']) data.head() data.shape X = data[['x1', 'x2']].values Y = data['y'].values m = len(data) Explanation: Load and explore data: End of explanation def feature_normalize(X): # FEATURENORMALIZE Normalizes the features in X # FEATURENORMALIZE(X) returns a normalized version of X where # the mean value of each feature is 0 and the standard deviation # is 1. This is often a good preprocessing step to do when # working with learning algorithms. # You need to set these values correctly X_norm = X mu = np.zeros(X.shape[1]) sigma = np.zeros(X.shape[1]) # ====================== YOUR CODE HERE ====================== # Instructions: First, for each feature dimension, compute the mean # of the feature and subtract it from the dataset, # storing the mean value in mu. Next, compute the # standard deviation of each feature and divide # each feature by it's standard deviation, storing # the standard deviation in sigma. # # Note that X is a matrix where each column is a # feature and each row is an example. You need # to perform the normalization separately for # each feature. # # Hint: You might find the 'np.mean' and 'np.std' functions useful. # # ============================================================ return X_norm, mu, sigma Explanation: Part 1: Feature Normalization End of explanation X_norm, mu, sigma = feature_normalize(X) Explanation: Scale features and set them to zero mean: End of explanation X_norm = np.insert(X_norm, 0, 1, 1) X_norm[:2] # choose some alpha value alpha = 0.01 # Init Theta theta = np.zeros(3) iterations = 400 Explanation: Add intercept term to X: End of explanation def compute_cost_multi(X, y, theta): # COMPUTECOSTMULTI Compute cost for linear regression # J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the # parameter for linear regression to fit the data points in X and y # some useful values m = len(X) # You need to return this value correctly: J = 0 # ====================== YOUR CODE HERE ====================== # Instructions: Compute the cost of a particular choice of theta # You should set J to the cost. # ============================================================ return J Explanation: Part 2: Gradient Descent Make sure your implementations of compute_cost and gradient_descent work when X has more than 2 columns! End of explanation compute_cost_multi(X_norm, Y, theta) def gradient_descent_multi(X, y, theta, alpha, num_iters): # GRADIENTDESCENT Performs gradient descent to learn theta # theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by # taking num_iters gradient steps with learning rate alpha # Initialize J_history = np.zeros(num_iters) T_history = np.zeros((num_iters,X.shape[1])) for i in range(num_iters): T_history[i] = theta ### ========= YOUR CODE HERE ============ # Instructions: Perform a single gradient step on the parameter vector theta. 
### ===================================== J_history[i] = compute_cost_multi(X, y, theta) return theta, J_history, T_history Explanation: Cost at initial theta: End of explanation theta, J_history, T_history = gradient_descent_multi(X_norm, Y, theta, alpha, iterations) Explanation: Run gradient descent: End of explanation theta Explanation: The theta values found by gradient descent should be [ 340412.65957447, 109447.79646964, -6578.35485416]). End of explanation pandas.Series(J_history).plot() Explanation: Convergence graph: End of explanation # Estimate the price of a 1650 sq-ft, 3 br house # ====================== YOUR CODE HERE ====================== # Recall that the first column of X is all-ones. Thus, it does # not need to be normalized. price = 0 # ============================================================ price Explanation: Estimate the price of a 1650 sqft, 3 bedroom house: End of explanation data = pandas.read_csv('ex1data2.txt', header=None, names=['x1', 'x2', 'y']) X = data[['x1', 'x2']].values Y = data['y'].values X = np.insert(X, 0, 1, 1) def normal_eqn(X, y): #NORMALEQN Computes the closed-form solution to linear regression # NORMALEQN(X,y) computes the closed-form solution to linear # regression using the normal equations. theta = np.zeros(X.shape[1]); # ====================== YOUR CODE HERE ====================== # Instructions: Complete the code to compute the closed form solution # to linear regression and put the result in theta. # # ============================================================ return theta theta = normal_eqn(X, Y) Explanation: Part 3: Normal Equations The following code computes the closed form solution for linear regression using the normal equations. You should complete the code in normal_eqn(). After doing so, you should complete this code to predict the price of a 1650 sq-ft, 3 br house. End of explanation theta Explanation: Theta found using the normal equations: End of explanation # ====================== YOUR CODE HERE ====================== 0 # ============================================================ Explanation: Price estimation of a 1650sqft house with 3 bedrooms, using theta from the normal equations: End of explanation
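For readers who want to sanity-check their own answers, here is one possible vectorized version of the cost function and the closed-form solution. This is a reference sketch under the notebook's variable names, not the official solution.
def compute_cost_multi_ref(X, y, theta):
    # J = 1/(2m) * ||X theta - y||^2
    m = len(y)
    residual = X.dot(theta) - y
    return residual.dot(residual) / (2 * m)

def normal_eqn_ref(X, y):
    # theta = (X^T X)^{-1} X^T y ; pinv is used for numerical robustness
    return np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y)

theta_ref = normal_eqn_ref(X, Y)
print(compute_cost_multi_ref(X, Y, theta_ref))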
9,279
Given the following text description, write Python code to implement the functionality described below step by step Description: PyTorch dataset interface In this example we will look at how a pyxis LMDB can be used with PyTorch's torch.utils.data.Dataset and torch.utils.data.DataLoader. Step1: As usual, we will begin by creating a small dataset to test with. It will consist of 10 samples, where each input observation has four features and targets are scalar values. Step2: The data is written using a with statement. Step3: To be sure the data was stored correctly, we will read the data back - again using a with statement. Step4: Working with PyTorch Step5: In pyxis.torch we have implemented a wrapper around torch.utils.data.Dataset called pyxis.torch.TorchDataset. This object is not imported into the pyxis name space because it relies on PyTorch being installed. As such, we first need to import pyxis.torch Step6: pyxis.torch.TorchDataset has a single constructor argument Step7: The pyxis.torch.TorchDataset object has only three methods Step8: pyxis.torch.TorchDataset can be directly combined with torch.utils.data.DataLoader to create an iterator type object
Python Code: from __future__ import print_function import numpy as np import pyxis as px Explanation: PyTorch dataset interface In this example we will look at how a pyxis LMDB can be used with PyTorch's torch.utils.data.Dataset and torch.utils.data.DataLoader. End of explanation nb_samples = 10 X = np.outer(np.arange(1, nb_samples + 1, dtype=np.uint8), np.arange(1, 4 + 1, dtype=np.uint8)) y = np.arange(nb_samples, dtype=np.uint8) for i in range(nb_samples): print('Input: {} -> Target: {}'.format(X[i], y[i])) Explanation: As usual, we will begin by creating a small dataset to test with. It will consist of 10 samples, where each input observation has four features and targets are scalar values. End of explanation with px.Writer(dirpath='data', map_size_limit=10, ram_gb_limit=1) as db: db.put_samples('input', X, 'target', y) Explanation: The data is written using a with statement. End of explanation with px.Reader('data') as db: print(db) Explanation: To be sure the data was stored correctly, we will read the data back - again using a with statement. End of explanation try: import torch import torch.utils.data except ImportError: raise ImportError('Could not import the PyTorch library `torch` or ' '`torch.utils.data`. Please refer to ' 'https://pytorch.org/ for installation instructions.') Explanation: Working with PyTorch End of explanation import pyxis.torch as pxt Explanation: In pyxis.torch we have implemented a wrapper around torch.utils.data.Dataset called pyxis.torch.TorchDataset. This object is not imported into the pyxis name space because it relies on PyTorch being installed. As such, we first need to import pyxis.torch: End of explanation dataset = pxt.TorchDataset('data') Explanation: pyxis.torch.TorchDataset has a single constructor argument: dirpath, i.e. the location of the pyxis LMDB. End of explanation len(dataset) dataset[0] dataset Explanation: The pyxis.torch.TorchDataset object has only three methods: __len__, __getitem__, and __repr__, each of which you can see an example of below: End of explanation use_cuda = True and torch.cuda.is_available() kwargs = {"num_workers": 4, "pin_memory": True} if use_cuda else {} loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=False, **kwargs) for i, d in enumerate(loader): print('Batch:', i) print('\t', d['input']) print('\t', d['target']) Explanation: pyxis.torch.TorchDataset can be directly combined with torch.utils.data.DataLoader to create an iterator type object: End of explanation
9,280
Given the following text description, write Python code to implement the functionality described below step by step Description: Check Homework HW05 Use this notebook to check your solutions. This notebook will not be graded. Step1: Now, import your solutions from hw5_answers.py. The following code looks a bit redundant. However, we do this to allow reloading the hw5_answers.py in case you made some changes. Normally, Python assumes that modules don't change and therefore does not try to import them again. Step2: The Employees, Territory, Customers, and Orders tables are the same as those we used in class. Step3: Problem 1 Write a function called get_manager that takes as its one argument the Pandas DataFrame "Employees" and returns a DataFrame containing list of all employees (EmployeeID, first name, middle name, last name), and their manager's first and last name. The columns in the output DataFrame should be Step4: Shape of resulting table Step5: Shape of resulting table Step6: Shape of resulting table
Python Code: import pandas as pd import numpy as np Explanation: Check Homework HW05 Use this notebook to check your solutions. This notebook will not be graded. End of explanation import hw5_answers reload(hw5_answers) from hw5_answers import * Explanation: Now, import your solutions from hw5_answers.py. The following code looks a bit redundant. However, we do this to allow reloading the hw5_answers.py in case you made some changes. Normally, Python assumes that modules don't change and therefore does not try to import them again. End of explanation Employees = pd.read_excel('/home/data/AdventureWorks/Employees.xls') Territory = pd.read_excel('/home/data/AdventureWorks/SalesTerritory.xls') Customers = pd.read_excel('/home/data/AdventureWorks/Customers.xls') Orders = pd.read_excel('/home/data/AdventureWorks/ItemsOrdered.xls') Explanation: The Employees, Territory, Customers, and Orders tables are the same as those we used in class. End of explanation df1 = get_manager(Employees) print "Shape of resulting table: ", df1.shape print "Columns: ", ', '.join(df1.columns) df1.head() Explanation: Problem 1 Write a function called get_manager that takes as its one argument the Pandas DataFrame "Employees" and returns a DataFrame containing list of all employees (EmployeeID, first name, middle name, last name), and their manager's first and last name. The columns in the output DataFrame should be: EmployeeID, FirstName, MiddleName, LastName, ManagerFirstName, ManagerLastName. End of explanation df2 = get_spend_by_order(Orders, Customers) print "Shape of resulting table: ", df2.shape print "Columns: ", ', '.join(df2.columns) df2.head() Explanation: Shape of resulting table: (291, 6) Columns: EmployeeID, FirstName, MiddleName, LastName, ManagerFirstName, ManagerLastName | EmployeeID | FirstName |MiddleName | LastName | ManagerFirstName | ManagerLastName -----------|-----------|-----------|----------|------------------|---------------- 0 | 259 | Ben | T | Miller |Sheela | Word 1 | 278 | Garrett | R | Vargas |Stephen | Jiang 2 | 204 | Gabe | B | Mares | Peter | Krebs 3 | 78 | Reuben | H | D'sa | Peter | Krebs 4 | 255 | Gordon | L | Hee | Sheela | Word Problem 2 Write a functon called get_spend_by_order that takes as its two arguments the Pandas DataFrames "Orders" and "Customers", and returns a DataFrame with the following columns: "FirstName", "LastName", "Item", "TotalSpent", listing all cutomer names, their purchased items, and the total amount spend on that item (remember that the "Price" listed in "Orders" is the price per item). End of explanation df3 = get_order_location(Orders, Customers, Territory) print "Shape of resulting table: ", df3.shape print "Columns: ", ', '.join(df3.columns) df3.head() Explanation: Shape of resulting table: (32, 4) Columns: FirstName, LastName, Item, TotalSpent |FirstName | LastName | Item | TotalSpent ----------|----------|------|----------- 0 | Anthony | Sanchez | Umbrella | 4.5 1 | Conrad | Giles | Ski Poles | 51.0 2 | Conrad | Giles | Tent | 88.0 3 | Donald | Davids | Lawnchair | 128.0 4 | Elroy | Keller | Inflatable Mattress | 38.0 Problem 3 Write a function called get_order_location that takes three arguments: "Orders", "Customers", and "Territory", and returns a DataFrame containing the following columns: "CustomerID", "Name", and "TotalItems", that gives, for each order, the CustomerID, the name of the territory where the order was placed, and the total number of items ordered (yes, 2 ski poles counts as 2 items). 
End of explanation df4 = employee_info(Employees) print "Shape of resulting table: ", df4.shape print "Columns: ", ', '.join(df4.columns) df4.head() Explanation: Shape of resulting table: (11, 3) Columns: CustomerID, Name, TotalItems | CustomerID | Name | TotalItems -----------|------|----------- 0 | 10315 | Central | 1 1 | 10438 | Central | 3 2 | 10439 | Central | 2 3 | 10101 | Northwest | 6 4 | 10299 | Northwest | 2 Problem 4 Write a function called employee_info that takes one argument: "Employees", and returns a DataFrame containing the following columns: JobTitle, NumberOfEmployees, and MeanVacationHours, containing all job titles, the number of employees with that job title, and the mean number of vacation days for employees with that job title. End of explanation
9,281
Given the following text description, write Python code to implement the functionality described below step by step Description: Model comparison To demonstrate the use of model comparison criteria in PyMC3, we implement the 8 schools example from Section 5.5 of Gelman et al (2003), which attempts to infer the effects of coaching on SAT scores of students from 8 schools. Below, we fit a pooled model, which assumes a single fixed effect across all schools, and a hierarchical model that allows for a random effect that partially pools the data. Step1: The data include the observed treatment effects and associated standard deviations in the 8 schools. Step2: Pooled model Step3: Hierarchical model Step4: Deviance Information Criterion (DIC) DIC (Spiegelhalter et al. 2002) is an information theoretic criterion for estimating predictive accuracy that is analogous to Akaike's Information Criterion (AIC). It is a more Bayesian approach that allows for the modeling of random effects, replacing the maximum likelihood estimate with the posterior mean and using the effective number of parameters to correct for bias. Step5: Widely-applicable Information Criterion (WAIC) WAIC (Watanabe 2010) is a fully Bayesian criterion for estimating out-of-sample expectation, using the computed log pointwise posterior predictive density (LPPD) and correcting for the effective number of parameters to adjust for overfitting. Step6: PyMC3 includes two convenience functions to help compare WAIC for different models. The first of this functions is compare, this one computes WAIC (or LOO) from a set of traces and models and returns a DataFrame. Step7: We have many columns so let check one by one the meaning of them Step8: The empty circle represents the values of WAIC and the black error bars associated with them are the values of the standard deviation of WAIC. The value of the lowest WAIC is also indicated with a vertical dashed grey line to ease comparison with other WAIC values. The filled black dots are the in-sample deviance of each model, which for WAIC is 2 pWAIC from the corresponding WAIC value. For all models except the top-ranked one we also get a triangle indicating the value of the difference of WAIC between that model and the top model and a grey errobar indicating the standard error of the differences between the top-ranked WAIC and WAIC for each model. Leave-one-out Cross-validation (LOO) LOO cross-validation is an estimate of the out-of-sample predictive fit. In cross-validation, the data are repeatedly partitioned into training and holdout sets, iteratively fitting the model with the former and evaluating the fit with the holdout data. Vehtari et al. (2016) introduced an efficient computation of LOO from MCMC samples, which are corrected using Pareto-smoothed importance sampling (PSIS) to provide an estimate of point-wise out-of-sample prediction accuracy. Step9: We can also use compare with LOO. Step10: The columns return the equivalent values for LOO, notice that in this example we get two warnings. Also notice that the order of the models is not the same as the one for WAIC. We can also plot the results
Python Code: %matplotlib inline import pymc3 as pm import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_context('notebook') Explanation: Model comparison To demonstrate the use of model comparison criteria in PyMC3, we implement the 8 schools example from Section 5.5 of Gelman et al (2003), which attempts to infer the effects of coaching on SAT scores of students from 8 schools. Below, we fit a pooled model, which assumes a single fixed effect across all schools, and a hierarchical model that allows for a random effect that partially pools the data. End of explanation J = 8 y = np.array([28, 8, -3, 7, -1, 1, 18, 12]) sigma = np.array([15, 10, 16, 11, 9, 11, 10, 18]) Explanation: The data include the observed treatment effects and associated standard deviations in the 8 schools. End of explanation with pm.Model() as pooled: mu = pm.Normal('mu', 0, sd=1e6) obs = pm.Normal('obs', mu, sd=sigma, observed=y) trace_p = pm.sample(2000) pm.traceplot(trace_p); Explanation: Pooled model End of explanation with pm.Model() as hierarchical: eta = pm.Normal('eta', 0, 1, shape=J) mu = pm.Normal('mu', 0, sd=1e6) tau = pm.HalfCauchy('tau', 5) theta = pm.Deterministic('theta', mu + tau*eta) obs = pm.Normal('obs', theta, sd=sigma, observed=y) trace_h = pm.sample(2000) pm.traceplot(trace_h, varnames=['mu']); pm.forestplot(trace_h, varnames=['theta']); Explanation: Hierarchical model End of explanation pooled_dic = pm.dic(trace_p, pooled) pooled_dic hierarchical_dic = pm.dic(trace_h, hierarchical) hierarchical_dic Explanation: Deviance Information Criterion (DIC) DIC (Spiegelhalter et al. 2002) is an information theoretic criterion for estimating predictive accuracy that is analogous to Akaike's Information Criterion (AIC). It is a more Bayesian approach that allows for the modeling of random effects, replacing the maximum likelihood estimate with the posterior mean and using the effective number of parameters to correct for bias. End of explanation pooled_waic = pm.waic(trace_p, pooled) pooled_waic.WAIC hierarchical_waic = pm.waic(trace_h, hierarchical) hierarchical_waic.WAIC Explanation: Widely-applicable Information Criterion (WAIC) WAIC (Watanabe 2010) is a fully Bayesian criterion for estimating out-of-sample expectation, using the computed log pointwise posterior predictive density (LPPD) and correcting for the effective number of parameters to adjust for overfitting. End of explanation df_comp_WAIC = pm.compare((trace_h, trace_p), (hierarchical, pooled)) df_comp_WAIC Explanation: PyMC3 includes two convenience functions to help compare WAIC for different models. The first of this functions is compare, this one computes WAIC (or LOO) from a set of traces and models and returns a DataFrame. End of explanation pm.compare_plot(df_comp_WAIC); Explanation: We have many columns so let check one by one the meaning of them: The first column clearly contains the values of WAIC. The DataFrame is always sorted from lowest to highest WAIC. The index reflects the order in which the models are passed to this function. The second column is the estimated effective number of parameters. In general, models with more parameters will be more flexible to fit data and at the same time could also lead to overfitting. Thus we can interpret pWAIC as a penalization term, intuitively we can also interpret it as measure of how flexible each model is in fitting the data. The third column is the relative difference between the value of WAIC for the top-ranked model and the value of WAIC for each model. 
For this reason we will always get a value of 0 for the first model. Sometimes when comparing models, we do not want to select the "best" model, instead we want to perform predictions by averaging along all the models (or at least several models). Ideally we would like to perform a weighted average, giving more weight to the model that seems to explain/predict the data better. There are many approaches to perform this task, one of them is to use Akaike weights based on the values of WAIC for each model. These weights can be loosely interpreted as the probability of each model (among the compared models) given the data. One caveat of this approach is that the weights are based on point estimates of WAIC (i.e. the uncertainty is ignored). The fifth column records the standard error for the WAIC computations. The standard error can be useful to assess the uncertainty of the WAIC estimates. Nevertheless, caution need to be taken because the estimation of the standard error assumes normality and hence could be problematic when the sample size is low. In the same way that we can compute the standard error for each value of WAIC, we can compute the standard error of the differences between two values of WAIC. Notice that both quantities are not necessarily the same, the reason is that the uncertainty about WAIC is correlated between models. This quantity is always 0 for the top-ranked model. Finally we have the last column named "warning". A value of 1 indicates that the computation of WAIC may not be reliable, this warning is based on an empirical determined cutoff value and need to be interpreted with caution. For more details you can read this paper. The second convenience function takes the output of compare and produces a summary plot in the style of the one used in the book Statistical Rethinking by Richard McElreath (check also this port of the examples in the book to PyMC3). End of explanation pooled_loo = pm.loo(trace_p, pooled) pooled_loo.LOO hierarchical_loo = pm.loo(trace_h, hierarchical) hierarchical_loo.LOO Explanation: The empty circle represents the values of WAIC and the black error bars associated with them are the values of the standard deviation of WAIC. The value of the lowest WAIC is also indicated with a vertical dashed grey line to ease comparison with other WAIC values. The filled black dots are the in-sample deviance of each model, which for WAIC is 2 pWAIC from the corresponding WAIC value. For all models except the top-ranked one we also get a triangle indicating the value of the difference of WAIC between that model and the top model and a grey errobar indicating the standard error of the differences between the top-ranked WAIC and WAIC for each model. Leave-one-out Cross-validation (LOO) LOO cross-validation is an estimate of the out-of-sample predictive fit. In cross-validation, the data are repeatedly partitioned into training and holdout sets, iteratively fitting the model with the former and evaluating the fit with the holdout data. Vehtari et al. (2016) introduced an efficient computation of LOO from MCMC samples, which are corrected using Pareto-smoothed importance sampling (PSIS) to provide an estimate of point-wise out-of-sample prediction accuracy. End of explanation df_comp_LOO = pm.compare((trace_h, trace_p), (hierarchical, pooled), ic='LOO') df_comp_LOO Explanation: We can also use compare with LOO. 
End of explanation pm.compare_plot(df_comp_LOO); Explanation: The columns return the equivalent values for LOO, notice that in this example we get two warnings. Also notice that the order of the models is not the same as the one for WAIC. We can also plot the results End of explanation
9,282
Given the following text description, write Python code to implement the functionality described below step by step Description: AI Explanations Step1: Restart Kernel Setup Import libraries Import the libraries for this tutorial. Step2: Run the following cell to create your Cloud Storage bucket if it does not already exist. Step3: Explore the Dataset The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets. This section shows how to shuffle, split, and copy the files to your GCS bucket. Load, split, and copy the dataset to your GCS bucket Step4: Run the following commands. You should see a number of .tfrec files in your GCS bucket at both gs Step5: Create ingest functions and visualize some of the examples Define and execute helper functions to plot the images and corresponding labels. Step6: Build training pipeline In this section you will build an application with keras to train an image classification model on Vertex AI Custom Training. Create a directory for the training application and an init .py file (this is required for a Python application but it can be empty). Step7: Create training application in train.py This code contains the training logic. Here you build an application to ingest data from GCS and train an image classification model using mobileNet as a feature extractor, then sending it's output feature vector through a tf.keras.dense layer with 5 units and softmax activation (because there are 5 possible labels). Also, use the fire library which enables arguments to train_and_evaluate to be passed via the command line. Step8: Test training application locally It's always a good idea to test out a training application locally (with only a few training steps) to make sure the code runs as expected. Step9: Package code as source distribution Now that you have validated your model training code, we need to package our code as a source distribution in order to submit a custom training job to Vertex AI. Step10: Store the package in GCS Step11: To submit to the Cloud we use gcloud custom-jobs create and simply specify some additional parameters for the Vertex AI Training Service Step12: NOTE Model training will take 5 minutes or so. You have to wait for training to finish before moving forward. Serving function for image data To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model. To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU). When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model Step13: Get the serving function signature You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. 
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request. You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently. Step14: Upload the model Next, upload your model to a Model resource using Model.upload() method, with the following parameters Step15: NOTE This can take a few minutes to run. Step16: Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters Step17: Prepare the request content You are going to send the flower image as compressed JPG image, instead of the raw uncompressed bytes Step18: Read the JPG image and encode it with base64 to send to the model endpoint. Send the encoded image to the endpoint with endpoint.explain. Then you can parse the response for the prediction and explanation. Full documentation on endpoint.explain can be found here. Step19: Visualize feature attributions from Integrated Gradients. Query the response to get predictions and feature attributions. Use Matplotlib to visualize.
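Before the reference code that follows, here is a hedged sketch of the two pieces this description asks you to fill in later: the resize/rescale steps of the serving-side preprocessing (following the preprocessing steps listed in this notebook and the 192 x 192 x 3 model input used here), and the prediction-with-explanation request. It is a sketch under those assumptions, not the official lab solution.

import tensorflow as tf

def preprocess_jpeg_bytes(bytes_input, target_size=(192, 192)):
    # Decode, convert to float32, resize to the model input shape, then rescale,
    # mirroring the preprocessing steps described above.
    decoded = tf.io.decode_jpeg(bytes_input, channels=3)
    decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=target_size)
    return resized / 255.0

# Once `endpoint`, `serving_input` and `b64str` exist (they are created below),
# the explanation request is a single call:
# response = endpoint.explain(instances=[{serving_input: {"b64": b64str}}])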
Python Code: # Install needed deps !pip install opencv-python Explanation: AI Explanations: Deploying an Explainable Image Model with Vertex AI Overview This lab shows how to train a classification model on image data and deploy it to Vertex AI to serve predictions with explanations (feature attributions). In this lab you will: * Explore the dataset * Build and train a custom image classification model with Vertex AI * Deploy the model to an endpoint * Serve predictions with explanations * Visualize feature attributions from Integrated Gradients End of explanation import base64 import os import random from datetime import datetime import cv2 import numpy as np import tensorflow as tf import tensorflow_hub as hub from google.cloud import aiplatform from matplotlib import pyplot as plt PROJECT = !(gcloud config get-value core/project) PROJECT = PROJECT[0] BUCKET = PROJECT # defaults to PROJECT REGION = "us-central1" # Replace with your REGION TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") BUCKET = PROJECT REGION = "us-central1" GCS_PATTERN = "gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec" DATA_PATH = f"gs://{BUCKET}/flowers/data" OUTDIR = f"gs://{BUCKET}/flowers/model_{TIMESTAMP}" os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["DATA_PATH"] = DATA_PATH os.environ["OUTDIR"] = OUTDIR os.environ["TIMESTAMP"] = TIMESTAMP print(f"Project: {PROJECT}") Explanation: Restart Kernel Setup Import libraries Import the libraries for this tutorial. End of explanation %%bash exists=$(gsutil ls -d | grep -w gs://${BUCKET}/) if [ -n "$exists" ]; then echo -e "Bucket gs://${BUCKET} already exists." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${BUCKET} echo -e "\nHere are your current buckets:" gsutil ls fi Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist. End of explanation TRAINING_DATA_PATH = DATA_PATH + "/training" EVAL_DATA_PATH = DATA_PATH + "/validation" VALIDATION_SPLIT = 0.2 # Split data files between training and validation filenames = tf.io.gfile.glob(GCS_PATTERN) random.shuffle(filenames) split = int(len(filenames) * VALIDATION_SPLIT) training_filenames = filenames[split:] validation_filenames = filenames[:split] # Copy training files to GCS for file in training_filenames: !gsutil -m cp $file $TRAINING_DATA_PATH/ # Copy eval files to GCS for file in validation_filenames: !gsutil -m cp $file $EVAL_DATA_PATH/ Explanation: Explore the Dataset The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets. This section shows how to shuffle, split, and copy the files to your GCS bucket. Load, split, and copy the dataset to your GCS bucket End of explanation !gsutil ls -l $TRAINING_DATA_PATH !gsutil ls -l $EVAL_DATA_PATH Explanation: Run the following commands. 
You should see a number of .tfrec files in your GCS bucket at both gs://{BUCKET}/flowers/data/training and gs://{BUCKET}/flowers/data/validation End of explanation IMAGE_SIZE = [192, 192] BATCH_SIZE = 32 # Do not change, maps to the labels in the data CLASSES = [ "daisy", "dandelion", "roses", "sunflowers", "tulips", ] def read_tfrecord(example): features = { "image": tf.io.FixedLenFeature( [], tf.string ), # tf.string means bytestring "class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar "one_hot_class": tf.io.VarLenFeature(tf.float32), } example = tf.io.parse_single_example(example, features) image = tf.image.decode_jpeg(example["image"], channels=3) image = ( tf.cast(image, tf.float32) / 255.0 ) # convert image to floats in [0, 1] range image = tf.reshape(image, [*IMAGE_SIZE, 3]) one_hot_class = tf.sparse.to_dense(example["one_hot_class"]) one_hot_class = tf.reshape(one_hot_class, [5]) return image, one_hot_class # Load tfrecords into tf.data.Dataset def load_dataset(gcs_pattern): filenames = filenames = tf.io.gfile.glob(gcs_pattern + "/*") ds = tf.data.TFRecordDataset(filenames).map(read_tfrecord) return ds # Converts N examples in dataset to numpy arrays def dataset_to_numpy(dataset, N): numpy_images = [] numpy_labels = [] for images, labels in dataset.take(N): numpy_images.append(images.numpy()) numpy_labels.append(labels.numpy()) return numpy_images, numpy_labels def display_one_image(image, title, subplot): plt.subplot(subplot) plt.axis("off") plt.imshow(image) plt.title(title, fontsize=16) return subplot + 1 def display_9_images_from_dataset(dataset): subplot = 331 plt.figure(figsize=(13, 13)) images, labels = dataset_to_numpy(dataset, 9) for i, image in enumerate(images): title = CLASSES[np.argmax(labels[i], axis=-1)] subplot = display_one_image(image, title, subplot) if i >= 8: break plt.tight_layout() plt.subplots_adjust(wspace=0.1, hspace=0.1) plt.show() # Display 9 examples from the dataset ds = load_dataset(gcs_pattern=TRAINING_DATA_PATH) display_9_images_from_dataset(ds) Explanation: Create ingest functions and visualize some of the examples Define and execute helper functions to plot the images and corresponding labels. End of explanation %%bash mkdir -p flowers/trainer touch flowers/trainer/__init__.py Explanation: Build training pipeline In this section you will build an application with keras to train an image classification model on Vertex AI Custom Training. Create a directory for the training application and an init .py file (this is required for a Python application but it can be empty). 
End of explanation %%writefile flowers/trainer/train.py import datetime import fire import os import tensorflow as tf import tensorflow_hub as hub IMAGE_SIZE = [192, 192] def read_tfrecord(example): features = { "image": tf.io.FixedLenFeature( [], tf.string ), # tf.string means bytestring "class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar "one_hot_class": tf.io.VarLenFeature(tf.float32), } example = tf.io.parse_single_example(example, features) image = tf.image.decode_jpeg(example["image"], channels=3) image = ( tf.cast(image, tf.float32) / 255.0 ) # convert image to floats in [0, 1] range image = tf.reshape( image, [*IMAGE_SIZE, 3] ) one_hot_class = tf.sparse.to_dense(example["one_hot_class"]) one_hot_class = tf.reshape(one_hot_class, [5]) return image, one_hot_class def load_dataset(gcs_pattern, batch_size=32, training=True): filenames = filenames = tf.io.gfile.glob(gcs_pattern) ds = tf.data.TFRecordDataset(filenames).map( read_tfrecord).batch(batch_size) if training: return ds.repeat() else: return ds def build_model(): # MobileNet model for feature extraction mobilenet_v2 = 'https://tfhub.dev/google/imagenet/'\ 'mobilenet_v2_100_192/feature_vector/5' feature_extractor_layer = hub.KerasLayer( mobilenet_v2, input_shape=[*IMAGE_SIZE, 3], trainable=False ) # Instantiate model model = tf.keras.Sequential([ feature_extractor_layer, tf.keras.layers.Dense(5, activation="softmax") ]) model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]) return model def train_and_evaluate(train_data_path, eval_data_path, output_dir, batch_size, num_epochs, train_examples): model = build_model() train_ds = load_dataset(gcs_pattern=train_data_path, batch_size=batch_size) eval_ds = load_dataset(gcs_pattern=eval_data_path, training=False) num_batches = batch_size * num_epochs steps_per_epoch = train_examples // num_batches history = model.fit( train_ds, validation_data=eval_ds, epochs=num_epochs, steps_per_epoch=steps_per_epoch, verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch ) tf.saved_model.save( obj=model, export_dir=output_dir ) # with default serving function print("Exported trained model to {}".format(output_dir)) if __name__ == "__main__": fire.Fire(train_and_evaluate) Explanation: Create training application in train.py This code contains the training logic. Here you build an application to ingest data from GCS and train an image classification model using mobileNet as a feature extractor, then sending it's output feature vector through a tf.keras.dense layer with 5 units and softmax activation (because there are 5 possible labels). Also, use the fire library which enables arguments to train_and_evaluate to be passed via the command line. End of explanation %%bash OUTDIR_LOCAL=local_test_training rm -rf ${OUTDIR_LOCAL} export PYTHONPATH=${PYTHONPATH}:${PWD}/flowers python3 -m trainer.train \ --train_data_path=gs://${BUCKET}/flowers/data/training/*.tfrec \ --eval_data_path=gs://${BUCKET}/flowers/data/validation/*.tfrec \ --output_dir=${OUTDIR_LOCAL} \ --batch_size=1 \ --num_epochs=1 \ --train_examples=10 Explanation: Test training application locally It's always a good idea to test out a training application locally (with only a few training steps) to make sure the code runs as expected. 
End of explanation %%writefile flowers/setup.py from setuptools import find_packages from setuptools import setup setup( name='flowers_trainer', version='0.1', packages=find_packages(), include_package_data=True, install_requires=['fire==0.4.0'], description='Flowers image classifier training application.' ) %%bash cd flowers python ./setup.py sdist --formats=gztar cd .. Explanation: Package code as source distribution Now that you have validated your model training code, we need to package our code as a source distribution in order to submit a custom training job to Vertex AI. End of explanation %%bash gsutil cp flowers/dist/flowers_trainer-0.1.tar.gz gs://${BUCKET}/flowers/ Explanation: Store the package in GCS End of explanation %%bash JOB_NAME=flowers_${TIMESTAMP} PYTHON_PACKAGE_URI=gs://${BUCKET}/flowers/flowers_trainer-0.1.tar.gz PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest" PYTHON_MODULE=trainer.train echo > ./config.yaml \ "workerPoolSpecs: machineSpec: machineType: n1-standard-8 replicaCount: 1 pythonPackageSpec: executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI packageUris: $PYTHON_PACKAGE_URI pythonModule: $PYTHON_MODULE args: - --train_data_path=gs://${BUCKET}/flowers/data/training/*.tfrec - --eval_data_path=gs://${BUCKET}/flowers/data/validation/*.tfrec - --output_dir=$OUTDIR - --num_epochs=15 - --train_examples=15000 - --batch_size=32 " gcloud ai custom-jobs create \ --region=${REGION} \ --display-name=$JOB_NAME \ --config=config.yaml Explanation: To submit to the Cloud we use gcloud custom-jobs create and simply specify some additional parameters for the Vertex AI Training Service: - display-name: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness - region: Cloud region to train in. See here for supported Vertex AI Training Service regions You might have earlier seen gcloud ai custom-jobs create executed with the worker pool spec and pass-through Python arguments specified directly in the command call, here we will use a YAML file, this will make it easier to transition to hyperparameter tuning. Through the args: argument we add in the passed-through arguments for our task.py file. End of explanation local_model = tf.keras.models.load_model(OUTDIR) local_model.summary() CONCRETE_INPUT = "numpy_inputs" def _preprocess(bytes_input): decoded = tf.io.decode_jpeg(bytes_input, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) resized = #TODO: Resize decoded image rescale = #TODO: Rescale image return rescale @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def preprocess_fn(bytes_inputs): decoded_images = tf.map_fn( _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False ) return { CONCRETE_INPUT: decoded_images } # User needs to make sure the key matches model's input @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def serving_fn(bytes_inputs): images = preprocess_fn(bytes_inputs) prob = m_call(**images) return prob # the function that sends data through the model itself and returns # the output probabilities m_call = tf.function(local_model.call).get_concrete_function( [ tf.TensorSpec( shape=[None, 192, 192, 3], dtype=tf.float32, name=CONCRETE_INPUT ) ] ) tf.saved_model.save( local_model, OUTDIR, signatures={ "serving_default": serving_fn, # Required for XAI "xai_preprocess": preprocess_fn, "xai_model": m_call, }, ) Explanation: NOTE Model training will take 5 minutes or so. 
You have to wait for training to finish before moving forward. Serving function for image data To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model. To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU). When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model: - io.decode_jpeg- Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB). - image.convert_image_dtype - Changes integer pixel values to float 32. - image.resize - Resizes the image to match the input shape for the model. - resized / 255.0 - Rescales (normalization) the pixel data between 0 and 1. At this point, the data can be passed to the model (m_call). XAI Signatures When the serving function is saved back with the underlying model (tf.saved_model.save), you specify the input layer of the serving function as the signature serving_default. For XAI image models, you need to save two additional signatures from the serving function: xai_preprocess: The preprocessing function in the serving function. xai_model: The concrete function for calling the model. Load the model into memory. NOTE This directory will not exist if your model has not finished training. Please wait for training to complete before moving forward End of explanation loaded = tf.saved_model.load(OUTDIR) serving_input = list( loaded.signatures["serving_default"].structured_input_signature[1].keys() )[0] print("Serving function input:", serving_input) serving_output = list( loaded.signatures["serving_default"].structured_outputs.keys() )[0] print("Serving function output:", serving_output) input_name = local_model.input.name print("Model input name:", input_name) output_name = local_model.output.name print("Model output name:", output_name) parameters = aiplatform.explain.ExplanationParameters( {"integrated_gradients_attribution": {"step_count": 50}} ) Explanation: Get the serving function signature You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request. You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently. 
End of explanation MODEL_NAME = "flower_classifier_v1" INPUT_METADATA = {"input_tensor_name": CONCRETE_INPUT, "modality": "image"} OUTPUT_METADATA = {"output_tensor_name": serving_output} input_metadata = aiplatform.explain.ExplanationMetadata.InputMetadata( INPUT_METADATA ) output_metadata = aiplatform.explain.ExplanationMetadata.OutputMetadata( OUTPUT_METADATA ) metadata = aiplatform.explain.ExplanationMetadata( inputs={"image": input_metadata}, outputs={"class": output_metadata} ) Explanation: Upload the model Next, upload your model to a Model resource using Model.upload() method, with the following parameters: display_name: The human readable name for the Model resource. artifact: The Cloud Storage location of the trained model artifacts. serving_container_image_uri: The serving container image. sync: Whether to execute the upload asynchronously or synchronously. explanation_parameters: Parameters to configure explaining for Model's predictions. explanation_metadata: Metadata describing the Model's input and output for explanation. If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method. End of explanation aiplatform.init(project=PROJECT, staging_bucket=BUCKET) model = aiplatform.Model.upload( display_name=MODEL_NAME, artifact_uri=OUTDIR, serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest", explanation_parameters=parameters, explanation_metadata=metadata, sync=False, ) model.wait() Explanation: NOTE This can take a few minutes to run. End of explanation endpoint = model.deploy( deployed_model_display_name=MODEL_NAME, traffic_split={"0": 100}, machine_type="n1-standard-4", min_replica_count=1, max_replica_count=1, ) Explanation: Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters: deployed_model_display_name: A human readable name for the deployed model. traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic. If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100. machine_type: The type of machine to use for training. max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned. NOTE This can take a few minutes. End of explanation eval_ds = load_dataset(EVAL_DATA_PATH) x_test, y_test = dataset_to_numpy(eval_ds, 5) # Single image from eval dataset test_image = x_test[0] * 255.0 # Write image out as jpg cv2.imwrite("tmp.jpg", test_image) Explanation: Prepare the request content You are going to send the flower image as compressed JPG image, instead of the raw uncompressed bytes: cv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image. Denormalize the image data from [0,1) range back to [0,255). We need to do this because load_dataset scales pixel values between [0,1) however JPG files expect pixel values in the range [0,255). Convert the 32-bit floating point values to 8-bit unsigned integers. tf.io.read_file: Read the compressed JPG images back into memory as raw bytes. 
base64.b64encode: Encode the raw bytes into a base 64 encoded string. End of explanation # Read image and base64 encode bytes = tf.io.read_file("tmp.jpg") b64str = base64.b64encode(bytes.numpy()).decode("utf-8") instances_list = [{serving_input: {"b64": b64str}}] response = #TODO: Get prediction with explanation from endpoint print(response) Explanation: Read the JPG image and encode it with base64 to send to the model endpoint. Send the encoded image to the endpoint with endpoint.explain. Then you can parse the response for the prediction and explanation. Full documentation on endpoint.explain can be found here. End of explanation import io from io import BytesIO import matplotlib.image as mpimg import matplotlib.pyplot as plt CLASSES = [ "daisy", "dandelion", "roses", "sunflowers", "tulips", ] # Parse prediction for prediction in response.predictions: label_index = np.argmax(prediction) class_name = CLASSES[label_index] confidence_score = prediction[label_index] print( "Predicted class: " + class_name + "\n" + "Confidence score: " + str(confidence_score) ) image = base64.b64decode(b64str) image = BytesIO(image) img = mpimg.imread(image, format="JPG") # Parse explanation for explanation in response.explanations: attributions = dict(explanation.attributions[0].feature_attributions) xai_label_index = explanation.attributions[0].output_index[0] xai_class_name = CLASSES[xai_label_index] xai_b64str = attributions["image"]["b64_jpeg"] xai_image = base64.b64decode(xai_b64str) xai_image = io.BytesIO(xai_image) xai_img = mpimg.imread(xai_image, format="JPG") # Plot image, feature attribution mask, and overlayed image fig = plt.figure(figsize=(13, 18)) fig.add_subplot(1, 3, 1) plt.title("Input Image") plt.imshow(img) fig.add_subplot(1, 3, 2) plt.title("Feature Attribution Mask") plt.imshow(xai_img) fig.add_subplot(1, 3, 3) plt.title("Overlayed Attribution Mask") plt.imshow(img) plt.imshow(xai_img, alpha=0.6) plt.show() Explanation: Visualize feature attributions from Integrated Gradients. Query the response to get predictions and feature attributions. Use Matplotlib to visualize. End of explanation
9,283
Given the following text description, write Python code to implement the functionality described below step by step Description: First steps in data science with Python Installation For new comers, I recommend using the Anacaonda distribution. You can download it from here. If you are familiar with Python, create a conda environment and install the needed libraries (using the environment.yml file provided in this repository) Step1: Here, we have created a fictional dataset that contains earnings for years 2016 and 2017 Step2: You might ask, what is the problem with this dataset? <br> There are two main ones Step3: That's much better! <br> In summary, a tidy dataset has the following properties Step4: Loading data Kaggle offers many free datasets with lots of metadata, descriptions, kernels, discussions and so on. <br> Today, we will be working with the San Francisco Salaries dataset. You can download it from here (you need a Kaggle account) or get it from the workshop repository. The dataset we will be working with is a CSV file. Fortunately for us, Pandas has a handy method .read_csv. Let's try it out! Step5: Data exploration Step6: Some analysis What are the different job titles? How many? Step7: Highest and lowest salaries per year? Which jobs?
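As a small complementary illustration (an addition, not part of the original workshop), the inverse of the melt used in the code below is a pivot, which takes the tidy frame back to the wide layout; this round trip is a quick way to check that no information was lost:

# tidy_df has columns company / year / earnings after pd.melt (defined below).
wide_again = tidy_df.pivot(index='company', columns='year', values='earnings')
wide_again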
Python Code: import pandas as pd messy_df = pd.DataFrame({'2016': [1000, 2000, 3000], '2017': [1200, 1300, 4000], 'company': ['slack', 'twitter', 'twitch'] }) Explanation: First steps in data science with Python Installation For new comers, I recommend using the Anacaonda distribution. You can download it from here. If you are familiar with Python, create a conda environment and install the needed libraries (using the environment.yml file provided in this repository): conda env create -f environment.yml. Finally, activate the environement using: conda activate workshop The Python data science ecosystem Jupyter notebook Jupyter notebook is the code environment we will be using today. <br> Previously known as ipython notebook, it is an interactive environment that makes prototyping easier for data scientists. Pandas Pandas is the primary toolbox used for collecting and cleaning datasets from various data sources. <br> Most of the concepts that we are exploring today can be found in the following great cheatsheet Matplotlib Matplotlib is the standard and de facto Python library for creating visualizations. Numerical and statistical (numpy, scipy, statsmodels) Alongside the above tools, Python offers a set of numerical and statistical packages to perform data analysis. The most famous ones are: numpy: Base N-dimensional array package scipy: Fundamental library for scientific computing statsmodels: Statistical computations and models for Python Keep in mind that most of the capabilites of the above package are integrated within the Pandas library. Tidy data This is a very important concept when doing data science. To demonstrate how important it is, let's start by creating a messy one and tidying it. End of explanation messy_df Explanation: Here, we have created a fictional dataset that contains earnings for years 2016 and 2017 End of explanation tidy_df = pd.melt(messy_df, id_vars=['company'], value_name='earnings', var_name='year') tidy_df Explanation: You might ask, what is the problem with this dataset? <br> There are two main ones: The coloumns 2016 and 2017 contain the same type of variable (earnings) The columns 2016 and 2017 contain an information about the year Now that we have a "messy" dataset, let's clean it. End of explanation import pandas as pd import missingno as msno Explanation: That's much better! <br> In summary, a tidy dataset has the following properties: Each column represents only one variable Each row represents an observation Example Import pacakges End of explanation sf_slaries_df = pd.read_csv('data/Salaries.csv') Explanation: Loading data Kaggle offers many free datasets with lots of metadata, descriptions, kernels, discussions and so on. <br> Today, we will be working with the San Francisco Salaries dataset. You can download it from here (you need a Kaggle account) or get it from the workshop repository. The dataset we will be working with is a CSV file. Fortunately for us, Pandas has a handy method .read_csv. Let's try it out! End of explanation sf_slaries_df.head(3).transpose() sf_slaries_df.sample(5).transpose() sf_slaries_df.columns sf_slaries_df.dtypes sf_slaries_df.describe() msno.matrix(sf_slaries_df) Explanation: Data exploration End of explanation sf_slaries_df.JobTitle.value_counts() sf_slaries_df.JobTitle.nunique() Explanation: Some analysis What are the different job titles? How many? 
End of explanation sf_slaries_df.groupby('Year').TotalPay.agg(['min', 'max']) lowest_idx = sf_slaries_df.groupby('Year').apply(lambda df: df.TotalPay.argmin()) sf_slaries_df.loc[lowest_idx, ['Year', 'JobTitle']] highest_idx = sf_slaries_df.groupby('Year').apply(lambda df: df.TotalPay.argmax()) sf_slaries_df.loc[highest_idx, ['Year', 'JobTitle']] Explanation: Highest and lowest salaries per year? Which jobs? End of explanation
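One caveat on the cell above (a suggested variant, not part of the original workshop): on recent pandas versions Series.argmin/argmax return integer positions rather than index labels, which breaks the .loc lookups; idxmin/idxmax always return labels and answer the same question:

# Highest and lowest TotalPay per year, and the corresponding jobs.
lowest_idx = sf_slaries_df.groupby('Year').TotalPay.idxmin()
highest_idx = sf_slaries_df.groupby('Year').TotalPay.idxmax()
sf_slaries_df.loc[lowest_idx, ['Year', 'JobTitle', 'TotalPay']]
sf_slaries_df.loc[highest_idx, ['Year', 'JobTitle', 'TotalPay']]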
9,284
Given the following text description, write Python code to implement the functionality described below step by step Description: Validated department boundaries vs government units with highest incident share comparison The backing theory for this notebook is proving that we will be able to use the government unit with the greatest number of fire incidents are the department's boundary. We will compare the department boundaries vs the chosen government unit for departments have have a validated boundary. Validated boundaries are those that have been deemed accurate by department administrators. Step3: Processing Step4: Results
Python Code: import psycopg2 from psycopg2.extras import RealDictCursor import pandas as pd # import geopandas as gpd # from shapely import wkb # from shapely.geometry import mapping as to_geojson # import folium pd.options.display.max_columns = None pd.options.display.max_rows = None #pd.set_option('display.float_format', lambda x: '%.3f' % x) %matplotlib inline conn = psycopg2.connect('service=firecares') nfirs = psycopg2.connect('service=nfirs') Explanation: Validated department boundaries vs government units with highest incident share comparison The backing theory for this notebook is proving that we will be able to use the government unit with the greatest number of fire incidents are the department's boundary. We will compare the department boundaries vs the chosen government unit for departments have have a validated boundary. Validated boundaries are those that have been deemed accurate by department administrators. End of explanation q = select id, fdid, state, name from firestation_firedepartment where boundary_verified = true; with nfirs.cursor(cursor_factory=RealDictCursor) as c: c.execute(q) fds = c.fetchall() q = with fires as (select * from joint_buildingfires inner join joint_incidentaddress using (fdid, inc_no, inc_date, state, exp_no) where fdid = %(fdid)s and state = %(state)s ), govt_units as ( select gu.name, gu.source, gu.id, gu.geom, fd.id as fc_id, fd.geom as fd_geom, ST_Distance(addr.geom, ST_Centroid(gu.geom)) as distance_to_headquarters from firestation_firedepartment fd inner join firecares_core_address addr on addr.id = fd.headquarters_address_id join usgs_governmentunits gu on ST_Intersects(ST_Buffer(addr.geom, 0.05), gu.geom) where fd.fdid = %(fdid)s and fd.state = %(state)s and source != 'stateorterritoryhigh' ) select gu.fc_id, count(fires), ST_Area(ST_SymDifference(gu.fd_geom, gu.geom)) / ST_Area(gu.fd_geom) as percent_difference_to_verified_boundary, ST_Area(gu.geom), gu.distance_to_headquarters, gu.name, gu.id, gu.source from fires inner join govt_units gu on ST_Intersects(fires.geom, gu.geom) group by gu.name, gu.id, gu.geom, gu.source, gu.distance_to_headquarters, gu.fd_geom, gu.fc_id order by (count(fires) / (select count(1) from fires)::float) desc; for fd in fds[62:]: with nfirs.cursor(cursor_factory=RealDictCursor) as c: print 'Analyzing: {} (id: {} fdid: {} {})'.format(fd['name'], fd['id'], fd['fdid'], fd['state']) c.execute(q, dict(fdid=fd['fdid'], state=fd['state'])) items = c.fetchall() df = pd.DataFrame(items) df.to_csv('./boundary-incident-share-analysis-{}.csv'.format(fd['id'])) Explanation: Processing End of explanation from glob import glob df = None for f in glob("boundary-incident-share-analysis*.csv"): if df is not None: df = df.append(pd.read_csv(f)) else: df = pd.read_csv(f) df.rename(columns={'Unnamed: 0': 'rank'}, inplace=True) selected_government_units = df[df['rank'] == 0].set_index('fc_id') total_validated_department_count = len(selected_government_units) perfect_fits = len(selected_government_units[selected_government_units['percent_difference_to_verified_boundary'] == 0]) print 'Perfect fits: {}/{} ({:.2%})'.format(perfect_fits, total_validated_department_count, float(perfect_fits) / total_validated_department_count) print 'Machine-selected government unit area difference mean: {:.2%}'.format(df[df['rank'] == 0].percent_difference_to_verified_boundary.mean()) selected_government_units['percent_difference_to_verified_boundary'].hist(bins=50) selected_government_units df.set_index('fc_id') 
df.to_csv('./validated-boundary-vs-government-unit-incident-share.csv') pd.read_csv('./validated-boundary-vs-government-unit-incident-share.csv') Explanation: Results End of explanation
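A possible follow-on summary (my addition, reusing the variables computed above): besides exact matches, it can help to report how many machine-selected government units fall within 10% of the verified boundary.

close_fits = (selected_government_units['percent_difference_to_verified_boundary'] < 0.10).sum()
print('Within 10%: {}/{} ({:.2%})'.format(
    close_fits, total_validated_department_count,
    float(close_fits) / total_validated_department_count))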
9,285
Given the following text description, write Python code to implement the functionality described below step by step Description: Practical use of Jupyter notebook Second motivation Step1: Expected results Step2: Techniques used Regular expressions Pythonic / Functional programming Step3: Data wrangling in action Step4: The extracted tree still contains much noise Step5: Use of lambda functions and piping Step6: Further clean-up
Python Code: Image("img/init.png") Explanation: Practical use of Jupyter notebook Second motivation : learning Python by web scraping Scraping data from the WHO End of explanation Image("img/target_result.png") Explanation: Expected results End of explanation # FOR WEB SCRAPING from lxml import html import requests # FOR FUNCTIONAL PROGRAMMING import cytoolz # pipe # FOR DATA WRANGLING import pandas as pd # use of R like dataframes import re #re for regular expressions # TO INSERT IMAGES from IPython.display import Image Explanation: Techniques used Regular expressions Pythonic / Functional programming : use lists (iterable) : avoid looping on indices whenever possible list / for comprehensions lambda expressions pipe and map essentially : (based on cytoolz) Python web scraping : lxml (Python library) XPath (web page content) End of explanation ### Target URL outbreakNewsURL = "http://www.who.int/csr/don/archive/disease/zika-virus-infection/en/" page = requests.get(outbreakNewsURL) tree = html.fromstring(page.content) newsXPath = '//li' zikaNews = tree.xpath(newsXPath) ### Store the relevant news in a list zikaNews_dirty = [p.text_content() for p in zikaNews] # Printing the first 20 elements zikaNews_dirty[1:20] # omitting first element Explanation: Data wrangling in action End of explanation Image("img/flatten_tree_data.png") # Extract only the items containing the pattern "Zika virus infection " #sample= '\n22 April 2016\n\t\t\tZika virus infection โ€“ Papua New Guinea - USA\n' keywdEN ="Zika virus infection " zikaNews_content = [s for s in zikaNews_dirty if re.search(keywdEN, s)] zikaNews_content[0:10] # first 11 elements Explanation: The extracted tree still contains much noise End of explanation #### Use of lambdas (avoid creating verbose Python functions with def f():{}) substitudeUnicodeDash = lambda s : re.sub(u'โ€“',"@", s) substituteNonUnicode = lambda s : re.sub(r"\s"," ",s) removeSpace = lambda s: s.strip() # Use of pipe to chain lambda functions within a list comprehension ### Should be familiar to those using R dplyr %>% zikaNews_dirty = [cytoolz.pipe(s, removeSpace, substituteNonUnicode) for s in zikaNews_content] # List comprehension zikaNews_dirty = [s.split("Zika virus infection") for s in zikaNews_dirty ] zikaNews_dirty[0:10] Explanation: Use of lambda functions and piping End of explanation # Structure data into a Pandas dataframe zika = pd.DataFrame(zikaNews_dirty, columns = ["Date","Locations"]) zika.head(n=20) ### Removing the first dash sign / for zika["Locations"] # Step 1 : transform in a list of strings, via str.split() # Step 2 : copy the list, except the first element list[1:] # Step 3 : reconstitute the entire string using ' '.join(list[1:]) # Step 1 : transform in a list of strings, via str.split() zika["Split_Locations"] = pd.Series(zika["Locations"].iloc[i].split() for i in range(len(zika))) # Step 2 : copy the list, except the first element list[1:] zika["Split_Locations"] = pd.Series([s[1:] for s in zika["Split_Locations"]]) # Step 3 : reconstitute the entire string using ' '.join(list[1:]) zika["Split_Locations"] = pd.Series([" ".join(s) for s in zika["Split_Locations"]]) zika["Split_Locations"] = pd.Series([s.split("-") for s in zika["Split_Locations"]]) zika["Split_Date"] = pd.Series([s.split() for s in zika["Date"]]) # Show the first 10 rows using HEAD zika.head(n=10) ### Extract Day / Month / Year in the Split_Date column, 1 row is of the form [21, January, 2016] zika["Day"]= pd.Series(zika["Split_Date"].iloc[i][0] for i in range(len(zika))) 
zika["Month"]= pd.Series(zika["Split_Date"].iloc[i][1] for i in range(len(zika))) zika["Year"]= pd.Series(zika["Split_Date"].iloc[i][2] for i in range(len(zika))) # Show the first 10 rows using HEAD zika.head(n=10) # Extract Country and Territory zika["Country"] = pd.Series(zika["Split_Locations"].iloc[i][0] for i in range(len(zika))) zika["Territory"] = pd.Series(zika["Split_Locations"].iloc[i][len(zika["Split_Locations"].iloc[i])-1] for i in range(len(zika))) # Show the first 20 rows using HEAD zika[['Split_Locations','Country','Territory']].head(20) zika["Territory"] =pd.Series(zika["Territory"][i] if zika["Territory"][i] != zika["Country"][i] else " " for i in range(len(zika)) ) # Show the first 20 rows using HEAD zika[['Split_Locations','Country','Territory']].head(20) Explanation: Further clean-up : use of the Pandas library Use extensiveley the Pandas library End of explanation
9,286
Given the following text description, write Python code to implement the functionality described below step by step Description: Near real-time HF-Radar currents in the proximity of the Deepwater Horizon site The explosion on the Deepwater Horizon (DWH) tragically killed 11 people, and resulted in one of the largest marine oil spills in history. One of the first questions when there is such a tragedy is Step1: The interactive interface is handy for exploration but we usually need to download "mechanically" in order to use them in our analysis, plots, or for downloading time-series. One way to achieve that is to use an OPeNDAP client, here Python's xarray, and explore the endpoint directly. (We'll use the same 6 km resolution from the IFrame above.) Step2: How about extracting a week time-series from the dataset averaged around the area of interest? Step3: With xarray we can average hourly (resample) the whole dataset with one method call. Step4: Now all we have to do is mask the missing data with NaNs and average over the area. Step5: To close this post let's us reproduce the HF radar DAC image from above but using yesterday's data. Step6: Now that we singled out the date and and time we want the data, we trigger the download by accessing the data with xarray's .data property. Step7: The cell below computes the speed from the velocity. We can use the speed computation to color code the vectors. Note that we re-create the vector velocity preserving the direction but using intensity of 1. (The same visualization technique used in the HF radar DAC.) Step8: Now we can create a matplotlib figure displaying the data.
Python Code: from IPython.display import HTML url = ( "https://cordc.ucsd.edu/projects/mapping/maps/fullpage.php?" "ll=29.061888,-87.373643&" "zm=7&" "mt=&" "rng=0.00,50.00&" "us=1&" "cs=4&" "res=6km_h&" "ol=3&" "cp=1" ) iframe = ( '<iframe src="{src}" width="750" height="450" style="border:none;"></iframe>'.format ) HTML(iframe(src=url)) Explanation: Near real-time HF-Radar currents in the proximity of the Deepwater Horizon site The explosion on the Deepwater Horizon (DWH) tragically killed 11 people, and resulted in one of the largest marine oil spills in history. One of the first questions when there is such a tragedy is: where will the oil go? In order the help answer that question one can use Near real time currents from the HF-Radar sites near the incident. First let's start with the HF-Radar DAC, where one can browser the all available data interactively. Below we show an IFrame with the area near DWH for the 27 of July of 2017. In this notebook we will demonstrate how to obtain such data programmatically. (For more information on the DWH see http://response.restoration.noaa.gov/oil-and-chemical-spills/significant-incidents/deepwater-horizon-oil-spill.) End of explanation import xarray as xr url = ( "http://hfrnet-tds.ucsd.edu/thredds/dodsC/HFR/USEGC/6km/hourly/RTV/" "HFRADAR_US_East_and_Gulf_Coast_6km_Resolution_Hourly_RTV_best.ncd" ) ds = xr.open_dataset(url) ds Explanation: The interactive interface is handy for exploration but we usually need to download "mechanically" in order to use them in our analysis, plots, or for downloading time-series. One way to achieve that is to use an OPeNDAP client, here Python's xarray, and explore the endpoint directly. (We'll use the same 6 km resolution from the IFrame above.) End of explanation dx = dy = 2.25 # Area around the point of interest. center = -87.373643, 29.061888 # Point of interest. dsw = ds.sel(time=slice("2017-07-20", "2017-07-27")) dsw = dsw.sel( lon=(dsw.lon < center[0] + dx) & (dsw.lon > center[0] - dx), lat=(dsw.lat < center[1] + dy) & (dsw.lat > center[1] - dy), ) Explanation: How about extracting a week time-series from the dataset averaged around the area of interest? End of explanation resampled = dsw.resample(indexer={"time": "1H"}) avg = resampled.mean(dim="time") Explanation: With xarray we can average hourly (resample) the whole dataset with one method call. End of explanation import numpy.ma as ma v = avg["v"].data u = avg["u"].data time = avg["time"].to_index().to_pydatetime() u = ma.masked_invalid(u) v = ma.masked_invalid(v) i, j, k = u.shape u = u.reshape(i, j * k).mean(axis=1) v = v.reshape(i, j * k).mean(axis=1) %matplotlib inline import matplotlib.pyplot as plt from oceans.plotting import stick_plot fig, ax = plt.subplots(figsize=(11, 2.75)) q = stick_plot(time, u, v, ax=ax) ref = 0.5 qk = plt.quiverkey( q, 0.1, 0.85, ref, "{} {}".format(ref, ds["u"].units), labelpos="N", coordinates="axes", ) _ = plt.xticks(rotation=70) Explanation: Now all we have to do is mask the missing data with NaNs and average over the area. End of explanation from datetime import date, timedelta yesterday = date.today() - timedelta(days=1) dsy = ds.sel(time=yesterday) Explanation: To close this post let's us reproduce the HF radar DAC image from above but using yesterday's data. 
End of explanation u = dsy["u"].data v = dsy["v"].data lon = dsy.coords["lon"].data lat = dsy.coords["lat"].data time = dsy.coords["time"].data Explanation: Now that we singled out the date and and time we want the data, we trigger the download by accessing the data with xarray's .data property. End of explanation import numpy as np from oceans.ocfis import spdir2uv, uv2spdir angle, speed = uv2spdir(u, v) us, vs = spdir2uv(np.ones_like(speed), angle, deg=True) Explanation: The cell below computes the speed from the velocity. We can use the speed computation to color code the vectors. Note that we re-create the vector velocity preserving the direction but using intensity of 1. (The same visualization technique used in the HF radar DAC.) End of explanation import cartopy.crs as ccrs from cartopy import feature from cartopy.mpl.gridliner import LATITUDE_FORMATTER, LONGITUDE_FORMATTER LAND = feature.NaturalEarthFeature( "physical", "land", "10m", edgecolor="face", facecolor="lightgray" ) sub = 2 bbox = lon.min(), lon.max(), lat.min(), lat.max() fig, ax = plt.subplots(figsize=(9, 9), subplot_kw=dict(projection=ccrs.PlateCarree())) ax.set_extent([center[0] - dx - dx, center[0] + dx, center[1] - dy, center[1] + dy]) vmin, vmax = np.nanmin(speed[::sub, ::sub]), np.nanmax(speed[::sub, ::sub]) speed_clipped = np.clip(speed[::sub, ::sub], 0, 0.65) ax.quiver( lon[::sub], lat[::sub], us[::sub, ::sub], vs[::sub, ::sub], speed_clipped, scale=30, ) # Deepwater Horizon site. ax.plot(-88.365997, 28.736628, marker="o", color="crimson") gl = ax.gridlines(draw_labels=True) gl.xlabels_top = gl.ylabels_right = False gl.xformatter = LONGITUDE_FORMATTER gl.yformatter = LATITUDE_FORMATTER feature = ax.add_feature(LAND, zorder=0, edgecolor="black") Explanation: Now we can create a matplotlib figure displaying the data. End of explanation
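As a hedged alternative to the manual NaN masking and reshaping used for the weekly time series earlier in this notebook, the same area average can be computed directly with xarray reductions, which skip NaNs by default:

# Area-averaged hourly series over the box around the Deepwater Horizon site.
u_series = avg["u"].mean(dim=["lat", "lon"], skipna=True)
v_series = avg["v"].mean(dim=["lat", "lon"], skipna=True)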
9,287
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Deep & Cross Network (DCN) <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Toy Example To illustrate the benefits of DCN, let's work through a simple example. Suppose we have a dataset where we're trying to model the likelihood of a customer clicking on a blender Ad, with its features and label described as follows. | Features / Label | Description | Value Type / Range | | ------------- |-------------| -----| | $x_1$ = country | the country this customer lives in | Int in [0, 199] | | $x_2$ = bananas | # bananas the customer has purchased |Int in [0, 23] | | $x_3$ = cookbooks | # cooking books the customer has purchased |Int in [0, 5] | | $y$ | the likelihood of clicking on a blender Ad | -- | Then, we let the data follow the following underlying distribution Step3: Let's generate the data that follows the distribution, and split the data into 90% for training and 10% for testing. Step4: Model construction We're going to try out both cross network and deep network to illustrate the advantage a cross network can bring to recommenders. As the data we just created only contains 2nd-order feature interactions, it would be sufficient to illustrate with a single-layered cross network. If we wanted to model higher-order feature interactions, we could stack multiple cross layers and use a multi-layered cross network. The two models we will be building are Step5: Then, we specify the cross network (with 1 cross layer of size 3) and the ReLU-based DNN (with layer sizes [512, 256, 128]) Step6: Model training Now that we have the data and models ready, we are going to train the models. We first shuffle and batch the data to prepare for model training. Step7: Then, we define the number of epochs as well as the learning rate. Step8: Alright, everything is ready now and let's compile and train the models. You could set verbose=True if you want to see how the model progresses. Step9: Model evaluation We verify the model performance on the evaluation dataset and report the Root Mean Squared Error (RMSE, the lower the better). Step10: We see that the cross network achieved magnitudes lower RMSE than a ReLU-based DNN, with magnitudes fewer parameters. This has suggested the efficieny of a cross network in learning feaure crosses. Model understanding We already know what feature crosses are important in our data, it would be fun to check whether our model has indeed learned the important feature cross. This can be done by visualizing the learned weight matrix in DCN. The weight $W_{ij}$ represents the learned importance of interaction between feature $x_i$ and $x_j$. Step11: Darker colours represent stronger learned interactions - in this case, it's clear that the model learned that purchasing babanas and cookbooks together is important. If you are interested in trying out more complicated synthetic data, feel free to check out this paper. Movielens 1M example We now examine the effectiveness of DCN on a real-world dataset Step12: Next, we randomly split the data into 80% for training and 20% for testing. Step13: Then, we create vocabulary for each feature. Step14: Model construction The model architecture we will be building starts with an embedding layer, which is fed into a cross network followed by a deep network. The embedding dimension is set to 32 for all the features. 
You could also use different embedding sizes for different features. Step15: Model training We shuffle, batch and cache the training and test data. Step16: Let's define a function that runs a model multiple times and returns the model's RMSE mean and standard deviation out of multiple runs. Step17: We set some hyper-parameters for the models. Note that these hyper-parameters are set globally for all the models for demonstration purpose. If you want to obtain the best performance for each model, or conduct a fair comparison among models, then we'd suggest you to fine-tune the hyper-parameters. Remember that the model architecture and optimization schemes are intertwined. Step18: DCN (stacked). We first train a DCN model with a stacked structure, that is, the inputs are fed to a cross network followed by a deep network. <div> <center> <img src="http Step19: Low-rank DCN. To reduce the training and serving cost, we leverage low-rank techniques to approximate the DCN weight matrices. The rank is passed in through argument projection_dim; a smaller projection_dim results in a lower cost. Note that projection_dim needs to be smaller than (input size)/2 to reduce the cost. In practice, we've observed using low-rank DCN with rank (input size)/4 consistently preserved the accuracy of a full-rank DCN. <div> <center> <img src="http Step20: DNN. We train a same-sized DNN model as a reference. Step21: We evaluate the model on test data and report the mean and standard deviation out of 5 runs. Step22: We see that DCN achieved better performance than a same-sized DNN with ReLU layers. Moreover, the low-rank DCN was able to reduce parameters while maintaining the accuracy. More on DCN. Besides what've been demonstrated above, there are more creative yet practically useful ways to utilize DCN [1]. DCN with a parallel structure. The inputs are fed in parallel to a cross network and a deep network. Concatenating cross layers. The inputs are fed in parallel to multiple cross layers to capture complementary feature crosses. <div class="fig figcenter fighighlight"> <center> <img src="http
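To make the cross-layer figure referenced above concrete: up to initialization and low-rank details, the cross layer provided by tfrs.layers.dcn.Cross and used in this tutorial computes, with $x_0$ the base (embedding) input, $x_l$ the current layer input, and $\odot$ an element-wise product,

$$x_{l+1} = x_0 \odot (W_l x_l + b_l) + x_l,$$

so stacking $l$ such layers lets the network represent explicit feature crosses up to polynomial degree $l+1$.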
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation !pip install -q tensorflow-recommenders !pip install -q --upgrade tensorflow-datasets import pprint %matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs Explanation: Deep & Cross Network (DCN) <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/dcn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/dcn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/dcn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/dcn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This tutorial demonstrates how to use Deep & Cross Network (DCN) to effectively learn feature crosses. Background What are feature crosses and why are they important? Imagine that we are building a recommender system to sell a blender to customers. Then, a customer's past purchase history such as purchased_bananas and purchased_cooking_books, or geographic features, are single features. If one has purchased both bananas and cooking books, then this customer will more likely click on the recommended blender. The combination of purchased_bananas and purchased_cooking_books is referred to as a feature cross, which provides additional interaction information beyond the individual features. <div> <center> <img src="http://drive.google.com/uc?export=view&id=1e8pYZHM1ZSwqBLYVkKDoGg0_2t2UPc2y" width="600"/> </center> </div> What are the challenges in learning feature crosses? In Web-scale applications, data are mostly categorical, leading to large and sparse feature space. Identifying effective feature crosses in this setting often requires manual feature engineering or exhaustive search. Traditional feed-forward multilayer perceptron (MLP) models are universal function approximators; however, they cannot efficiently approximate even 2nd or 3rd-order feature crosses [1, 2]. What is Deep & Cross Network (DCN)? DCN was designed to learn explicit and bounded-degree cross features more effectively. It starts with an input layer (typically an embedding layer), followed by a cross network containing multiple cross layers that models explicit feature interactions, and then combines with a deep network that models implicit feature interactions. Cross Network. 
This is the core of DCN. It explicitly applies feature crossing at each layer, and the highest polynomial degree increases with layer depth. The following figure shows the $(i+1)$-th cross layer. <div class="fig figcenter fighighlight"> <center> <img src="http://drive.google.com/uc?export=view&id=1QvIDptMxixFNp6P4bBqMN4AYAhAIAYQZ" width="50%" style="display:block"> </center> </div> Deep Network. It is a traditional feedforward multilayer perceptron (MLP). The deep network and cross network are then combined to form DCN [1]. Commonly, we could stack a deep network on top of the cross network (stacked structure); we could also place them in parallel (parallel structure). <div class="fig figcenter fighighlight"> <center> <img src="http://drive.google.com/uc?export=view&id=1WtDUCV6b-eetUnWVCAmcPh8mJFut5EUd" hspace="40" width="30%" style="margin: 0px 100px 0px 0px;"> <img src="http://drive.google.com/uc?export=view&id=1xo_twKb847hasfss7JxF0UtFX_rEb4nt" width="20%"> </center> </div> In the following, we will first show the advantage of DCN with a toy example, and then we will walk you through some common ways to utilize DCN using the MovieLen-1M dataset. Let's first install and import the necessary packages for this colab. End of explanation def get_mixer_data(data_size=100_000, random_seed=42): # We need to fix the random seed # to make colab runs repeatable. rng = np.random.RandomState(random_seed) country = rng.randint(200, size=[data_size, 1]) / 200. bananas = rng.randint(24, size=[data_size, 1]) / 24. cookbooks = rng.randint(6, size=[data_size, 1]) / 6. x = np.concatenate([country, bananas, cookbooks], axis=1) # # Create 1st-order terms. y = 0.1 * country + 0.4 * bananas + 0.7 * cookbooks # Create 2nd-order cross terms. y += 0.1 * country * bananas + 3.1 * bananas * cookbooks + ( 0.1 * cookbooks * cookbooks) return x, y Explanation: Toy Example To illustrate the benefits of DCN, let's work through a simple example. Suppose we have a dataset where we're trying to model the likelihood of a customer clicking on a blender Ad, with its features and label described as follows. | Features / Label | Description | Value Type / Range | | ------------- |-------------| -----| | $x_1$ = country | the country this customer lives in | Int in [0, 199] | | $x_2$ = bananas | # bananas the customer has purchased |Int in [0, 23] | | $x_3$ = cookbooks | # cooking books the customer has purchased |Int in [0, 5] | | $y$ | the likelihood of clicking on a blender Ad | -- | Then, we let the data follow the following underlying distribution: $$y = f(x_1, x_2, x_3) = 0.1x_1 + 0.4x_2+0.7x_3 + 0.1x_1x_2+3.1x_2x_3+0.1x_3^2$$ where the likelihood $y$ depends linearly both on features $x_i$'s, but also on multiplicative interactions between the $x_i$'s. In our case, we would say that the likelihood of purchasing a blender ($y$) depends not just on buying bananas ($x_2$) or cookbooks ($x_3$), but also on buying bananas and cookbooks together ($x_2x_3$). We can generate the data for this as follows: Synthetic data generation We first define $f(x_1, x_2, x_3)$ as described above. End of explanation x, y = get_mixer_data() num_train = 90000 train_x = x[:num_train] train_y = y[:num_train] eval_x = x[num_train:] eval_y = y[num_train:] Explanation: Let's generate the data that follows the distribution, and split the data into 90% for training and 10% for testing. 
End of explanation class Model(tfrs.Model): def __init__(self, model): super().__init__() self._model = model self._logit_layer = tf.keras.layers.Dense(1) self.task = tfrs.tasks.Ranking( loss=tf.keras.losses.MeanSquaredError(), metrics=[ tf.keras.metrics.RootMeanSquaredError("RMSE") ] ) def call(self, x): x = self._model(x) return self._logit_layer(x) def compute_loss(self, features, training=False): x, labels = features scores = self(x) return self.task( labels=labels, predictions=scores, ) Explanation: Model construction We're going to try out both cross network and deep network to illustrate the advantage a cross network can bring to recommenders. As the data we just created only contains 2nd-order feature interactions, it would be sufficient to illustrate with a single-layered cross network. If we wanted to model higher-order feature interactions, we could stack multiple cross layers and use a multi-layered cross network. The two models we will be building are: 1. Cross Network with only one cross layer; 2. Deep Network with wider and deeper ReLU layers. We first build a unified model class whose loss is the mean squared error. End of explanation crossnet = Model(tfrs.layers.dcn.Cross()) deepnet = Model( tf.keras.Sequential([ tf.keras.layers.Dense(512, activation="relu"), tf.keras.layers.Dense(256, activation="relu"), tf.keras.layers.Dense(128, activation="relu") ]) ) Explanation: Then, we specify the cross network (with 1 cross layer of size 3) and the ReLU-based DNN (with layer sizes [512, 256, 128]): End of explanation train_data = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(1000) eval_data = tf.data.Dataset.from_tensor_slices((eval_x, eval_y)).batch(1000) Explanation: Model training Now that we have the data and models ready, we are going to train the models. We first shuffle and batch the data to prepare for model training. End of explanation epochs = 100 learning_rate = 0.4 Explanation: Then, we define the number of epochs as well as the learning rate. End of explanation crossnet.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate)) crossnet.fit(train_data, epochs=epochs, verbose=False) deepnet.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate)) deepnet.fit(train_data, epochs=epochs, verbose=False) Explanation: Alright, everything is ready now and let's compile and train the models. You could set verbose=True if you want to see how the model progresses. End of explanation crossnet_result = crossnet.evaluate(eval_data, return_dict=True, verbose=False) print(f"CrossNet(1 layer) RMSE is {crossnet_result['RMSE']:.4f} " f"using {crossnet.count_params()} parameters.") deepnet_result = deepnet.evaluate(eval_data, return_dict=True, verbose=False) print(f"DeepNet(large) RMSE is {deepnet_result['RMSE']:.4f} " f"using {deepnet.count_params()} parameters.") Explanation: Model evaluation We verify the model performance on the evaluation dataset and report the Root Mean Squared Error (RMSE, the lower the better). 
End of explanation mat = crossnet._model._dense.kernel features = ["country", "purchased_bananas", "purchased_cookbooks"] plt.figure(figsize=(9,9)) im = plt.matshow(np.abs(mat.numpy()), cmap=plt.cm.Blues) ax = plt.gca() divider = make_axes_locatable(plt.gca()) cax = divider.append_axes("right", size="5%", pad=0.05) plt.colorbar(im, cax=cax) cax.tick_params(labelsize=10) _ = ax.set_xticklabels([''] + features, rotation=45, fontsize=10) _ = ax.set_yticklabels([''] + features, fontsize=10) Explanation: We see that the cross network achieved magnitudes lower RMSE than a ReLU-based DNN, with magnitudes fewer parameters. This has suggested the efficieny of a cross network in learning feaure crosses. Model understanding We already know what feature crosses are important in our data, it would be fun to check whether our model has indeed learned the important feature cross. This can be done by visualizing the learned weight matrix in DCN. The weight $W_{ij}$ represents the learned importance of interaction between feature $x_i$ and $x_j$. End of explanation ratings = tfds.load("movie_lens/100k-ratings", split="train") ratings = ratings.map(lambda x: { "movie_id": x["movie_id"], "user_id": x["user_id"], "user_rating": x["user_rating"], "user_gender": int(x["user_gender"]), "user_zip_code": x["user_zip_code"], "user_occupation_text": x["user_occupation_text"], "bucketized_user_age": int(x["bucketized_user_age"]), }) Explanation: Darker colours represent stronger learned interactions - in this case, it's clear that the model learned that purchasing babanas and cookbooks together is important. If you are interested in trying out more complicated synthetic data, feel free to check out this paper. Movielens 1M example We now examine the effectiveness of DCN on a real-world dataset: Movielens 1M [3]. Movielens 1M is a popular dataset for recommendation research. It predicts users' movie ratings given user-related features and movie-related features. We use this dataset to demonstrate some common ways to utilize DCN. Data processing The data processing procedure follows a similar procedure as the basic ranking tutorial. End of explanation tf.random.set_seed(42) shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False) train = shuffled.take(80_000) test = shuffled.skip(80_000).take(20_000) Explanation: Next, we randomly split the data into 80% for training and 20% for testing. End of explanation feature_names = ["movie_id", "user_id", "user_gender", "user_zip_code", "user_occupation_text", "bucketized_user_age"] vocabularies = {} for feature_name in feature_names: vocab = ratings.batch(1_000_000).map(lambda x: x[feature_name]) vocabularies[feature_name] = np.unique(np.concatenate(list(vocab))) Explanation: Then, we create vocabulary for each feature. End of explanation class DCN(tfrs.Model): def __init__(self, use_cross_layer, deep_layer_sizes, projection_dim=None): super().__init__() self.embedding_dimension = 32 str_features = ["movie_id", "user_id", "user_zip_code", "user_occupation_text"] int_features = ["user_gender", "bucketized_user_age"] self._all_features = str_features + int_features self._embeddings = {} # Compute embeddings for string features. for feature_name in str_features: vocabulary = vocabularies[feature_name] self._embeddings[feature_name] = tf.keras.Sequential( [tf.keras.layers.StringLookup( vocabulary=vocabulary, mask_token=None), tf.keras.layers.Embedding(len(vocabulary) + 1, self.embedding_dimension) ]) # Compute embeddings for int features. 
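    # (Added note) IntegerLookup maps each raw integer value to a contiguous
    # vocabulary index; the "+ 1" in the Embedding input size leaves room for
    # the out-of-vocabulary bucket, mirroring the string features above.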
for feature_name in int_features: vocabulary = vocabularies[feature_name] self._embeddings[feature_name] = tf.keras.Sequential( [tf.keras.layers.IntegerLookup( vocabulary=vocabulary, mask_value=None), tf.keras.layers.Embedding(len(vocabulary) + 1, self.embedding_dimension) ]) if use_cross_layer: self._cross_layer = tfrs.layers.dcn.Cross( projection_dim=projection_dim, kernel_initializer="glorot_uniform") else: self._cross_layer = None self._deep_layers = [tf.keras.layers.Dense(layer_size, activation="relu") for layer_size in deep_layer_sizes] self._logit_layer = tf.keras.layers.Dense(1) self.task = tfrs.tasks.Ranking( loss=tf.keras.losses.MeanSquaredError(), metrics=[tf.keras.metrics.RootMeanSquaredError("RMSE")] ) def call(self, features): # Concatenate embeddings embeddings = [] for feature_name in self._all_features: embedding_fn = self._embeddings[feature_name] embeddings.append(embedding_fn(features[feature_name])) x = tf.concat(embeddings, axis=1) # Build Cross Network if self._cross_layer is not None: x = self._cross_layer(x) # Build Deep Network for deep_layer in self._deep_layers: x = deep_layer(x) return self._logit_layer(x) def compute_loss(self, features, training=False): labels = features.pop("user_rating") scores = self(features) return self.task( labels=labels, predictions=scores, ) Explanation: Model construction The model architecture we will be building starts with an embedding layer, which is fed into a cross network followed by a deep network. The embedding dimension is set to 32 for all the features. You could also use different embedding sizes for different features. End of explanation cached_train = train.shuffle(100_000).batch(8192).cache() cached_test = test.batch(4096).cache() Explanation: Model training We shuffle, batch and cache the training and test data. End of explanation def run_models(use_cross_layer, deep_layer_sizes, projection_dim=None, num_runs=5): models = [] rmses = [] for i in range(num_runs): model = DCN(use_cross_layer=use_cross_layer, deep_layer_sizes=deep_layer_sizes, projection_dim=projection_dim) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate)) models.append(model) model.fit(cached_train, epochs=epochs, verbose=False) metrics = model.evaluate(cached_test, return_dict=True) rmses.append(metrics["RMSE"]) mean, stdv = np.average(rmses), np.std(rmses) return {"model": models, "mean": mean, "stdv": stdv} Explanation: Let's define a function that runs a model multiple times and returns the model's RMSE mean and standard deviation out of multiple runs. End of explanation epochs = 8 learning_rate = 0.01 Explanation: We set some hyper-parameters for the models. Note that these hyper-parameters are set globally for all the models for demonstration purpose. If you want to obtain the best performance for each model, or conduct a fair comparison among models, then we'd suggest you to fine-tune the hyper-parameters. Remember that the model architecture and optimization schemes are intertwined. End of explanation dcn_result = run_models(use_cross_layer=True, deep_layer_sizes=[192, 192]) Explanation: DCN (stacked). We first train a DCN model with a stacked structure, that is, the inputs are fed to a cross network followed by a deep network. <div> <center> <img src="http://drive.google.com/uc?export=view&id=1X8qoMtIYKJz4yBYifvfw4QpAwrjr70e_" width="140"/> </center> </div> End of explanation dcn_lr_result = run_models(use_cross_layer=True, projection_dim=20, deep_layer_sizes=[192, 192]) Explanation: Low-rank DCN. 
To reduce the training and serving cost, we leverage low-rank techniques to approximate the DCN weight matrices. The rank is passed in through argument projection_dim; a smaller projection_dim results in a lower cost. Note that projection_dim needs to be smaller than (input size)/2 to reduce the cost. In practice, we've observed using low-rank DCN with rank (input size)/4 consistently preserved the accuracy of a full-rank DCN. <div> <center> <img src="http://drive.google.com/uc?export=view&id=1ZZfUTNdxjGAaAuwNrweKkLJ1PGxMmiCm" width="400"/> </center> </div> End of explanation dnn_result = run_models(use_cross_layer=False, deep_layer_sizes=[192, 192, 192]) Explanation: DNN. We train a same-sized DNN model as a reference. End of explanation print("DCN RMSE mean: {:.4f}, stdv: {:.4f}".format( dcn_result["mean"], dcn_result["stdv"])) print("DCN (low-rank) RMSE mean: {:.4f}, stdv: {:.4f}".format( dcn_lr_result["mean"], dcn_lr_result["stdv"])) print("DNN RMSE mean: {:.4f}, stdv: {:.4f}".format( dnn_result["mean"], dnn_result["stdv"])) Explanation: We evaluate the model on test data and report the mean and standard deviation out of 5 runs. End of explanation model = dcn_result["model"][0] mat = model._cross_layer._dense.kernel features = model._all_features block_norm = np.ones([len(features), len(features)]) dim = model.embedding_dimension # Compute the norms of the blocks. for i in range(len(features)): for j in range(len(features)): block = mat[i * dim:(i + 1) * dim, j * dim:(j + 1) * dim] block_norm[i,j] = np.linalg.norm(block, ord="fro") plt.figure(figsize=(9,9)) im = plt.matshow(block_norm, cmap=plt.cm.Blues) ax = plt.gca() divider = make_axes_locatable(plt.gca()) cax = divider.append_axes("right", size="5%", pad=0.05) plt.colorbar(im, cax=cax) cax.tick_params(labelsize=10) _ = ax.set_xticklabels([""] + features, rotation=45, ha="left", fontsize=10) _ = ax.set_yticklabels([""] + features, fontsize=10) Explanation: We see that DCN achieved better performance than a same-sized DNN with ReLU layers. Moreover, the low-rank DCN was able to reduce parameters while maintaining the accuracy. More on DCN. Besides what've been demonstrated above, there are more creative yet practically useful ways to utilize DCN [1]. DCN with a parallel structure. The inputs are fed in parallel to a cross network and a deep network. Concatenating cross layers. The inputs are fed in parallel to multiple cross layers to capture complementary feature crosses. <div class="fig figcenter fighighlight"> <center> <img src="http://drive.google.com/uc?export=view&id=11RpNuj9s0OgSav9TUuGA7v7PuFLL6nVR" hspace=40 width="600" style="display:block;"> <div class="figcaption"> <b>Left</b>: DCN with a parallel structure; <b>Right</b>: Concatenating cross layers. </div> </center> </div> Model understanding The weight matrix $W$ in DCN reveals what feature crosses the model has learned to be important. Recall that in the previous toy example, the importance of interactions between the $i$-th and $j$-th features is captured by the ($i, j$)-th element of $W$. What's a bit different here is that the feature embeddings are of size 32 instead of size 1. Hence, the importance will be characterized by the $(i, j)$-th block $W_{i,j}$ which is of dimension 32 by 32. In the following, we visualize the Frobenius norm [4] $||W_{i,j}||_F$ of each block, and a larger norm would suggest higher importance (assuming the features' embeddings are of similar scales). 
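For completeness, the Frobenius norm used here is $||W_{i,j}||_F = \sqrt{\sum_{a}\sum_{b} (W_{i,j})_{a,b}^{2}}$, which is exactly what np.linalg.norm(block, ord="fro") computes in the code above.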
Besides block norm, we could also visualize the entire matrix, or the mean/median/max value of each block. End of explanation
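A small follow-up sketch (an addition, not part of the original notebook; it reuses the features and block_norm variables computed above) that ranks the strongest learned feature crosses instead of only plotting them:
# Hypothetical helper: print the top off-diagonal blocks by Frobenius norm.
pairs = []
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        pairs.append((features[i], features[j], block_norm[i, j]))
for f1, f2, norm in sorted(pairs, key=lambda p: p[2], reverse=True)[:5]:
    print(f"{f1} x {f2}: {norm:.3f}")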
9,288
Given the following text description, write Python code to implement the functionality described below step by step Description: This is an example notebook The main purpose of this notebook is to have something to convert with gitnb. There is nothing interesting to see here. In order to make this point perfectly clear, I will start with some difficult math... Step1: some more code blocks... Step2: Here is a Raw NB Convert block
Python Code: 1+1 Explanation: This is an example notebook The main purpose of this notebook is to have something to convert with gitnb. There is nothing interesting to see here. In order to make this point perfectly clear, I will start with some difficult math... End of explanation import numpy as np eps=1e-10 def precision(v,p): v=np.array(v) p=np.array(p) tpa=[pred*int(eq) for (pred,eq) in zip(p,np.equal(v,p))] fpa=[pred*int(not eq) for (pred,eq) in zip(p,np.equal(v,p))] tp=sum(tpa) fp=sum(fpa) return tp/(fp+tp+eps) def recall(v,p): v=np.array(v) p=np.array(p) tpa=[pred*int(eq) for (pred,eq) in zip(p,np.equal(v,p))] fna=[(1-pred)*int(not eq) for (pred,eq) in zip(p,np.equal(v,p))] tp=sum(tpa) fn=sum(fna) return tp/(fn+tp+eps) def f2(a,b): pc=precision(a,b) rc=recall(a,b) return 5 * pc * rc / ((4*pc) + rc + eps) a=[1,1,0,1,1] b=[1,1,1,0,0] f2(a,b) Explanation: some more code blocks... End of explanation 2+2 Explanation: Here is a Raw NB Convert block End of explanation
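As a brief, optional check of the metric helpers above (an addition; it reuses the a, b and f2 already defined in this notebook), identical vectors should score close to 1.0 while the example pair evaluates to roughly 0.526:
# Added sanity check for the F2 helper defined above.
print(round(f2(a, a), 3), round(f2(a, b), 3))   # expected: 1.0 0.526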
9,289
Given the following text description, write Python code to implement the functionality described below step by step Description: Up-sampling with Transposed Convolution When we use neural networks to generate images, it usually involves up-sampling from low resolution to high resolution. There are various methods to conduct up-sample operation Step1: Convolution Operation Input Matrix We define a 4x4 matrix as the input. We randomly generate values for this matrix using 1-5. Step2: The matrix is visualized as below. The higher the intensity the bright the cell color is. Step3: We are using small values so that the display look simpler than with big values. If we use 0-255 just like an gray scale image, it'd look like below. Step4: Apply a convolution operation on these values can produce big values that are hard to nicely display. Also, we are ignoring the channel dimension usually used in image processing for a simplicity reason. Kernel We use a 3x3 kernel (filter) in this example (again no channel dimension). We only use 1-5 to make it easy to display the calculation results. Step5: Convolution With padding = 0 (padding='VALID') and strides = 1, the convolution produces a 2x2 matrix. $H_m, W_m$ Step6: The result of the convolution operation is as follows Step7: One important point of such convolution operation is that it keeps the positional connectivity between the input values and the output values. For example, output[0][0] is calculated from inputs[0 Step8: So, 9 values in the input matrix is used to produce 1 value in the output matrix. Going Backward Now, suppose we want to go the other direction. We want to associate 1 value in a matrix to 9 values to another matrix while keeping the same positional association. For example, the value in the left top corner of the input is associated with the 3x3 values in the left top corner of the output. This is the core idea of the transposed convolution which we can use to up-sample a small image into a larger one while making sure the positional association (connectivity) is maintained. Let's first define the convolution matrix and then talk about the transposed convolution matrix. Convolution Matrix We can express a convolution operation using a matrix. It is nothing but a kernel matrix rearranged so that we can use a matrix multiplication to conduct convolution operations. Step9: If we reshape the input into a column vector, we can use the matrix multiplication to perform convolution. Step10: We reshape it into the desired shape. Step11: This is exactly the same output as before. Transposed Convolution Matrix Let's transpose the convolution matrix. Step12: Let's make a new input whose shape is 4x1. Step13: We matrix-multiply C.T with x2 to up-sample x2 from 4 (2x2) to 16 (4x4). This operation has the same connectivity as the convolution but in the backward direction. As you can see, 1 value in the input x2 is connected to 9 values in the output matrix via the transposed convolution matrix.
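One relation worth keeping in mind while reading the code below (added here for reference, with $H_m$, $H_k$, $P$ and $S$ as defined above): a transposed convolution maps an input of height $H_m$ to an output of height $S\,(H_m - 1) + H_k - 2P$, so the 2x2 input, 3x3 kernel, $S=1$ and $P=0$ used later give the expected 4x4 output.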
Python Code: import numpy as np import matplotlib.pyplot as plt import keras import keras.backend as K from keras.layers import Conv2D from keras.models import Sequential %matplotlib inline Explanation: Up-sampling with Transposed Convolution When we use neural networks to generate images, it usually involves up-sampling from low resolution to high resolution. There are various methods to conduct up-sample operation: Nearest neighbor interpolation Bi-linear interpolation Bi-cubic interpolation All these methods involve some interpolation which we need to chose like a manual feature engineering that the network can not change later on. Instead, we could use the transposed convolution which has learnable parameters [1]. Examples of the transposed convolution usage: the generator in DCGAN takes randomly sampled values to produce a full-size image [2]. the semantic segmentation uses convolutional layers to extract features in the encoder and then restores the original image size in the encoder so that it can classify every pixel in the original image [3]. The transposed convolution is also known as: Fractionally-strided convolution Deconvolution But we will only use the word transposed convolution in this notebook. One caution: the transposed convolution is the cause of the checkerboard artifacts in generated images [4]. The paper recommends an up-sampling followed by convolution to reduce such issues. If the main objective is to generate images without such artifacts, it is worth considering one of the interpolation methods. End of explanation inputs = np.random.randint(1, 9, size=(4, 4)) inputs Explanation: Convolution Operation Input Matrix We define a 4x4 matrix as the input. We randomly generate values for this matrix using 1-5. End of explanation def show_matrix(m, color, cmap, title=None): rows, cols = len(m), len(m[0]) fig, ax = plt.subplots(figsize=(cols, rows)) ax.set_yticks(list(range(rows))) ax.set_xticks(list(range(cols))) ax.xaxis.tick_top() if title is not None: ax.set_title('{} {}'.format(title, m.shape), y=-0.5/rows) plt.imshow(m, cmap=cmap, vmin=0, vmax=1) for r in range(rows): for c in range(cols): text = '{:>3}'.format(int(m[r][c])) ax.text(c-0.2, r+0.15, text, color=color, fontsize=15) plt.show() def show_inputs(m, title='Inputs'): show_matrix(m, 'b', plt.cm.Vega10, title) def show_kernel(m, title='Kernel'): show_matrix(m, 'r', plt.cm.RdBu_r, title) def show_output(m, title='Output'): show_matrix(m, 'g', plt.cm.GnBu, title) show_inputs(inputs) Explanation: The matrix is visualized as below. The higher the intensity the bright the cell color is. End of explanation show_inputs(np.random.randint(100, 255, size=(4, 4))) Explanation: We are using small values so that the display look simpler than with big values. If we use 0-255 just like an gray scale image, it'd look like below. End of explanation kernel = np.random.randint(1, 5, size=(3, 3)) kernel show_kernel(kernel) Explanation: Apply a convolution operation on these values can produce big values that are hard to nicely display. Also, we are ignoring the channel dimension usually used in image processing for a simplicity reason. Kernel We use a 3x3 kernel (filter) in this example (again no channel dimension). We only use 1-5 to make it easy to display the calculation results. 
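As an aside (an addition, not used in the rest of the notebook): if we kept the channel dimension, the same weights would live in a Keras-style kernel tensor of shape (3, 3, 1, 1) for one input and one output channel.
# Hypothetical mapping of the 3x3 kernel to a single-channel Conv2D weight layout.
keras_style_kernel = kernel.reshape(3, 3, 1, 1)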
End of explanation def convolve(m, k): m_rows, m_cols = len(m), len(m[0]) # matrix rows, cols k_rows, k_cols = len(k), len(k[0]) # kernel rows, cols rows = m_rows - k_rows + 1 # result matrix rows cols = m_rows - k_rows + 1 # result matrix cols v = np.zeros((rows, cols), dtype=m.dtype) # result matrix for r in range(rows): for c in range(cols): v[r][c] = np.sum(m[r:r+k_rows, c:c+k_cols] * k) # sum of the element-wise multiplication return v Explanation: Convolution With padding = 0 (padding='VALID') and strides = 1, the convolution produces a 2x2 matrix. $H_m, W_m$: height and width of the input $H_k, W_k$: height and width of the kernel $P$: padding $S$: strides $H, W$: height and width of the output $W = \frac{W_m - W_k + 2P}{S} + 1$ $H = \frac{H_m - H_k + 2P}{S} + 1$ With the 4x4 matrix and 3x3 kernel with no zero padding and stride of 1: $\frac{4 - 3 + 2\cdot 0}{1} + 1 = 2$ So, with no zero padding and strides of 1, the convolution operation can be defined in a function like below: End of explanation output = convolve(inputs, kernel) output show_output(output) Explanation: The result of the convolution operation is as follows: End of explanation output[0][0] inputs[0:3, 0:3] kernel np.sum(inputs[0:3, 0:3] * kernel) # sum of the element-wise multiplication Explanation: One important point of such convolution operation is that it keeps the positional connectivity between the input values and the output values. For example, output[0][0] is calculated from inputs[0:3, 0:3]. The kernel is used to link between the two. End of explanation def convolution_matrix(m, k): m_rows, m_cols = len(m), len(m[0]) # matrix rows, cols k_rows, k_cols = len(k), len(k[0]) # kernel rows, cols # output matrix rows and cols rows = m_rows - k_rows + 1 cols = m_rows - k_rows + 1 # convolution matrix v = np.zeros((rows*cols, m_rows, m_cols)) for r in range(rows): for c in range(cols): i = r * cols + c v[i][r:r+k_rows, c:c+k_cols] = k v = v.reshape((rows*cols), -1) return v, rows, cols C, rows, cols = convolution_matrix(inputs, kernel) show_kernel(C, 'Convolution Matrix') Explanation: So, 9 values in the input matrix is used to produce 1 value in the output matrix. Going Backward Now, suppose we want to go the other direction. We want to associate 1 value in a matrix to 9 values to another matrix while keeping the same positional association. For example, the value in the left top corner of the input is associated with the 3x3 values in the left top corner of the output. This is the core idea of the transposed convolution which we can use to up-sample a small image into a larger one while making sure the positional association (connectivity) is maintained. Let's first define the convolution matrix and then talk about the transposed convolution matrix. Convolution Matrix We can express a convolution operation using a matrix. It is nothing but a kernel matrix rearranged so that we can use a matrix multiplication to conduct convolution operations. End of explanation def column_vector(m): return m.flatten().reshape(-1, 1) x = column_vector(inputs) x show_inputs(x) output = C @ x output show_output(output) Explanation: If we reshape the input into a column vector, we can use the matrix multiplication to perform convolution. End of explanation output = output.reshape(rows, cols) output show_output(output) Explanation: We reshape it into the desired shape. End of explanation show_kernel(C.T, 'Transposed Convolution Matrix') Explanation: This is exactly the same output as before. 
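One way to confirm the equivalence programmatically (a small added check that reuses convolve, C, rows, cols and column_vector from above):
# Added check: the matrix form reproduces the direct convolution.
assert np.allclose(convolve(inputs, kernel), (C @ column_vector(inputs)).reshape(rows, cols))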
Transposed Convolution Matrix Let's transpose the convolution matrix. End of explanation x2 = np.random.randint(1, 5, size=(4, 1)) x2 show_inputs(x2) Explanation: Let's make a new input whose shape is 4x1. End of explanation output2 = (C.T @ x2) output2 show_output(output2) output2 = output2.reshape(4, 4) output2 show_output(output2) Explanation: We matrix-multiply C.T with x2 to up-sample x2 from 4 (2x2) to 16 (4x4). This operation has the same connectivity as the convolution but in the backward direction. As you can see, 1 value in the input x2 is connected to 9 values in the output matrix via the transposed convolution matrix. End of explanation
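As a closing sketch (an addition to the original notebook; the layer weights are untrained here, we only inspect shapes), the same 2x2 to 4x4 up-sampling is what a learnable Keras layer would perform:
# Hypothetical shape check with Conv2DTranspose.
from keras.layers import Conv2DTranspose
from keras.models import Sequential
upsample = Sequential([Conv2DTranspose(1, (3, 3), strides=(1, 1), padding='valid', input_shape=(2, 2, 1))])
print(upsample.output_shape)   # expected: (None, 4, 4, 1)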
9,290
Given the following text description, write Python code to implement the functionality described below step by step Description: Viewing CNN Filters Review At this point, I've tested my CNN a little bit and learned that the hair really matters. If the CNN sees a lighter object representing a head with dark textures on either side of the head, it will think it is Lars. Fair enough, that might be one of the insights I would make when I compare myself with anyone, especially a female with long hair! Convolutional Filters I think some insight can be learned by actually looking at the filters. The convolutional filters are what's actually generating the inputs to the final fully connected and output layers. Again, the fully connectd layers are essentially a dot product of the filters and parts of the image Step1: Loading My Saved CNN We have to actually create the structure of the CNN before loading it. Something I tweaked with for a few minutes for before figuring it out. Step2: Import Function To View Convolutional Filters Let's import that code we found from github to view our convolutional filters Step3: First Convolutional Layer Filters Step4: Alrighty then... this looks like absolute garbage to me, i.e., the untrained eye. None of this particularly looks like anything. I'm trying to look for edges and whatnot, and I can make some out if I squint my eyes, but honestly it all just looks like static... Will the second level layers shed some more light into the meaning of life? Second Convolutional Layer Filters
Python Code: import cv2 import numpy as np from matplotlib import pyplot as plt %matplotlib inline # TFlearn libraries import tflearn from tflearn.layers.conv import conv_2d, max_pool_2d from tflearn.layers.core import input_data, dropout, fully_connected from tflearn.layers.estimator import regression Explanation: Viewing CNN Filters Review At this point, I've tested my CNN a little bit and learned that the hair really matters. If the CNN sees a lighter object representing a head with dark textures on either side of the head, it will think it is Lars. Fair enough, that might be one of the insights I would make when I compare myself with anyone, especially a female with long hair! Convolutional Filters I think some insight can be learned by actually looking at the filters. The convolutional filters are what's actually generating the inputs to the final fully connected and output layers. Again, the fully connectd layers are essentially a dot product of the filters and parts of the image: <img src="https://s3.ca-central-1.amazonaws.com/2017edmfasatb/chi_lars_face_detection/images/15_finished_convolution_3_filters.png" style="width: 500px;"/> The filters would provide a sneak peek, well actually a direct peek, into how the CNN is making its decisions. Often times, I've seen first layer and second layer filters come out as detecting edges, specific shapes, and in the context of human faces: ears, noses, mouths... etc. I found some code online that will help us visualize the filters. Viewing The CNN Filters End of explanation # sentdex's code to build the neural net using tflearn # Input layer --> conv layer w/ max pooling --> conv layer w/ max pooling --> fully connected layer --> output layer convnet = input_data(shape = [None, 91, 91, 1], name = 'input') convnet = conv_2d(convnet, 32, 10, activation = 'relu', name = 'conv_1') convnet = max_pool_2d(convnet, 2, name = 'max_pool_1') convnet = conv_2d(convnet, 64, 10, activation = 'relu', name = 'conv_2') convnet = max_pool_2d(convnet, 2, name = 'max_pool_2') convnet = fully_connected(convnet, 1024, activation = 'relu', name = 'fully_connected_1') convnet = dropout(convnet, 0.8, name = 'dropout_1') convnet = fully_connected(convnet, 2, activation = 'softmax', name = 'fully_connected_2') convnet = regression(convnet, optimizer = 'sgd', learning_rate = 0.01, loss = 'categorical_crossentropy', name = 'targets') # Define and load CNN model = tflearn.DNN(convnet) model.load('model_4_epochs_0.03_compression_99.6_named.tflearn') Explanation: Loading My Saved CNN We have to actually create the structure of the CNN before loading it. Something I tweaked with for a few minutes for before figuring it out. 
End of explanation import six def display_convolutions(model, layer, padding=4, filename=''): if isinstance(layer, six.string_types): vars = tflearn.get_layer_variables_by_name(layer) variable = vars[0] else: variable = layer.W data = model.get_weights(variable) # N is the total number of convolutions N = data.shape[2] * data.shape[3] print('There are {} filters'.format(N)) # Ensure the resulting image is square filters_per_row = int(np.ceil(np.sqrt(N))) # Assume the filters are square filter_size = data.shape[0] # Size of the result image including padding result_size = filters_per_row * (filter_size + padding) - padding # Initialize result image to all zeros result = np.zeros((result_size, result_size)) # Tile the filters into the result image filter_x = 0 filter_y = 0 for n in range(data.shape[3]): for c in range(data.shape[2]): if filter_x == filters_per_row: filter_y += 1 filter_x = 0 for i in range(filter_size): for j in range(filter_size): result[filter_y * (filter_size + padding) + i, filter_x * (filter_size + padding) + j] = \ data[i, j, c, n] filter_x += 1 # Normalize image to 0-1 min = result.min() max = result.max() result = (result - min) / (max - min) # Plot figure plt.figure(figsize=(10, 10)) plt.axis('off') plt.imshow(result, cmap='gray', interpolation='nearest') # Save plot if filename is set if filename != '': plt.savefig(filename, bbox_inches='tight', pad_inches=0) plt.show() Explanation: Import Function To View Convolutional Filters Let's import that code we found from github to view our convolutional filters End of explanation # Display first convolutional layer filters (32 filters) display_convolutions(model, 'conv_1') Explanation: First Convolutional Layer Filters End of explanation # Display first convolutional layer filters ( filters) display_convolutions(model, 'conv_2', filename = 'hello') Explanation: Alrighty then... this looks like absolute garbage to me, i.e., the untrained eye. None of this particularly looks like anything. I'm trying to look for edges and whatnot, and I can make some out if I squint my eyes, but honestly it all just looks like static... Will the second level layers shed some more light into the meaning of life? Second Convolutional Layer Filters End of explanation
9,291
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorBoard Visualizations In this tutorial, we will learn how to visualize different types of NLP based Embeddings via TensorBoard. TensorBoard is a data visualization framework for visualizing and inspecting the TensorFlow runs and graphs. We will use a built-in Tensorboard visualizer called Embedding Projector in this tutorial. It lets you interactively visualize and analyze high-dimensional data like embeddings. Read Data For this tutorial, a transformed MovieLens dataset<sup>[1]</sup> is used. You can download the final prepared csv from here. Step1: 1. Visualizing Doc2Vec In this part, we will learn about visualizing Doc2Vec Embeddings aka Paragraph Vectors via TensorBoard. The input documents for training will be the synopsis of movies, on which Doc2Vec model is trained. <img src="Tensorboard.png"> The visualizations will be a scatterplot as seen in the above image, where each datapoint is labelled by the movie title and colored by it's corresponding genre. You can also visit this Projector link which is configured with my embeddings for the above mentioned dataset. Preprocess Text Below, we define a function to read the training documents, pre-process each document using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number. Step2: Let's take a look at the training corpus. Step3: Training the Doc2Vec Model We'll instantiate a Doc2Vec model with a vector size with 50 words and iterating over the training corpus 55 times. We set the minimum word count to 2 in order to give higher frequency words more weighting. Model accuracy can be improved by increasing the number of iterations but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes. Step4: Now, we'll save the document embedding vectors per doctag. Step5: Prepare the Input files for Tensorboard Tensorboard takes two Input files. One containing the embedding vectors and the other containing relevant metadata. We'll use a gensim script to directly convert the embedding file saved in word2vec format above to the tsv format required in Tensorboard. Step6: The script above generates two files, movie_plot_tensor.tsv which contain the embedding vectors and movie_plot_metadata.tsv containing doctags. But, these doctags are simply the unique index values and hence are not really useful to interpret what the document was while visualizing. So, we will overwrite movie_plot_metadata.tsv to have a custom metadata file with two columns. The first column will be for the movie titles and the second for their corresponding genres. Step7: Now you can go to http Step8: Train LDA Model Step9: You can refer to this notebook also before training the LDA model. It contains tips and suggestions for pre-processing the text data, and how to train the LDA model to get good results. Doc-Topic distribution Now we will use get_document_topics which infers the topic distribution of a document. It basically returns a list of (topic_id, topic_probability) for each document in the input corpus. 
Step10: The above output shows the topic distribution of first document in the corpus as a list of (topic_id, topic_probability). Now, using the topic distribution of a document as it's vector embedding, we will plot all the documents in our corpus using Tensorboard. Prepare the Input files for Tensorboard Tensorboard takes two input files, one containing the embedding vectors and the other containing relevant metadata. As described above we will use the topic distribution of documents as their embedding vector. Metadata file will consist of Movie titles with their genres. Step11: Now you can go to http Step12: Next, we upload the previous tensor file "doc_lda_tensor.tsv" and this new metadata file to http Step13: You can even use pyLDAvis to deduce topics more efficiently. It provides a deeper inspection of the terms highly associated with each individual topic. For this, it uses a measure called relevance of a term to a topic that allows users to flexibly rank terms best suited for a meaningful topic interpretation. It's weight parameter called ฮป can be adjusted to display useful terms which could help in differentiating topics efficiently.
Python Code: import gensim import pandas as pd import smart_open import random from smart_open import smart_open # read data dataframe = pd.read_csv('movie_plots.csv') dataframe Explanation: TensorBoard Visualizations In this tutorial, we will learn how to visualize different types of NLP based Embeddings via TensorBoard. TensorBoard is a data visualization framework for visualizing and inspecting the TensorFlow runs and graphs. We will use a built-in Tensorboard visualizer called Embedding Projector in this tutorial. It lets you interactively visualize and analyze high-dimensional data like embeddings. Read Data For this tutorial, a transformed MovieLens dataset<sup>[1]</sup> is used. You can download the final prepared csv from here. End of explanation def read_corpus(documents): for i, plot in enumerate(documents): yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(plot, max_len=30), [i]) train_corpus = list(read_corpus(dataframe.Plots)) Explanation: 1. Visualizing Doc2Vec In this part, we will learn about visualizing Doc2Vec Embeddings aka Paragraph Vectors via TensorBoard. The input documents for training will be the synopsis of movies, on which Doc2Vec model is trained. <img src="Tensorboard.png"> The visualizations will be a scatterplot as seen in the above image, where each datapoint is labelled by the movie title and colored by it's corresponding genre. You can also visit this Projector link which is configured with my embeddings for the above mentioned dataset. Preprocess Text Below, we define a function to read the training documents, pre-process each document using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number. End of explanation train_corpus[:2] Explanation: Let's take a look at the training corpus. End of explanation model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=55) model.build_vocab(train_corpus) model.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter) Explanation: Training the Doc2Vec Model We'll instantiate a Doc2Vec model with a vector size with 50 words and iterating over the training corpus 55 times. We set the minimum word count to 2 in order to give higher frequency words more weighting. Model accuracy can be improved by increasing the number of iterations but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes. End of explanation model.save_word2vec_format('doc_tensor.w2v', doctag_vec=True, word_vec=False) Explanation: Now, we'll save the document embedding vectors per doctag. End of explanation %run ../../gensim/scripts/word2vec2tensor.py -i doc_tensor.w2v -o movie_plot Explanation: Prepare the Input files for Tensorboard Tensorboard takes two Input files. One containing the embedding vectors and the other containing relevant metadata. We'll use a gensim script to directly convert the embedding file saved in word2vec format above to the tsv format required in Tensorboard. 
End of explanation with smart_open('movie_plot_metadata.tsv','w') as w: w.write('Titles\tGenres\n') for i,j in zip(dataframe.Titles, dataframe.Genres): w.write("%s\t%s\n" % (i,j)) Explanation: The script above generates two files, movie_plot_tensor.tsv which contain the embedding vectors and movie_plot_metadata.tsv containing doctags. But, these doctags are simply the unique index values and hence are not really useful to interpret what the document was while visualizing. So, we will overwrite movie_plot_metadata.tsv to have a custom metadata file with two columns. The first column will be for the movie titles and the second for their corresponding genres. End of explanation import pandas as pd import re from gensim.parsing.preprocessing import remove_stopwords, strip_punctuation from gensim.models import ldamodel from gensim.corpora.dictionary import Dictionary # read data dataframe = pd.read_csv('movie_plots.csv') # remove stopwords and punctuations def preprocess(row): return strip_punctuation(remove_stopwords(row.lower())) dataframe['Plots'] = dataframe['Plots'].apply(preprocess) # Convert data to required input format by LDA texts = [] for line in dataframe.Plots: lowered = line.lower() words = re.findall(r'\w+', lowered, flags = re.UNICODE | re.LOCALE) texts.append(words) # Create a dictionary representation of the documents. dictionary = Dictionary(texts) # Filter out words that occur less than 2 documents, or more than 30% of the documents. dictionary.filter_extremes(no_below=2, no_above=0.3) # Bag-of-words representation of the documents. corpus = [dictionary.doc2bow(text) for text in texts] Explanation: Now you can go to http://projector.tensorflow.org/ and upload the two files by clicking on Load data in the left panel. For demo purposes I have uploaded the Doc2Vec embeddings generated from the model trained above here. You can access the Embedding projector configured with these uploaded embeddings at this link. Using Tensorboard For the visualization purpose, the multi-dimensional embeddings that we get from the Doc2Vec model above, needs to be downsized to 2 or 3 dimensions. So that we basically end up with a new 2d or 3d embedding which tries to preserve information from the original multi-dimensional embedding. As these vectors are reduced to a much smaller dimension, the exact cosine/euclidean distances between them are not preserved, but rather relative, and hence as youโ€™ll see below the nearest similarity results may change. TensorBoard has two popular dimensionality reduction methods for visualizing the embeddings and also provides a custom method based on text searches: Principal Component Analysis: PCA aims at exploring the global structure in data, and could end up losing the local similarities between neighbours. It maximizes the total variance in the lower dimensional subspace and hence, often preserves the larger pairwise distances better than the smaller ones. See an intuition behind it in this nicely explained answer on stackexchange. T-SNE: The idea of T-SNE is to place the local neighbours close to each other, and almost completely ignoring the global structure. It is useful for exploring local neighborhoods and finding local clusters. But the global trends are not represented accurately and the separation between different groups is often not preserved (see the t-sne plots of our data below which testify the same). Custom Projections: This is a custom bethod based on the text searches you define for different directions. 
It could be useful for finding meaningful directions in the vector space, for example, female to male, currency to country etc. You can refer to this doc for instructions on how to use and navigate through different panels available in TensorBoard. Visualize using PCA The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three. <img src="pca.png"> The above plot was made using the first two principal components with total variance covered being 36.5%. Visualize using T-SNE Data is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of it's two hyperparameters. The first one is Perplexity, which is basically a measure of information. It may be viewed as a knob that sets the number of effective nearest neighbors<sup>[2]</sup>. The second one is learning rate that defines how quickly an algorithm learns on encountering new examples/data points. <img src="tsne.png"> The above plot was generated with perplexity 8, learning rate 10 and iteration 500. Though the results could vary on successive runs, and you may not get the exact plot as above with same hyperparameter settings. But some small clusters will start forming as above, with different orientations. 2. Visualizing LDA In this part, we will see how to visualize LDA in Tensorboard. We will be using the Document-topic distribution as the embedding vector of a document. Basically, we treat topics as the dimensions and the value in each dimension represents the topic proportion of that topic in the document. Preprocess Text We use the movie Plots as our documents in corpus and remove rare words and common words based on their document frequency. Below we remove words that appear in less than 2 documents or in more than 30% of the documents. End of explanation # Set training parameters. num_topics = 10 chunksize = 2000 passes = 50 iterations = 200 eval_every = None # Train model model = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, chunksize=chunksize, alpha='auto', eta='auto', iterations=iterations, num_topics=num_topics, passes=passes, eval_every=eval_every) Explanation: Train LDA Model End of explanation # Get document topics all_topics = model.get_document_topics(corpus, minimum_probability=0) all_topics[0] Explanation: You can refer to this notebook also before training the LDA model. It contains tips and suggestions for pre-processing the text data, and how to train the LDA model to get good results. Doc-Topic distribution Now we will use get_document_topics which infers the topic distribution of a document. It basically returns a list of (topic_id, topic_probability) for each document in the input corpus. End of explanation # create file for tensors with smart_open('doc_lda_tensor.tsv','w') as w: for doc_topics in all_topics: for topics in doc_topics: w.write(str(topics[1])+ "\t") w.write("\n") # create file for metadata with smart_open('doc_lda_metadata.tsv','w') as w: w.write('Titles\tGenres\n') for j, k in zip(dataframe.Titles, dataframe.Genres): w.write("%s\t%s\n" % (j, k)) Explanation: The above output shows the topic distribution of first document in the corpus as a list of (topic_id, topic_probability). Now, using the topic distribution of a document as it's vector embedding, we will plot all the documents in our corpus using Tensorboard. 
Prepare the Input files for Tensorboard Tensorboard takes two input files, one containing the embedding vectors and the other containing relevant metadata. As described above we will use the topic distribution of documents as their embedding vector. Metadata file will consist of Movie titles with their genres. End of explanation tensors = [] for doc_topics in all_topics: doc_tensor = [] for topic in doc_topics: if round(topic[1], 3) > 0: doc_tensor.append((topic[0], float(round(topic[1], 3)))) # sort topics according to highest probabilities doc_tensor = sorted(doc_tensor, key=lambda x: x[1], reverse=True) # store vectors to add in metadata file tensors.append(doc_tensor[:5]) # overwrite metadata file i=0 with smart_open('doc_lda_metadata.tsv','w') as w: w.write('Titles\tGenres\n') for j,k in zip(dataframe.Titles, dataframe.Genres): w.write("%s\t%s\n" % (''.join((str(j), str(tensors[i]))),k)) i+=1 Explanation: Now you can go to http://projector.tensorflow.org/ and upload these two files by clicking on Load data in the left panel. For demo purposes I have uploaded the LDA doc-topic embeddings generated from the model trained above here. You can also access the Embedding projector configured with these uploaded embeddings at this link. Visualize using PCA The Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three. <img src="doc_lda_pca.png"> From PCA, we get a simplex (tetrahedron in this case) where each data point represent a document. These data points are colored according to their Genres which were given in the Movie dataset. As we can see there are a lot of points which cluster at the corners of the simplex. This is primarily due to the sparsity of vectors we are using. The documents at the corners primarily belongs to a single topic (hence, large weight in a single dimension and other dimensions have approximately zero weight.) You can modify the metadata file as explained below to see the dimension weights along with the Movie title. Now, we will append the topics with highest probability (topic_id, topic_probability) to the document's title, in order to explore what topics do the cluster corners or edges dominantly belong to. For this, we just need to overwrite the metadata file as below: End of explanation model.show_topic(topicid=0, topn=15) Explanation: Next, we upload the previous tensor file "doc_lda_tensor.tsv" and this new metadata file to http://projector.tensorflow.org/ . <img src="topic_with_coordinate.png"> Voila! Now we can click on any point to see it's top topics with their probabilty in that document, along with the title. As we can see in the above example, "Beverly hill cops" primarily belongs to the 0th and 1st topic as they have the highest probability amongst all. Visualize using T-SNE In T-SNE, the data is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of it's two hyperparameters. The first one is Perplexity, which is basically a measure of information. It may be viewed as a knob that sets the number of effective nearest neighbors[2]. The second one is learning rate that defines how quickly an algorithm learns on encountering new examples/data points. Now, as the topic distribution of a document is used as itโ€™s embedding vector, t-sne ends up forming clusters of documents belonging to same topics. 
In order to understand and interpret about the theme of those topics, we can use show_topic() to explore the terms that the topics consisted of. <img src="doc_lda_tsne.png"> The above plot was generated with perplexity 11, learning rate 10 and iteration 1100. Though the results could vary on successive runs, and you may not get the exact plot as above even with same hyperparameter settings. But some small clusters will start forming as above, with different orientations. I named some clusters above based on the genre of it's movies and also using the show_topic() to see relevant terms of the topic which was most prevelant in a cluster. Most of the clusters had doocumets belonging dominantly to a single topic. For ex. The cluster with movies belonging primarily to topic 0 could be named Fantasy/Romance based on terms displayed below for topic 0. You can play with the visualization yourself on this link and try to conclude a label for clusters based on movies it has and their dominant topic. You can see the top 5 topics of every point by hovering over it. Now, we can notice that their are more than 10 clusters in the above image, whereas we trained our model for num_topics=10. It's because their are few clusters, which has documents belonging to more than one topic with an approximately close topic probability values. End of explanation import pyLDAvis.gensim viz = pyLDAvis.gensim.prepare(model, corpus, dictionary) pyLDAvis.display(viz) Explanation: You can even use pyLDAvis to deduce topics more efficiently. It provides a deeper inspection of the terms highly associated with each individual topic. For this, it uses a measure called relevance of a term to a topic that allows users to flexibly rank terms best suited for a meaningful topic interpretation. It's weight parameter called ฮป can be adjusted to display useful terms which could help in differentiating topics efficiently. End of explanation
9,292
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualization Introduction When you are running a simulation, it is often useful to see what is going on by visualizing particles in a 3D view or by plotting observables over time. That way, you can easily determine things like whether your choice of parameters has led to a stable simulation or whether your system has equilibrated. You may even be able to do your complete data analysis in real time as the simulation progresses. Thanks to ESPResSo's Python interface, we can make use of standard libraries like Mayavi or OpenGL (for interactive 3D views) and Matplotlib (for line graphs) for this purpose. We will also use NumPy, which both of these libraries depend on, to store data and perform some basic analysis. Simulation First, we need to set up a simulation. We will simulate a simple Lennard-Jones liquid. Step1: Live plotting Let's have a look at the total energy of the simulation. We can determine the individual energies in the system using <tt>system.analysis.energy()</tt>. We will adapt the <tt>main()</tt> function to store the total energy at each integration run into a NumPy array. We will also create a function to draw a plot after each integration run. Step2: Live visualization and plotting To interact with a live visualization, we need to move the main integration loop into a secondary thread and run the visualizer in the main thread (note that visualization or plotting cannot be run in secondary threads). First, choose a visualizer Step3: Then, re-define the <tt>main()</tt> function to run the visualizer Step4: Next, create a secondary thread for the <tt>main()</tt> function. However, as we now have multiple threads, and the first thread is already used by the visualizer, we cannot call <tt>update_plot()</tt> from the <tt>main()</tt> anymore. The solution is to register the <tt>update_plot()</tt> function as a callback of the visualizer
Python Code: from matplotlib import pyplot import espressomd import numpy espressomd.assert_features("LENNARD_JONES") # system parameters (10000 particles) box_l = 10.7437 density = 0.7 # interaction parameters (repulsive Lennard-Jones) lj_eps = 1.0 lj_sig = 1.0 lj_cut = 1.12246 lj_cap = 20 # integration parameters system = espressomd.System(box_l=[box_l, box_l, box_l]) system.time_step = 0.0001 system.cell_system.skin = 0.4 system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=42) # warmup integration (with capped LJ potential) warm_steps = 100 warm_n_times = 30 # do the warmup until the particles have at least the distance min_dist min_dist = 0.9 # integration int_steps = 1000 int_n_times = 100 ############################################################# # Setup System # ############################################################# # interaction setup system.non_bonded_inter[0, 0].lennard_jones.set_params( epsilon=lj_eps, sigma=lj_sig, cutoff=lj_cut, shift="auto") system.force_cap = lj_cap # particle setup volume = box_l * box_l * box_l n_part = int(volume * density) for i in range(n_part): system.part.add(id=i, pos=numpy.random.random(3) * system.box_l) act_min_dist = system.analysis.min_dist() ############################################################# # Warmup Integration # ############################################################# # set LJ cap lj_cap = 20 system.force_cap = lj_cap # warmup integration loop i = 0 while (i < warm_n_times and act_min_dist < min_dist): system.integrator.run(warm_steps) # warmup criterion act_min_dist = system.analysis.min_dist() i += 1 # increase LJ cap lj_cap = lj_cap + 10 system.force_cap = lj_cap ############################################################# # Integration # ############################################################# # remove force capping lj_cap = 0 system.force_cap = lj_cap def main(): for i in range(int_n_times): print("\rrun %d at time=%.0f " % (i, system.time), end='') system.integrator.run(int_steps) print('\rSimulation complete') main() Explanation: Visualization Introduction When you are running a simulation, it is often useful to see what is going on by visualizing particles in a 3D view or by plotting observables over time. That way, you can easily determine things like whether your choice of parameters has led to a stable simulation or whether your system has equilibrated. You may even be able to do your complete data analysis in real time as the simulation progresses. Thanks to ESPResSo's Python interface, we can make use of standard libraries like Mayavi or OpenGL (for interactive 3D views) and Matplotlib (for line graphs) for this purpose. We will also use NumPy, which both of these libraries depend on, to store data and perform some basic analysis. Simulation First, we need to set up a simulation. We will simulate a simple Lennard-Jones liquid. 
End of explanation matplotlib_notebook = True # toggle this off when outside IPython/Jupyter # setup matplotlib canvas pyplot.xlabel("Time") pyplot.ylabel("Energy") plot, = pyplot.plot([0], [0]) if matplotlib_notebook: from IPython import display else: pyplot.show(block=False) # setup matplotlib update function current_time = -1 def update_plot(): i = current_time if i < 3: return None plot.set_xdata(energies[:i + 1, 0]) plot.set_ydata(energies[:i + 1, 1]) pyplot.xlim(0, energies[i, 0]) pyplot.ylim(energies[:i + 1, 1].min(), energies[:i + 1, 1].max()) # refresh matplotlib GUI if matplotlib_notebook: display.clear_output(wait=True) display.display(pyplot.gcf()) else: pyplot.draw() pyplot.pause(0.01) # re-define the main() function def main(): global current_time for i in range(int_n_times): system.integrator.run(int_steps) energies[i] = (system.time, system.analysis.energy()['total']) current_time = i update_plot() if matplotlib_notebook: display.clear_output(wait=True) system.time = 0 # reset system timer energies = numpy.zeros((int_n_times, 2)) main() if not matplotlib_notebook: pyplot.close() Explanation: Live plotting Let's have a look at the total energy of the simulation. We can determine the individual energies in the system using <tt>system.analysis.energy()</tt>. We will adapt the <tt>main()</tt> function to store the total energy at each integration run into a NumPy array. We will also create a function to draw a plot after each integration run. End of explanation from espressomd import visualization from threading import Thread visualizer = visualization.openGLLive(system) # alternative: visualization.mayaviLive(system) Explanation: Live visualization and plotting To interact with a live visualization, we need to move the main integration loop into a secondary thread and run the visualizer in the main thread (note that visualization or plotting cannot be run in secondary threads). First, choose a visualizer: End of explanation def main(): global current_time for i in range(int_n_times): system.integrator.run(int_steps) energies[i] = (system.time, system.analysis.energy()['total']) current_time = i visualizer.update() system.time = 0 # reset system timer Explanation: Then, re-define the <tt>main()</tt> function to run the visualizer: End of explanation # setup new matplotlib canvas if matplotlib_notebook: pyplot.xlabel("Time") pyplot.ylabel("Energy") plot, = pyplot.plot([0], [0]) # execute main() in a secondary thread t = Thread(target=main) t.daemon = True t.start() # execute the visualizer in the main thread visualizer.register_callback(update_plot, interval=int_steps // 2) visualizer.start() Explanation: Next, create a secondary thread for the <tt>main()</tt> function. However, as we now have multiple threads, and the first thread is already used by the visualizer, we cannot call <tt>update_plot()</tt> from the <tt>main()</tt> anymore. The solution is to register the <tt>update_plot()</tt> function as a callback of the visualizer: End of explanation
9,293
Given the following text description, write Python code to implement the functionality described below step by step Description: Launching Using Spark 1.4 and Python 3.4. The way of launching the ipython notebook has changed IPYTHON=1 IPYTHON_OPTS=notebook PYSPARK_PYTHON=python3 pyspark Step1: Create the SQLContext Step2: Create different "classes" for parsing the input Each row contains the (already computed) scenario values for each date and risk factor Step3: and because the number of scenarios is fixed, each scenario is a column Step4: and we can parse the rows of the csv file accordingly Step5: Process the file in Spark Step6: Let's do some VaR aggregation For each day, we want to aggregate scenarios from different risk factors, and then compute the Value at Risk per day. Step7: Slightly more complex case, with two portfolios Define portfolios and put them into a Spark DataFrame Step8: Me trying to register python UDFs
Python Code: import os, sys from pyspark.sql import SQLContext, Row import datetime from collections import namedtuple import numpy as np import pandas as pd Explanation: Launching Using Spark 1.4 and Python 3.4. The way of launching the ipython notebook has changed IPYTHON=1 IPYTHON_OPTS=notebook PYSPARK_PYTHON=python3 pyspark End of explanation sql = SQLContext(sc) Explanation: Create the SQLContext End of explanation RFScenario = namedtuple('RFScenario', ('rf', 'date', 'neutral', 'scenarios')) Explanation: Create different "classes" for parsing the input Each row contains the (already computed) scenario values for each date and risk factor: Row = DAY x RiskFactor x NeutralScenario x Scenarios Ideally the rows would be parsed like this, but because custom row aggregation is not fully supported End of explanation def construct_scenarios_type(number_scenarios=250, name = 'Scenarios'): names = ['rf', 'date', 'neutral'] scenario_cols = ["s%d"%x for x in range(1,number_scenarios+1)] names.extend(scenario_cols) Scenarios = namedtuple('Scenarios', names) return Scenarios, scenario_cols Scenarios, scenario_cols = construct_scenarios_type() Explanation: and because the number of scenarios is fixed, each scenario is a column End of explanation DATA_DIR = os.path.join(os.pardir, 'data') csv_filename = os.path.join(DATA_DIR, "scenarios2.csv") pd.read_csv(csv_filename, header=None).head() from pyspark.mllib.linalg import Vectors, DenseVector, SparseVector, _convert_to_vector def parse(row): DATE_FMT = "%Y-%m-%d" row[0] = row[0] row[1] = datetime.datetime.strptime(row[1], DATE_FMT) for i in np.arange(2,len(row)): row[i] = float(row[i]) return RFScenario(row[0], row[1], row[2], DenseVector(row[3:6])) def parse_explicit(row): DATE_FMT = "%Y-%m-%d" row[0] = row[0] row[1] = datetime.datetime.strptime(row[1], DATE_FMT) for i in np.arange(2,len(row)): row[i] = float(row[i]) return Scenarios(*row) Explanation: and we can parse the rows of the csv file accordingly End of explanation lines = sc.textFile(csv_filename) parts = lines.map(lambda l: l.split(",")) rows = parts.map(parse) rows_exp = parts.map(parse_explicit) df_exp = sql.createDataFrame(rows_exp) df_exp.head(1) Explanation: Process the file in Spark End of explanation def var(scenarios, level=99, neutral_scenario=0): pnls = scenarios - neutral_scenario return - np.percentile(pnls, 100-level, interpolation='linear') scenario_dates = df_exp.groupBy('date').sum() var_rdd = scenario_dates.map(lambda r: (r[0], r[1], float(var(np.array(r[2:]) - r[1])))) df_var = sql.createDataFrame(var_rdd, schema=['date', 'neutral', 'var']) %matplotlib notebook df_var.toPandas().plot() Explanation: Let's do some VaR aggregation For each day, we want to aggregate scenarios from different risk factors, and then compute the Value at Risk per day. 
End of explanation pf_rdd = sc.parallelize([('P1', 'RF1', 1.), ('P1', 'RF2', 2.), ('P2', 'RF1', 0.2), ('P2', 'RF2', -0.8)]) dfpf = sql.createDataFrame(pf_rdd, ['portfolio', 'rf', 'qty']) dfpf.collect() res = df_exp.join(dfpf, dfpf.rf == df_exp.rf) res.head(1) # scenario_dates = df_exp.groupBy('date').sum() var_per_portfolio = res.groupBy('date', 'portfolio').sum() # var_per_portfolio.toPandas().plot() var_per_portfolio = var_per_portfolio.map(lambda r: (r[0], r[1], r[2], float(var(np.array(r[3:]) - r[2])))) var_per_portfolio = sql.createDataFrame(var_per_portfolio, schema=['date', 'portfolio', 'neutral', 'var']) %matplotlib notebook df1 = var_per_portfolio.toPandas() df2 = df1.set_index(['date', 'portfolio']) # ['neutral'].plot(subplots=True) df3 = df2.unstack(1) #['var'].plot(subplots=True) df3 Explanation: Slightly more complex case, with two portfolios Define portfolios and put them into a Spark DataFrame End of explanation f = sql.udf.register("fadd", lambda x: (np.array(x[3]) * 3.1).tolist(), ArrayType(FloatType())) fagg = sql.udf.register("fagg", lambda x,y: (np.array(x[3]) + np.array(y[3])).tolist(), ArrayType(FloatType())) sql.registerDataFrameAsTable(df, 'scen') sql.sql('select date, fadd(scenarios) from scen group by date').collect() Explanation: Me trying to register python UDFs End of explanation
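The UDF snippet at the end of the cell above references ArrayType and FloatType without importing them from pyspark.sql.types, appears to register the table from a df that is never created in the cells shown, and its group-by query would still need an aggregate rather than a plain scalar UDF. Below is a minimal sketch of registering and applying a scalar Python UDF with the same SQLContext; the name scale_one and the 3.1 factor are purely illustrative, and registerFunction is the Spark 1.x registration entry point:

from pyspark.sql.types import ArrayType, FloatType  # both types are needed by the snippet above

# `sql` and `df_exp` are the SQLContext and DataFrame built earlier in this notebook
sql.registerFunction("scale_one", lambda v: float(v) * 3.1, FloatType())
sql.registerDataFrameAsTable(df_exp, 'scen')
sql.sql("select date, scale_one(s1) from scen").collect()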
9,294
Given the following text description, write Python code to implement the functionality described below step by step Description: <div style='background-image Step1: Exercise 1 Define a Python function called "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$ Step2: Exercise 2 Calculate the numerical derivative by applying the differentiation matrix $D_{ij}$. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the Chebyshev collocation points. Calculate the numerical derivative and the difference from the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact? Step3: Exercise 3 Now that the numerical derivative is available, we can visually inspect our results. Make a plot of both the analytical and numerical derivatives, together with the difference error.
Python Code: # This is a configuration step for the exercise. Please run it before calculating the derivative! import numpy as np import matplotlib.pyplot as plt # Show the plots in the Notebook. plt.switch_backend("nbagg") Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'> <div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px"> <div style="position: relative ; top: 50% ; transform: translatey(-50%)"> <div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div> <div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Numerical derivatives based on a derivative matrix</div> </div> </div> </div> Seismo-Live: http://seismo-live.org Authors: Fabian Linder (@fablindner) Heiner Igel (@heinerigel) David Vargas (@dvargas) Basic Equations Calculating a derivative using the differentation theorem of the Fourier Transform is in the mathematical sense a convolution of the function $f(x)$ with $ik$, where $k$ is the wavenumber and $i$ the imaginary unit. This can also be formulated as a matrix-vector product involving so-called Toeplitz matrices. An elegant (but inefficient) way of performing a derivative operation on a space-dependent function described on the Chebyshev collocation points is by defining a derivative matrix $D_{ij}$ $$ D_{ij} \ = \ -\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = N} $$ $$ D_{ij} \ = \ -\frac{1}{2} \frac{x_i}{1-x_i^2} \hspace{1.5cm} \text{for i = j = 1,2,...,N-1} $$ $$ D_{ij} \ = \ \frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j} \hspace{1.5cm} \text{for i $\neq$ j = 0,1,...,N}$$ where $N+1$ is the number of Chebyshev collocation points $ \ x_i = cos(i\pi / N)$, $ \ i=0,...,N$ and the $c_i$ are given as $$ c_i = 2 \hspace{1.5cm} \text{for i = 0 or N} $$ $$ c_i = 1 \hspace{1.5cm} \text{otherwise} $$ This differentiation matrix allows us to write the derivative of the function $f_i = f(x_i)$ (possibly depending on time) simply as $$\partial_x u_i = D_{ij} \ u_j$$ where the right-hand side is a matrix-vector product, and the Einstein summation convention applies. End of explanation ################################################################# # IMPLEMENT THE CHEBYSHEV DERIVATIVE MATRIX METHOD HERE! ################################################################# Explanation: Exercise 1 Define a python function call "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$ End of explanation ################################################################# # IMPLEMENT YOUR SOLUTION HERE! ################################################################# Explanation: Exercise 2 Calculate the numerical derivative by applying the differentiation matrix $D_{ij}$. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the Chebyshev collocation points. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact? End of explanation ################################################################# # PLOT YOUR SOLUTION HERE! ################################################################# Explanation: Exercise 3 Now that the numerical derivative is available, we can visually inspect our results. 
Make a plot of both the analytical and numerical derivatives, together with the difference error. End of explanation
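Since the exercise cells above are left as placeholders, here is a minimal sketch of get_cheby_matrix(nx) assembled directly from the $D_{ij}$ formulas quoted in the description. The sign of the i = j = 0 corner entry is not listed in the text and is assumed here to be +(2N^2+1)/6, mirroring the i = j = N case; the last few lines are only a quick polynomial check, not part of the exercise:

import numpy as np

def get_cheby_matrix(nx):
    # Chebyshev collocation points x_i = cos(i*pi/nx), i = 0, ..., nx
    x = np.cos(np.pi * np.arange(nx + 1) / nx)
    c = np.ones(nx + 1)
    c[0] = c[nx] = 2.0
    D = np.zeros((nx + 1, nx + 1))
    # off-diagonal entries: D_ij = (c_i / c_j) * (-1)**(i + j) / (x_i - x_j)
    for i in range(nx + 1):
        for j in range(nx + 1):
            if i != j:
                D[i, j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
    # interior diagonal entries: D_ii = -x_i / (2 * (1 - x_i**2))
    for i in range(1, nx):
        D[i, i] = -0.5 * x[i] / (1.0 - x[i] ** 2)
    D[nx, nx] = -(2.0 * nx ** 2 + 1.0) / 6.0
    D[0, 0] = (2.0 * nx ** 2 + 1.0) / 6.0  # assumed sign, not given in the text
    return D

# quick check: the matrix should differentiate a low-order polynomial exactly
nx = 16
D = get_cheby_matrix(nx)
xc = np.cos(np.pi * np.arange(nx + 1) / nx)
print(np.max(np.abs(np.dot(D, xc ** 2) - 2.0 * xc)))  # exact up to round-off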
9,295
Given the following text description, write Python code to implement the functionality described below step by step Description: Train a gesture recognition model for microcontroller use This notebook demonstrates how to train a 20kb gesture recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the magic_wand example application. The model is designed to be used with Google Colaboratory. <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step1: Prepare the data Next, we'll download the data and extract it into the expected location within the training scripts' directory. Step2: We'll then run the scripts that split the data into training, validation, and test sets. Step3: Load TensorBoard Now, we set up TensorBoard so that we can graph our accuracy and loss as training proceeds. Step4: Begin training The following cell will begin the training process. Training will take around 5 minutes on a GPU runtime. You'll see the metrics in TensorBoard after a few epochs. Step5: Create a C source file The train.py script writes a model, model.tflite, to the training scripts' directory. In the following cell, we convert this model into a C++ source file we can use with TensorFlow Lite for Microcontrollers.
Python Code: # Clone the repository from GitHub !git clone --depth 1 -q https://github.com/tensorflow/tensorflow # Copy the training scripts into our workspace !cp -r tensorflow/tensorflow/lite/micro/examples/magic_wand/train train Explanation: Train a gesture recognition model for microcontroller use This notebook demonstrates how to train a 20kb gesture recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the magic_wand example application. The model is designed to be used with Google Colaboratory. <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Training is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and selecting GPU. Training will take around 5 minutes on a GPU runtime. Configure dependencies Run the following cell to ensure the correct version of TensorFlow is used. We'll also clone the TensorFlow repository, which contains the training scripts, and copy them into our workspace. End of explanation # Download the data we will use to train the model !wget http://download.tensorflow.org/models/tflite/magic_wand/data.tar.gz # Extract the data into the train directory !tar xvzf data.tar.gz -C train 1>/dev/null Explanation: Prepare the data Next, we'll download the data and extract it into the expected location within the training scripts' directory. End of explanation # The scripts must be run from within the train directory %cd train # Prepare the data !python data_prepare.py # Split the data by person !python data_split_person.py Explanation: We'll then run the scripts that split the data into training, validation, and test sets. End of explanation # Load TensorBoard %load_ext tensorboard %tensorboard --logdir logs/scalars Explanation: Load TensorBoard Now, we set up TensorBoard so that we can graph our accuracy and loss as training proceeds. End of explanation !python train.py --model CNN --person true Explanation: Begin training The following cell will begin the training process. Training will take around 5 minutes on a GPU runtime. You'll see the metrics in TensorBoard after a few epochs. End of explanation # Install xxd if it is not available !apt-get -qq install xxd # Save the file as a C source file !xxd -i model.tflite > /content/model.cc # Print the source file !cat /content/model.cc Explanation: Create a C source file The train.py script writes a model, model.tflite, to the training scripts' directory. In the following cell, we convert this model into a C++ source file we can use with TensorFlow Lite for Microcontrollers. End of explanation
9,296
Given the following text description, write Python code to implement the functionality described below step by step Description: Overview The goal of this tutorial is to provide an example of the use of SciPy. SciPy is a collection of many different algorithms, so there's no way we can cover everything here. For more information, try looking at the Step1: Let's create some data, using normally distributed locations. Step2: Now let's create some more data to analyze. Step3: So we have a messy dataset, and we'd like to reduce the number of points. For this exercise, let's pick points and clear out the radius around them. We can do this by favoring certain points; in this case, we'll favor those with higher strength values.
Python Code: # Set-up to have matplotlib use its IPython notebook backend %matplotlib inline # Convention for import of the pyplot interface import matplotlib.pyplot as plt import numpy as np Explanation: Overview The goal of this tutorial is to provide an example of the use of SciPy. SciPy is a collection of many different algorihtms, so there's no way we can cover everything here. For more information, try looking at the: - SciPy Reference Guide - SciPy Lectures SciPy is a library that wraps general-purpose, scientific algorithms. These algorithms are frequently written in FORTRAN, so SciPy gives you the ability to work with these performant algorithms without dealing with the compiled languages. Integration Optimization Spatial Algorithms ODE Solvers Interpolation Statistics Linear Algebra Special Functions Signal Processing FFT An Example Using Spatial This example walks through using the spatial algorithms library in SciPy to reduce some point data. End of explanation # Create some example data import scipy.stats # Initialize the RandomState so that this is repeatable rs = np.random.RandomState(seed=20170122) # Set up the distribution dist = scipy.stats.norm(loc=5, scale=2) # Request a bunch of random values from this distribution x, y = dist.rvs(size=(2, 100000), random_state=rs) # Go ahead and explicitly create a figure and an axes fig, ax = plt.subplots(figsize=(10, 6), dpi=100) # Do a scatter plot of our locations ax.scatter(x, y) Explanation: Let's create some data, using normally distributed locations. End of explanation # Some exponentially distributed values to make things interesting size = scipy.stats.expon(loc=10, scale=10).rvs(size=100000, random_state=rs) strength = scipy.stats.expon(loc=5).rvs(size=100000, random_state=rs) # Make the scatter plot more complex--change the color of markers by strength, # and scale their size by the size variable fig, ax = plt.subplots(figsize=(10, 6), dpi=100) # c specifies what to color by, s what to scale by ax.scatter(x, y, c=strength, s=size**2, alpha=0.7) Explanation: Now let's create some more data to analyze. End of explanation import scipy.spatial # Put the x and y values together--so that this is (N, 2) xy = np.vstack((x, y)).T # Create a mask--all True values initially. We keep values where this is True. keep = np.ones(x.shape, dtype=np.bool) # Get the indices that would sort the strength array--and can be used to sort # the point locations by strength sorted_indices = np.argsort(strength)[::-1] # Create a kdTree--a data structure that makes it easy to do search in nD space tree = scipy.spatial.cKDTree(xy) # Loop over all the potential points for sort_index in sorted_indices: # Check if this point is being kept if keep[sort_index]: # Use the kdTree to find the neighbors around the current point neighbors = tree.query_ball_point(xy[sort_index], r=1) # Eliminate the points within that radius--but not the current point for index in neighbors: if index != sort_index: keep[index] = False # Make the scatter plot more complex--change the color of markers by strength, # and scale their size by the size variable fig, ax = plt.subplots(figsize=(10, 6), dpi=100) # c specifies what to color by, s what to scale by ax.scatter(x[keep], y[keep], c=strength[keep], s=size[keep]**2, alpha=0.7) Explanation: So we have a messy dataset, and we'd like to pull reduce the number of points. For this exercise, let's pick points and clear out the radius around them. 
We can do this by favoring certain points; in this case, we'll favor those with higher strength values. End of explanation
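For reference, the query_ball_point call that drives the thinning loop above returns the indices of all points within the given radius of the query location. A tiny toy example with three made-up points:

import numpy as np
import scipy.spatial

pts = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 0.0]])
tree = scipy.spatial.cKDTree(pts)
# points 0 and 1 lie within radius 1 of the origin; point 2 does not
print(tree.query_ball_point([0.0, 0.0], r=1.0))  # -> [0, 1] (index order may vary)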
9,297
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Convolutional Networks So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead. First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset. Step2: Convolution Step4: Aside Step5: Convolution Step6: Max pooling Step7: Max pooling Step8: Fast layers Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory Step9: Convolutional "sandwich" layers Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks. Step10: Three-layer ConvNet Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug Step11: Gradient check After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer. Note Step12: Overfit small data A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy. Step13: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting Step14: Train the net By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set Step15: Visualize Filters You can visualize the first-layer convolutional filters from the trained network by running the following Step16: Spatial Batch Normalization We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization." Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map. If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. 
Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W. Spatial batch normalization Step17: Spatial batch normalization
Python Code: # As usual, a bit of setup from __future__ import print_function import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.cnn import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient from cs231n.layers import * from cs231n.fast_layers import * from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) Explanation: Convolutional Networks So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead. First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset. End of explanation x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.array([[[[-0.08759809, -0.10987781], [-0.18387192, -0.2109216 ]], [[ 0.21027089, 0.21661097], [ 0.22847626, 0.23004637]], [[ 0.50813986, 0.54309974], [ 0.64082444, 0.67101435]]], [[[-0.98053589, -1.03143541], [-1.19128892, -1.24695841]], [[ 0.69108355, 0.66880383], [ 0.59480972, 0.56776003]], [[ 2.36270298, 2.36904306], [ 2.38090835, 2.38247847]]]]) # Compare your output to ours; difference should be around 2e-8 print('Testing conv_forward_naive') print('difference: ', rel_error(out, correct_out)) Explanation: Convolution: Naive forward pass The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear. You can test your implementation by running the following: End of explanation from scipy.misc import imread, imresize kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg') # kitten is wide, and puppy is already square d = kitten.shape[1] - kitten.shape[0] kitten_cropped = kitten[:, d//2:-d//2, :] img_size = 200 # Make this smaller if it runs too slow x = np.zeros((2, 3, img_size, img_size)) x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1)) x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1)) # Set up a convolutional weights holding 2 filters, each 3x3 w = np.zeros((2, 3, 3, 3)) # The first filter converts the image to grayscale. # Set up the red, green, and blue channels of the filter. 
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]] w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]] w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]] # Second filter detects horizontal edges in the blue channel. w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]] # Vector of biases. We don't need any bias for the grayscale # filter, but for the edge detection filter we want to add 128 # to each output so that nothing is negative. b = np.array([0, 128]) # Compute the result of convolving each input in x with each filter in w, # offsetting by b, and storing the results in out. out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1}) def imshow_noax(img, normalize=True): Tiny helper to show images as uint8 and remove axis labels if normalize: img_max, img_min = np.max(img), np.min(img) img = 255.0 * (img - img_min) / (img_max - img_min) plt.imshow(img.astype('uint8')) plt.gca().axis('off') # Show the original images and the results of the conv operation plt.subplot(2, 3, 1) imshow_noax(puppy, normalize=False) plt.title('Original image') plt.subplot(2, 3, 2) imshow_noax(out[0, 0]) plt.title('Grayscale') plt.subplot(2, 3, 3) imshow_noax(out[0, 1]) plt.title('Edges') plt.subplot(2, 3, 4) imshow_noax(kitten_cropped, normalize=False) plt.subplot(2, 3, 5) imshow_noax(out[1, 0]) plt.subplot(2, 3, 6) imshow_noax(out[1, 1]) plt.show() Explanation: Aside: Image processing via convolutions As fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check. End of explanation np.random.seed(231) x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 1, 'pad': 1} dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout) out, cache = conv_forward_naive(x, w, b, conv_param) dx, dw, db = conv_backward_naive(dout, cache) # Your errors should be around 1e-8' print('Testing conv_backward_naive function') print('dx error: ', rel_error(dx, dx_num)) print('dw error: ', rel_error(dw, dw_num)) print('db error: ', rel_error(db, db_num)) Explanation: Convolution: Naive backward pass Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency. When you are done, run the following to check your backward pass with a numeric gradient check. 
End of explanation x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], [[-0.14526316, -0.13052632], [-0.08631579, -0.07157895]], [[-0.02736842, -0.01263158], [ 0.03157895, 0.04631579]]], [[[ 0.09052632, 0.10526316], [ 0.14947368, 0.16421053]], [[ 0.20842105, 0.22315789], [ 0.26736842, 0.28210526]], [[ 0.32631579, 0.34105263], [ 0.38526316, 0.4 ]]]]) # Compare your output with ours. Difference should be around 1e-8. print('Testing max_pool_forward_naive function:') print('difference: ', rel_error(out, correct_out)) Explanation: Max pooling: Naive forward Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency. Check your implementation by running the following: End of explanation np.random.seed(231) x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_backward_naive(dout, cache) # Your error should be around 1e-12 print('Testing max_pool_backward_naive function:') print('dx error: ', rel_error(dx, dx_num)) Explanation: Max pooling: Naive backward Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency. Check your implementation with numeric gradient checking by running the following: End of explanation from cs231n.fast_layers import conv_forward_fast, conv_backward_fast from time import time np.random.seed(231) x = np.random.randn(100, 3, 31, 31) w = np.random.randn(25, 3, 3, 3) b = np.random.randn(25,) dout = np.random.randn(100, 25, 16, 16) conv_param = {'stride': 2, 'pad': 1} t0 = time() out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param) t1 = time() out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param) t2 = time() print('Testing conv_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('Difference: ', rel_error(out_naive, out_fast)) t0 = time() dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive) t1 = time() dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast) t2 = time() print('\nTesting conv_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('dx difference: ', rel_error(dx_naive, dx_fast)) print('dw difference: ', rel_error(dw_naive, dw_fast)) print('db difference: ', rel_error(db_naive, db_fast)) from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast np.random.seed(231) x = np.random.randn(100, 3, 32, 32) dout = np.random.randn(100, 3, 16, 16) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} t0 = time() out_naive, cache_naive = max_pool_forward_naive(x, pool_param) t1 = time() out_fast, cache_fast = max_pool_forward_fast(x, pool_param) t2 = time() print('Testing pool_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('fast: %fs' % (t2 - t1)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('difference: ', 
rel_error(out_naive, out_fast)) t0 = time() dx_naive = max_pool_backward_naive(dout, cache_naive) t1 = time() dx_fast = max_pool_backward_fast(dout, cache_fast) t2 = time() print('\nTesting pool_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('dx difference: ', rel_error(dx_naive, dx_fast)) Explanation: Fast layers Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory: bash python setup.py build_ext --inplace The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights. NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation. You can compare the performance of the naive and fast versions of these layers by running the following: End of explanation from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward np.random.seed(231) x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param) dx, dw, db = conv_relu_pool_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout) print('Testing conv_relu_pool') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) from cs231n.layer_utils import conv_relu_forward, conv_relu_backward np.random.seed(231) x = np.random.randn(2, 3, 8, 8) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} out, cache = conv_relu_forward(x, w, b, conv_param) dx, dw, db = conv_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout) print('Testing conv_relu:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) Explanation: Convolutional "sandwich" layers Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. 
In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks. End of explanation model = ThreeLayerConvNet() N = 50 X = np.random.randn(N, 3, 32, 32) y = np.random.randint(10, size=N) loss, grads = model.loss(X, y) print('Initial loss (no regularization): ', loss) model.reg = 0.5 loss, grads = model.loss(X, y) print('Initial loss (with regularization): ', loss) Explanation: Three-layer ConvNet Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug: Sanity check loss After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up. End of explanation num_inputs = 2 input_dim = (3, 16, 16) reg = 0.0 num_classes = 10 np.random.seed(231) X = np.random.randn(num_inputs, *input_dim) y = np.random.randint(num_classes, size=num_inputs) model = ThreeLayerConvNet(num_filters=3, filter_size=3, input_dim=input_dim, hidden_dim=7, dtype=np.float64) loss, grads = model.loss(X, y) for param_name in sorted(grads): f = lambda _: model.loss(X, y)[0] param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6) e = rel_error(param_grad_num, grads[param_name]) print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))) Explanation: Gradient check After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to 1e-2. End of explanation np.random.seed(231) num_train = 100 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } model = ThreeLayerConvNet(weight_scale=1e-2) solver = Solver(model, small_data, num_epochs=15, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=1) solver.train() Explanation: Overfit small data A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy. 
End of explanation plt.subplot(2, 1, 1) plt.plot(solver.loss_history, 'o') plt.xlabel('iteration') plt.ylabel('loss') plt.subplot(2, 1, 2) plt.plot(solver.train_acc_history, '-o') plt.plot(solver.val_acc_history, '-o') plt.legend(['train', 'val'], loc='upper left') plt.xlabel('epoch') plt.ylabel('accuracy') plt.show() Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting: End of explanation model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001) solver = Solver(model, data, num_epochs=1, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=20) solver.train() Explanation: Train the net By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set: End of explanation from cs231n.vis_utils import visualize_grid grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1)) plt.imshow(grid.astype('uint8')) plt.axis('off') plt.gcf().set_size_inches(5, 5) plt.show() Explanation: Visualize Filters You can visualize the first-layer convolutional filters from the trained network by running the following: End of explanation np.random.seed(231) # Check the training-time forward pass by checking means and variances # of features both before and after spatial batch normalization N, C, H, W = 2, 3, 4, 5 x = 4 * np.random.randn(N, C, H, W) + 10 print('Before spatial batch normalization:') print(' Shape: ', x.shape) print(' Means: ', x.mean(axis=(0, 2, 3))) print(' Stds: ', x.std(axis=(0, 2, 3))) # Means should be close to zero and stds close to one gamma, beta = np.ones(C), np.zeros(C) bn_param = {'mode': 'train'} out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) print('After spatial batch normalization:') print(' Shape: ', out.shape) print(' Means: ', out.mean(axis=(0, 2, 3))) print(' Stds: ', out.std(axis=(0, 2, 3))) # Means should be close to beta and stds close to gamma gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8]) out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) print('After spatial batch normalization (nontrivial gamma, beta):') print(' Shape: ', out.shape) print(' Means: ', out.mean(axis=(0, 2, 3))) print(' Stds: ', out.std(axis=(0, 2, 3))) np.random.seed(231) # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. N, C, H, W = 10, 4, 11, 12 bn_param = {'mode': 'train'} gamma = np.ones(C) beta = np.zeros(C) for t in range(50): x = 2.3 * np.random.randn(N, C, H, W) + 13 spatial_batchnorm_forward(x, gamma, beta, bn_param) bn_param['mode'] = 'test' x = 2.3 * np.random.randn(N, C, H, W) + 13 a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. print('After spatial batch normalization (test-time):') print(' means: ', a_norm.mean(axis=(0, 2, 3))) print(' stds: ', a_norm.std(axis=(0, 2, 3))) Explanation: Spatial Batch Normalization We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization." 
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map. If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W. Spatial batch normalization: forward In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following: End of explanation np.random.seed(231) N, C, H, W = 2, 3, 4, 5 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) bn_param = {'mode': 'train'} fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) _, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache) print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) Explanation: Spatial batch normalization: backward In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check: End of explanation
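The notebook asks for spatial_batchnorm_forward and spatial_batchnorm_backward to be written in cs231n/layers.py but does not show them. One common way to realise the reshaping idea described above, not necessarily the reference solution, is to fold the N, H and W axes together and reuse the vanilla batchnorm_forward / batchnorm_backward from the fully-connected part of the assignment, which are assumed to be available in the same file:

def spatial_batchnorm_forward(x, gamma, beta, bn_param):
    # statistics are taken over N, H and W for each of the C channels,
    # exactly as described in the text above
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)            # (N*H*W, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)   # back to (N, C, H, W)
    return out, cache

def spatial_batchnorm_backward(dout, cache):
    # mirror the forward reshape before calling the vanilla backward pass
    N, C, H, W = dout.shape
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)
    return dx, dgamma, dbeta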
9,298
Given the following text description, write Python code to implement the functionality described below step by step Description: Simple Reinforcement Learning in Tensorflow Part 2-b Step2: The Policy-Based Agent Step3: Training the Agent
Python Code: import tensorflow as tf import tensorflow.contrib.slim as slim import numpy as np import gym import matplotlib.pyplot as plt %matplotlib inline try: xrange = xrange except: xrange = range env = gym.make('CartPole-v0') Explanation: Simple Reinforcement Learning in Tensorflow Part 2-b: Vanilla Policy Gradient Agent This tutorial contains a simple example of how to build a policy-gradient based agent that can solve the CartPole problem. For more information, see this Medium post. This implementation is generalizable to more than two actions. For more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, DeepRL-Agents. End of explanation gamma = 0.99 def discount_rewards(r): take 1D float array of rewards and compute discounted reward discounted_r = np.zeros_like(r) running_add = 0 for t in reversed(xrange(0, r.size)): running_add = running_add * gamma + r[t] discounted_r[t] = running_add return discounted_r class agent(): def __init__(self, lr, s_size,a_size,h_size): #These lines established the feed-forward part of the network. The agent takes a state and produces an action. self.state_in= tf.placeholder(shape=[None,s_size],dtype=tf.float32) hidden = slim.fully_connected(self.state_in,h_size,biases_initializer=None,activation_fn=tf.nn.relu) self.output = slim.fully_connected(hidden,a_size,activation_fn=tf.nn.softmax,biases_initializer=None) self.chosen_action = tf.argmax(self.output,1) #The next six lines establish the training proceedure. We feed the reward and chosen action into the network #to compute the loss, and use it to update the network. self.reward_holder = tf.placeholder(shape=[None],dtype=tf.float32) self.action_holder = tf.placeholder(shape=[None],dtype=tf.int32) self.indexes = tf.range(0, tf.shape(self.output)[0]) * tf.shape(self.output)[1] + self.action_holder self.responsible_outputs = tf.gather(tf.reshape(self.output, [-1]), self.indexes) self.loss = -tf.reduce_mean(tf.log(self.responsible_outputs)*self.reward_holder) tvars = tf.trainable_variables() self.gradient_holders = [] for idx,var in enumerate(tvars): placeholder = tf.placeholder(tf.float32,name=str(idx)+'_holder') self.gradient_holders.append(placeholder) self.gradients = tf.gradients(self.loss,tvars) optimizer = tf.train.AdamOptimizer(learning_rate=lr) self.update_batch = optimizer.apply_gradients(zip(self.gradient_holders,tvars)) Explanation: The Policy-Based Agent End of explanation tf.reset_default_graph() #Clear the Tensorflow graph. myAgent = agent(lr=1e-2,s_size=4,a_size=2,h_size=8) #Load the agent. total_episodes = 5000 #Set total number of episodes to train agent on. max_ep = 999 update_frequency = 5 init = tf.global_variables_initializer() # Launch the tensorflow graph with tf.Session() as sess: sess.run(init) i = 0 total_reward = [] total_length = [] gradBuffer = sess.run(tf.trainable_variables()) for ix,grad in enumerate(gradBuffer): gradBuffer[ix] = grad * 0 while i < total_episodes: s = env.reset() running_reward = 0 ep_history = [] for j in range(max_ep): #Probabilistically pick an action given our network outputs. a_dist = sess.run(myAgent.output,feed_dict={myAgent.state_in:[s]}) a = np.random.choice(a_dist[0],p=a_dist[0]) a = np.argmax(a_dist == a) s1,r,d,_ = env.step(a) #Get our reward for taking an action given a bandit. ep_history.append([s,a,r,s1]) s = s1 running_reward += r if d == True: #Update the network. 
ep_history = np.array(ep_history) ep_history[:,2] = discount_rewards(ep_history[:,2]) feed_dict={myAgent.reward_holder:ep_history[:,2], myAgent.action_holder:ep_history[:,1],myAgent.state_in:np.vstack(ep_history[:,0])} grads = sess.run(myAgent.gradients, feed_dict=feed_dict) for idx,grad in enumerate(grads): gradBuffer[idx] += grad if i % update_frequency == 0 and i != 0: feed_dict= dictionary = dict(zip(myAgent.gradient_holders, gradBuffer)) _ = sess.run(myAgent.update_batch, feed_dict=feed_dict) for ix,grad in enumerate(gradBuffer): gradBuffer[ix] = grad * 0 total_reward.append(running_reward) total_length.append(j) break #Update our running tally of scores. if i % 100 == 0: print(np.mean(total_reward[-100:])) i += 1 Explanation: Training the Agent End of explanation
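As a quick sanity check of the discount_rewards helper defined above: with gamma = 0.99 and three unit rewards, the running sum works backwards through the episode, giving d[2] = 1.0, d[1] = 1.0 + 0.99 * 1.0 = 1.99 and d[0] = 1.0 + 0.99 * 1.99 = 2.9701.

import numpy as np
print(discount_rewards(np.array([1.0, 1.0, 1.0])))  # -> approximately [2.9701, 1.99, 1.0]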
9,299
Given the following text description, write Python code to implement the functionality described below step by step Description: Branching GP Regression on synthetic data Alexis Boukouvalas, 2017 Branching GP regression with Gaussian noise on the hematopoiesis data described in the paper "BGP Step1: Load the data Monocle has already been run on the data. The first column contains the state assigned by the DDRTree algorithm to each cell. The second column is the gene time. All other columns are the 40 genes. The first 10 branch early, then 20 branch late and 10 do not branch. Step2: Plot the data Step3: Run the BGP model Run the script runsyntheticData.py to obtain a pickle file with results. This script can take ~10 to 20 minutes depending on your hardware. It performs a gene-by-gene branch model fitting. Plot BGP posterior fit Plot posterior fit. Step4: We can also plot with the predictive uncertainty of the GP. The dashed lines are the 95% confidence intervals. Step5: Plot posterior Plotting the posterior alongside the true branching location.
Python Code: import pickle import numpy as np import pandas as pd from matplotlib import pyplot as plt from BranchedGP import VBHelperFunctions as bplot plt.style.use("ggplot") %matplotlib inline Explanation: Branching GP Regression on synthetic data Alexis Boukouvalas, 2017 Branching GP regression with Gaussian noise on the hematopoiesis data described in the paper "BGP: Gaussian processes for identifying branching dynamics in single cell data". This notebook shows how to build a BGP model and plot the posterior model fit and posterior branching times. End of explanation datafile = "syntheticdata/synthetic20.csv" data = pd.read_csv(datafile, index_col=[0]) G = data.shape[1] - 2 # all data - time columns - state column Y = data.iloc[:, 2:] trueBranchingTimes = np.array([float(Y.columns[i][-3:]) for i in range(G)]) data.head() Explanation: Load the data Monocle has already been run on the data. The first columns contains the state assigned by the DDRTree algorithm to each cell. Second column is the gene time. All other columns are the 40 genes. The first 10 branch early, then 20 branch late and 10 do not branch. End of explanation f, ax = plt.subplots(5, 8, figsize=(10, 8)) ax = ax.flatten() for i in range(G): for s in np.unique(data["MonocleState"]): idxs = s == data["MonocleState"].values ax[i].scatter(data["Time"].loc[idxs], Y.iloc[:, i].loc[idxs]) ax[i].set_title(Y.columns[i]) ax[i].set_yticklabels([]) ax[i].set_xticklabels([]) f.suptitle("Branching genes, location=1.1 indicates no branching") Explanation: Plot the data End of explanation r = pickle.load(open("syntheticdata/syntheticDataRun.p", "rb")) r.keys() # plot fit for a gene g = 0 GPy = Y.iloc[:, g][:, None] GPt = data["Time"].values globalBranching = data["MonocleState"].values.astype(int) bmode = r["Bsearch"][np.argmax(r["gpmodels"][g]["loglik"])] print("True branching time", trueBranchingTimes[g], "BGP Maximum at b=%.2f" % bmode) _ = bplot.PlotBGPFit(GPy, GPt, r["Bsearch"], r["gpmodels"][g]) Explanation: Run the BGP model Run script runsyntheticData.py to obtain a pickle file with results. This script can take ~10 to 20 minutes depending on your hardware. It performs a gene-by-gene branch model fitting. Plot BGP posterior fit Plot posterior fit. End of explanation g = 0 bmode = r["Bsearch"][np.argmax(r["gpmodels"][g]["loglik"])] pred = r["gpmodels"][g]["prediction"] # prediction object from GP _ = bplot.plotBranchModel( bmode, GPt, GPy, pred["xtest"], pred["mu"], pred["var"], r["gpmodels"][g]["Phi"], fPlotPhi=True, fColorBar=True, fPlotVar=True, ) Explanation: We can also plot with the predictive uncertainty of the GP. The dashed lines are the 95% confidence intervals. End of explanation fs, ax = plt.subplots(1, 1, figsize=(5, 5)) for g in range(G): bmode = r["Bsearch"][np.argmax(r["gpmodels"][g]["loglik"])] ax.scatter(bmode, g, s=100, color="b") # BGP mode ax.scatter(trueBranchingTimes[g] + 0.05, g, s=100, color="k") # True Explanation: Plot posterior Plotting the posterior alongside the true branching location. End of explanation